id
stringlengths 9
10
| submitter
stringlengths 5
47
⌀ | authors
stringlengths 5
1.72k
| title
stringlengths 11
234
| comments
stringlengths 1
491
⌀ | journal-ref
stringlengths 4
396
⌀ | doi
stringlengths 13
97
⌀ | report-no
stringlengths 4
138
⌀ | categories
stringclasses 1
value | license
stringclasses 9
values | abstract
stringlengths 29
3.66k
| versions
listlengths 1
21
| update_date
int64 1,180B
1,718B
| authors_parsed
sequencelengths 1
98
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.08028 | Rasmus Berg Palm | Rasmus Berg Palm, Ulrich Paquet, Ole Winther | Recurrent Relational Networks | Accepted at NIPS 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is concerned with learning to solve tasks that require a chain of
interdependent steps of relational inference, like answering complex questions
about the relationships between objects, or solving puzzles where the smaller
elements of a solution mutually constrain each other. We introduce the
recurrent relational network, a general purpose module that operates on a graph
representation of objects. As a generalization of Santoro et al. [2017]'s
relational network, it can augment any neural network model with the capacity
to do many-step relational reasoning. We achieve state of the art results on
the bAbI textual question-answering dataset with the recurrent relational
network, consistently solving 20/20 tasks. As bAbI is not particularly
challenging from a relational reasoning point of view, we introduce
Pretty-CLEVR, a new diagnostic dataset for relational reasoning. In the
Pretty-CLEVR set-up, we can vary the question to control for the number of
relational reasoning steps that are required to obtain the answer. Using
Pretty-CLEVR, we probe the limitations of multi-layer perceptrons, relational
and recurrent relational networks. Finally, we show how recurrent relational
networks can learn to solve Sudoku puzzles from supervised training data, a
challenging task requiring upwards of 64 steps of relational reasoning. We
achieve state-of-the-art results amongst comparable methods by solving 96.6% of
the hardest Sudoku puzzles.
| [
{
"version": "v1",
"created": "Tue, 21 Nov 2017 20:34:48 GMT"
},
{
"version": "v2",
"created": "Mon, 28 May 2018 11:44:06 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Oct 2018 07:44:25 GMT"
},
{
"version": "v4",
"created": "Thu, 29 Nov 2018 15:11:23 GMT"
}
] | 1,543,536,000,000 | [
[
"Palm",
"Rasmus Berg",
""
],
[
"Paquet",
"Ulrich",
""
],
[
"Winther",
"Ole",
""
]
] |
1711.08101 | Levi Lelis | Rubens O. Moraes and Levi H. S. Lelis | Asymmetric Action Abstractions for Multi-Unit Control in Adversarial
Real-Time Games | AAAI'18 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action abstractions restrict the number of legal actions available during
search in multi-unit real-time adversarial games, thus allowing algorithms to
focus their search on a set of promising actions. Optimal strategies derived
from un-abstracted spaces are guaranteed to be no worse than optimal strategies
derived from action-abstracted spaces. In practice, however, due to real-time
constraints and the state space size, one is only able to derive good
strategies in un-abstracted spaces in small-scale games. In this paper we
introduce search algorithms that use an action abstraction scheme we call
asymmetric abstraction. Asymmetric abstractions retain the un-abstracted
spaces' theoretical advantage over regularly abstracted spaces while still
allowing the search algorithms to derive effective strategies, even in
large-scale games. Empirical results on combat scenarios that arise in a
real-time strategy game show that our search algorithms are able to
substantially outperform state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2017 01:35:29 GMT"
}
] | 1,511,395,200,000 | [
[
"Moraes",
"Rubens O.",
""
],
[
"Lelis",
"Levi H. S.",
""
]
] |
1711.08378 | Matthew Botvinick | M. Botvinick, D.G.T. Barrett, P. Battaglia, N. de Freitas, D. Kumaran,
J. Z Leibo, T. Lillicrap, J. Modayil, S. Mohamed, N.C. Rabinowitz, D. J.
Rezende, A. Santoro, T. Schaul, C. Summerfield, G. Wayne, T. Weber, D.
Wierstra, S. Legg and D. Hassabis | Building Machines that Learn and Think for Themselves: Commentary on
Lake et al., Behavioral and Brain Sciences, 2017 | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We agree with Lake and colleagues on their list of key ingredients for
building humanlike intelligence, including the idea that model-based reasoning
is essential. However, we favor an approach that centers on one additional
ingredient: autonomy. In particular, we aim toward agents that can both build
and exploit their own internal models, with minimal human hand-engineering. We
believe an approach centered on autonomous learning has the greatest chance of
success as we scale toward real-world complexity, tackling domains for which
ready-made formal models are not available. Here we survey several important
examples of the progress that has been made toward building autonomous agents
with humanlike abilities, and highlight some outstanding challenges.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2017 16:35:29 GMT"
}
] | 1,511,395,200,000 | [
[
"Botvinick",
"M.",
""
],
[
"Barrett",
"D. G. T.",
""
],
[
"Battaglia",
"P.",
""
],
[
"de Freitas",
"N.",
""
],
[
"Kumaran",
"D.",
""
],
[
"Leibo",
"J. Z",
""
],
[
"Lillicrap",
"T.",
""
],
[
"Modayil",
"J.",
""
],
[
"Mohamed",
"S.",
""
],
[
"Rabinowitz",
"N. C.",
""
],
[
"Rezende",
"D. J.",
""
],
[
"Santoro",
"A.",
""
],
[
"Schaul",
"T.",
""
],
[
"Summerfield",
"C.",
""
],
[
"Wayne",
"G.",
""
],
[
"Weber",
"T.",
""
],
[
"Wierstra",
"D.",
""
],
[
"Legg",
"S.",
""
],
[
"Hassabis",
"D.",
""
]
] |
1711.08819 | Piotr Mirowski | Kory Wallace Mathewson and Piotr Mirowski | Improvised Comedy as a Turing Test | 4 pages, 3 figures. Presented at 31st Conference on Neural
Information Processing Systems 2017. Workshop on Machine Learning for
Creativity and Design | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The best improvisational theatre actors can make any scene partner, of any
skill level or ability, appear talented and proficient in the art form, and
thus "make them shine". To challenge this improvisational paradigm, we built an
artificial intelligence (AI) trained to perform live shows alongside human
actors for human audiences. Over the course of 30 performances to a combined
audience of almost 3000 people, we have refined theatrical games which involve
combinations of human and (at times, adversarial) AI actors. We have developed
specific scene structures to include audience participants in interesting ways.
Finally, we developed a complete show structure that submitted the audience to
a Turing test and observed their suspension of disbelief, which we believe is
key for human/non-human theatre co-creation.
| [
{
"version": "v1",
"created": "Thu, 23 Nov 2017 20:13:34 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Dec 2017 00:25:58 GMT"
}
] | 1,512,432,000,000 | [
[
"Mathewson",
"Kory Wallace",
""
],
[
"Mirowski",
"Piotr",
""
]
] |
1711.09142 | Zhuo Xu | Zhuo Xu, Haonan Chang, and Masayoshi Tomizuka | Cascade Attribute Learning Network | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the cascade attribute learning network (CALNet), which can learn
attributes in a control task separately and assemble them together. Our
contribution is twofold: first we propose attribute learning in reinforcement
learning (RL). Attributes used to be modeled using constraint functions or
terms in the objective function, making it hard to transfer. Attribute
learning, on the other hand, models these task properties as modules in the
policy network. We also propose using novel cascading compensative networks in
the CALNet to learn and assemble attributes. Using the CALNet, one can zero
shoot an unseen task by separately learning all its attributes, and assembling
the attribute modules. We have validated the capacity of our model on a wide
variety of control problems with attributes in time, position, velocity and
acceleration phases.
| [
{
"version": "v1",
"created": "Fri, 24 Nov 2017 21:12:52 GMT"
}
] | 1,511,827,200,000 | [
[
"Xu",
"Zhuo",
""
],
[
"Chang",
"Haonan",
""
],
[
"Tomizuka",
"Masayoshi",
""
]
] |
1711.09186 | Xinyang Deng | Xinyang Deng and Wen Jiang | D numbers theory based game-theoretic framework in adversarial decision
making under fuzzy environment | 59 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial decision making is a particular type of decision making problem
where the gain a decision maker obtains as a result of his decisions is
affected by the actions taken by others. Representation of alternatives'
evaluations and methods to find the optimal alternative are two important
aspects in the adversarial decision making. The aim of this study is to develop
a general framework for solving the adversarial decision making issue under
uncertain environment. By combining fuzzy set theory, game theory and D numbers
theory (DNT), a DNT based game-theoretic framework for adversarial decision
making under fuzzy environment is presented. Within the proposed framework or
model, fuzzy set theory is used to model the uncertain evaluations of decision
makers to alternatives, the non-exclusiveness among fuzzy evaluations are taken
into consideration by using DNT, and the conflict of interests among decision
makers is considered in a two-person non-constant sum game theory perspective.
An illustrative application is given to demonstrate the effectiveness of the
proposed model. This work, on one hand, has developed an effective framework
for adversarial decision making under fuzzy environment; One the other hand, it
has further improved the basis of DNT as a generalization of Dempster-Shafer
theory for uncertainty reasoning.
| [
{
"version": "v1",
"created": "Sat, 25 Nov 2017 04:16:43 GMT"
}
] | 1,511,827,200,000 | [
[
"Deng",
"Xinyang",
""
],
[
"Jiang",
"Wen",
""
]
] |
1711.09401 | Long Ouyang | Long Ouyang and Michael C. Frank | Pedagogical learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common assumption in machine learning is that training data are i.i.d.
samples from some distribution. Processes that generate i.i.d. samples are, in
a sense, uninformative---they produce data without regard to how good this data
is for learning. By contrast, cognitive science research has shown that when
people generate training data for others (i.e., teaching), they deliberately
select examples that are helpful for learning. Because the data is more
informative, learning can require less data. Interestingly, such examples are
most effective when learners know that the data were pedagogically generated
(as opposed to randomly generated). We call this pedagogical learning---when a
learner assumes that evidence comes from a helpful teacher. In this work, we
ask how pedagogical learning might work for machine learning algorithms.
Studying this question requires understanding how people actually teach complex
concepts with examples, so we conducted a behavioral study examining how people
teach regular expressions using example strings. We found that teachers'
examples contain powerful clustering structure that can greatly facilitate
learning. We then develop a model of teaching and show a proof of concept that
using this model inside of a learner can improve performance.
| [
{
"version": "v1",
"created": "Sun, 26 Nov 2017 15:17:02 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Nov 2017 22:13:42 GMT"
}
] | 1,512,345,600,000 | [
[
"Ouyang",
"Long",
""
],
[
"Frank",
"Michael C.",
""
]
] |
1711.09441 | Matteo Brunelli | Bice Cavallo and Matteo Brunelli | A general unified framework for interval pairwise comparison matrices | null | International Journal of Approximate Reasoning, 93, 178--198, 2018 | 10.1016/j.ijar.2017.11.002 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interval Pairwise Comparison Matrices have been widely used to account for
uncertain statements concerning the preferences of decision makers. Several
approaches have been proposed in the literature, such as multiplicative and
fuzzy interval matrices. In this paper, we propose a general unified approach
to Interval Pairwise Comparison Matrices, based on Abelian linearly ordered
groups. In this framework, we generalize some consistency conditions provided
for multiplicative and/or fuzzy interval pairwise comparison matrices and
provide inclusion relations between them. Then, we provide a concept of
distance between intervals that, together with a notion of mean defined over
real continuous Abelian linearly ordered groups, allows us to provide a
consistency index and an indeterminacy index. In this way, by means of suitable
isomorphisms between Abelian linearly ordered groups, we will be able to
compare the inconsistency and the indeterminacy of different kinds of Interval
Pairwise Comparison Matrices, e.g. multiplicative, additive, and fuzzy, on a
unique Cartesian coordinate system.
| [
{
"version": "v1",
"created": "Sun, 26 Nov 2017 19:15:24 GMT"
}
] | 1,511,827,200,000 | [
[
"Cavallo",
"Bice",
""
],
[
"Brunelli",
"Matteo",
""
]
] |
1711.09744 | Clemente Rubio-Manzano | Clemente Rubio-Manzano, Tomas Lermanda Senoceain | How linguistic descriptions of data can help to the teaching-learning
process in higher education, case of study: artificial intelligence | null | Journal of Intelligent & Fuzzy Systems, vol. 37, no. 6, pp.
8397-8415, 2019 | 10.3233/JIFS-190935 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence is a central topic in the computer science
curriculum. From the year 2011 a project-based learning methodology based on
computer games has been designed and implemented into the intelligence
artificial course at the University of the Bio-Bio. The project aims to develop
software-controlled agents (bots) which are programmed by using heuristic
algorithms seen during the course. This methodology allows us to obtain good
learning results, however several challenges have been founded during its
implementation.
In this paper we show how linguistic descriptions of data can help to provide
students and teachers with technical and personalized feedback about the
learned algorithms. Algorithm behavior profile and a new Turing test for
computer games bots based on linguistic modelling of complex phenomena are also
proposed in order to deal with such challenges.
In order to show and explore the possibilities of this new technology, a web
platform has been designed and implemented by one of authors and its
incorporation in the process of assessment allows us to improve the teaching
learning process.
| [
{
"version": "v1",
"created": "Mon, 27 Nov 2017 15:13:53 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Dec 2017 14:00:27 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Jan 2018 20:00:15 GMT"
}
] | 1,609,977,600,000 | [
[
"Rubio-Manzano",
"Clemente",
""
],
[
"Senoceain",
"Tomas Lermanda",
""
]
] |
1711.10241 | Mithun Chakraborty | Nawal Benabbou, Mithun Chakraborty, Vinh Ho Xuan, Jakub Sliwinski,
Yair Zick | The Price of Quota-based Diversity in Assignment Problems | null | TEAC 8.3.14 (2020) 1-32 | 10.1145/3411513 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and analyze an extension to the matching problem on a weighted
bipartite graph: Assignment with Type Constraints. The two parts of the graph
are partitioned into subsets called types and blocks; we seek a matching with
the largest sum of weights under the constraint that there is a pre-specified
cap on the number of vertices matched in every type-block pair. Our primary
motivation stems from the public housing program of Singapore, accounting for
over 70% of its residential real estate. To promote ethnic diversity within its
housing projects, Singapore imposes ethnicity quotas: each new housing
development comprises blocks of flats and each ethnicity-based group in the
population must not own more than a certain percentage of flats in a block.
Other domains using similar hard capacity constraints include matching
prospective students to schools or medical residents to hospitals. Limiting
agents' choices for ensuring diversity in this manner naturally entails some
welfare loss. One of our goals is to study the trade-off between diversity and
social welfare in such settings. We first show that, while the classic
assignment program is polynomial-time computable, adding diversity constraints
makes it computationally intractable; however, we identify a
$\tfrac{1}{2}$-approximation algorithm, as well as reasonable assumptions on
the weights that permit poly-time algorithms. Next, we provide two upper bounds
on the price of diversity -- a measure of the loss in welfare incurred by
imposing diversity constraints -- as functions of natural problem parameters.
We conclude the paper with simulations based on publicly available data from
two diversity-constrained allocation problems -- Singapore Public Housing and
Chicago School Choice -- which shed light on how the constrained maximization
as well as lottery-based variants perform in practice.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2017 11:58:54 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Dec 2017 08:54:55 GMT"
},
{
"version": "v3",
"created": "Fri, 31 Aug 2018 06:43:20 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Sep 2018 07:51:10 GMT"
},
{
"version": "v5",
"created": "Sat, 1 Dec 2018 10:53:13 GMT"
},
{
"version": "v6",
"created": "Wed, 5 Dec 2018 10:07:25 GMT"
},
{
"version": "v7",
"created": "Sat, 19 Jan 2019 07:08:11 GMT"
},
{
"version": "v8",
"created": "Sat, 3 Oct 2020 10:03:44 GMT"
}
] | 1,601,942,400,000 | [
[
"Benabbou",
"Nawal",
""
],
[
"Chakraborty",
"Mithun",
""
],
[
"Xuan",
"Vinh Ho",
""
],
[
"Sliwinski",
"Jakub",
""
],
[
"Zick",
"Yair",
""
]
] |
1711.10314 | Shayegan Omidshafiei | Shayegan Omidshafiei, Dong-Ki Kim, Jason Pazis, Jonathan P. How | Crossmodal Attentive Skill Learner | International Conference on Autonomous Agents and Multiagent Systems
(AAMAS) 2018, NIPS 2017 Deep Reinforcement Learning Symposium | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated
with the recently-introduced Asynchronous Advantage Option-Critic (A2OC)
architecture [Harb et al., 2017] to enable hierarchical reinforcement learning
across multiple sensory inputs. We provide concrete examples where the approach
not only improves performance in a single task, but accelerates transfer to new
tasks. We demonstrate the attention mechanism anticipates and identifies useful
latent features, while filtering irrelevant sensor modalities during execution.
We modify the Arcade Learning Environment [Bellemare et al., 2013] to support
audio queries, and conduct evaluations of crossmodal learning in the Atari 2600
game Amidar. Finally, building on the recent work of Babaeizadeh et al. [2017],
we open-source a fast hybrid CPU-GPU implementation of CASL.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2017 14:38:21 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Jan 2018 23:43:31 GMT"
},
{
"version": "v3",
"created": "Tue, 22 May 2018 14:39:29 GMT"
}
] | 1,527,033,600,000 | [
[
"Omidshafiei",
"Shayegan",
""
],
[
"Kim",
"Dong-Ki",
""
],
[
"Pazis",
"Jason",
""
],
[
"How",
"Jonathan P.",
""
]
] |
1711.10317 | Chao Zhao | Chao Zhao and Min Zhao and Yi Guan | Classification of entities via their descriptive sentences | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hypernym identification of open-domain entities is crucial for taxonomy
construction as well as many higher-level applications. Current methods suffer
from either low precision or low recall. To decrease the difficulty of this
problem, we adopt a classification-based method. We pre-define a concept
taxonomy and classify an entity to one of its leaf concept, based on the name
and description information of the entity. A convolutional neural network
classifier and a K-means clustering module are adopted for classification. We
applied this system to 2.1 million Baidu Baike entities, and 1.1 million of
them were successfully identified with a precision of 99.36%.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2017 14:49:06 GMT"
}
] | 1,511,913,600,000 | [
[
"Zhao",
"Chao",
""
],
[
"Zhao",
"Min",
""
],
[
"Guan",
"Yi",
""
]
] |
1711.10401 | Kumar Sankar Ray | Rajesh Misra, Kumar S. Ray | A Modification of Particle Swarm Optimization using Random Walk | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Particle swarm optimization comes under lot of changes after James Kennedy
and Russell Eberhart first proposes the idea in 1995. The changes has been done
mainly on Inertia parameters in velocity updating equation so that the
convergence rate will be higher. We are proposing a novel approach where
particles movement will not be depend on its velocity rather it will be decided
by constrained biased random walk of particles. In random walk every particles
movement based on two significant parameters, one is random process like toss
of a coin and other is how much displacement a particle should have. In our
approach we exploit this idea by performing a biased random operation and based
on the outcome of that random operation, PSO particles choose the direction of
the path and move non-uniformly into the solution space. This constrained,
non-uniform movement helps the random walking particle to converge quicker then
classical PSO. In our constrained biased random walking approach, we no longer
needed velocity term (Vi), rather we introduce a new parameter (K) which is a
probabilistic function. No global best particle (PGbest), local best particle
(PLbest), Constriction parameter (W) are required rather we use a new term
called Ptarg which is loosely influenced by PGbest.We test our algorithm on
five different benchmark functions, and also compare its performance with
classical PSO and Quantum Particle Swarm Optimization (QPSO).This new approach
have been shown significantly better than basic PSO and sometime outperform
QPSO in terms of convergence, search space, number of iterations.
| [
{
"version": "v1",
"created": "Thu, 16 Nov 2017 10:59:34 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2018 13:27:37 GMT"
}
] | 1,519,689,600,000 | [
[
"Misra",
"Rajesh",
""
],
[
"Ray",
"Kumar S.",
""
]
] |
1711.10574 | Mehmet Aydin | Mehmet Emin Aydin and Ryan Fellows | A reinforcement learning algorithm for building collaboration in
multi-agent systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a proof-of concept study for demonstrating the viability
of building collaboration among multiple agents through standard Q learning
algorithm embedded in particle swarm optimisation. Collaboration is formulated
to be achieved among the agents via some sort competition, where the agents are
expected to balance their action in such a way that none of them drifts away of
the team and none intervene any fellow neighbours territory. Particles are
devised with Q learning algorithm for self training to learn how to act as
members of a swarm and how to produce collaborative/collective behaviours. The
produced results are supportive to the algorithmic structures suggesting that a
substantive collaboration can be build via proposed learning algorithm.
| [
{
"version": "v1",
"created": "Tue, 28 Nov 2017 21:46:42 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Apr 2018 15:58:28 GMT"
}
] | 1,522,972,800,000 | [
[
"Aydin",
"Mehmet Emin",
""
],
[
"Fellows",
"Ryan",
""
]
] |
1711.11175 | Sahin Geyik | Sahin Cem Geyik, Jianqiang Shen, Shahriar Shariat, Ali Dasdan, Santanu
Kolay | Towards Data Quality Assessment in Online Advertising | 10 pages, 7 Figures. This work has been presented in the KDD 2016
Workshop on Enterprise Intelligence | KDD 2016 Workshop on Enterprise Intelligence | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In online advertising, our aim is to match the advertisers with the most
relevant users to optimize the campaign performance. In the pursuit of
achieving this goal, multiple data sources provided by the advertisers or
third-party data providers are utilized to choose the set of users according to
the advertisers' targeting criteria. In this paper, we present a framework that
can be applied to assess the quality of such data sources in large scale. This
framework efficiently evaluates the similarity of a specific data source
categorization to that of the ground truth, especially for those cases when the
ground truth is accessible only in aggregate, and the user-level information is
anonymized or unavailable due to privacy reasons. We propose multiple
methodologies within this framework, present some preliminary assessment
results, and evaluate how the methodologies compare to each other. We also
present two use cases where we can utilize the data quality assessment results:
the first use case is targeting specific user categories, and the second one is
forecasting the desirable audiences we can reach for an online advertising
campaign with pre-set targeting criteria.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2017 01:22:45 GMT"
}
] | 1,512,086,400,000 | [
[
"Geyik",
"Sahin Cem",
""
],
[
"Shen",
"Jianqiang",
""
],
[
"Shariat",
"Shahriar",
""
],
[
"Dasdan",
"Ali",
""
],
[
"Kolay",
"Santanu",
""
]
] |
1711.11180 | Dhaval Adjodah | Dhaval Adjodah, Dan Calacci, Yan Leng, Peter Krafft, Esteban Moro,
Alex Pentland | Improved Learning in Evolution Strategies via Sparser Inter-Agent
Network Topologies | This paper is obsolete | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We draw upon a previously largely untapped literature on human collective
intelligence as a source of inspiration for improving deep learning. Implicit
in many algorithms that attempt to solve Deep Reinforcement Learning (DRL)
tasks is the network of processors along which parameter values are shared. So
far, existing approaches have implicitly utilized fully-connected networks, in
which all processors are connected. However, the scientific literature on human
collective intelligence suggests that complete networks may not always be the
most effective information network structures for distributed search through
complex spaces. Here we show that alternative topologies can improve deep
neural network training: we find that sparser networks learn higher rewards
faster, leading to learning improvements at lower communication costs.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2017 01:42:54 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Feb 2019 21:34:40 GMT"
}
] | 1,550,448,000,000 | [
[
"Adjodah",
"Dhaval",
""
],
[
"Calacci",
"Dan",
""
],
[
"Leng",
"Yan",
""
],
[
"Krafft",
"Peter",
""
],
[
"Moro",
"Esteban",
""
],
[
"Pentland",
"Alex",
""
]
] |
1711.11231 | Shu Guo | Shu Guo, Quan Wang, Lihong Wang, Bin Wang, Li Guo | Knowledge Graph Embedding with Iterative Guidance from Soft Rules | To appear in AAAI 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of
current research. Combining such an embedding model with logic rules has
recently attracted increasing attention. Most previous attempts made a one-time
injection of logic rules, ignoring the interactive nature between embedding
learning and logical inference. And they focused only on hard rules, which
always hold with no exception and usually require extensive manual effort to
create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a
novel paradigm of KG embedding with iterative guidance from soft rules. RUGE
enables an embedding model to learn simultaneously from 1) labeled triples that
have been directly observed in a given KG, 2) unlabeled triples whose labels
are going to be predicted iteratively, and 3) soft rules with various
confidence levels extracted automatically from the KG. In the learning process,
RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and
integrates such newly labeled triples to update the embedding model. Through
this iterative procedure, knowledge embodied in logic rules may be better
transferred into the learned embeddings. We evaluate RUGE in link prediction on
Freebase and YAGO. Experimental results show that: 1) with rule knowledge
injected iteratively, RUGE achieves significant and consistent improvements
over state-of-the-art baselines; and 2) despite their uncertainties,
automatically extracted soft rules are highly beneficial to KG embedding, even
those with moderate confidence levels. The code and data used for this paper
can be obtained from https://github.com/iieir-km/RUGE.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2017 05:13:33 GMT"
}
] | 1,512,086,400,000 | [
[
"Guo",
"Shu",
""
],
[
"Wang",
"Quan",
""
],
[
"Wang",
"Lihong",
""
],
[
"Wang",
"Bin",
""
],
[
"Guo",
"Li",
""
]
] |
1711.11289 | Himanshu Sahni | Himanshu Sahni, Saurabh Kumar, Farhan Tejani, Charles Isbell | Learning to Compose Skills | Presented at NIPS 2017 Deep RL Symposium | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a differentiable framework capable of learning a wide variety of
compositions of simple policies that we call skills. By recursively composing
skills with themselves, we can create hierarchies that display complex
behavior. Skill networks are trained to generate skill-state embeddings that
are provided as inputs to a trainable composition function, which in turn
outputs a policy for the overall task. Our experiments on an environment
consisting of multiple collect and evade tasks show that this architecture is
able to quickly build complex skills from simpler ones. Furthermore, the
learned composition function displays some transfer to unseen combinations of
skills, allowing for zero-shot generalizations.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2017 09:47:28 GMT"
}
] | 1,512,086,400,000 | [
[
"Sahni",
"Himanshu",
""
],
[
"Kumar",
"Saurabh",
""
],
[
"Tejani",
"Farhan",
""
],
[
"Isbell",
"Charles",
""
]
] |
1712.00180 | Jason Bernard | Jason Bernard, Ian McQuillan | New Techniques for Inferring L-Systems Using Genetic Algorithm | 18 pages. 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lindenmayer systems (L-systems) are a formal grammar system that iteratively
rewrites all symbols of a string, in parallel. When visualized with a graphical
interpretation, the images have self-similar shapes that appear frequently in
nature, and they have been particularly successful as a concise, reusable
technique for simulating plants. The L-system inference problem is to find an
L-system to simulate a given plant. This is currently done mainly by experts,
but this process is limited by the availability of experts, the complexity that
may be solved by humans, and time. This paper introduces the Plant Model
Inference Tool (PMIT) that infers deterministic context-free L-systems from an
initial sequence of strings generated by the system using a genetic algorithm.
PMIT is able to infer more complex systems than existing approaches. Indeed,
while existing approaches are limited to L-systems with a total sum of 20
combined symbols in the productions, PMIT can infer almost all L-systems tested
where the total sum is 140 symbols. This was validated using a test bed of 28
previously developed L-system models, in addition to models created
artificially by bootstrapping larger models.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2017 03:55:59 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Dec 2017 15:00:14 GMT"
}
] | 1,512,432,000,000 | [
[
"Bernard",
"Jason",
""
],
[
"McQuillan",
"Ian",
""
]
] |
1712.00222 | Chong Di | Chong Di | A double competitive strategy based learning automata algorithm | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning Automata (LA) are considered as one of the most powerful tools in
the field of reinforcement learning. The family of estimator algorithms is
proposed to improve the convergence rate of LA and has made great achievements.
However, the estimators perform poorly on estimating the reward probabilities
of actions in the initial stage of the learning process of LA. In this
situation, a lot of rewards would be added to the probabilities of non-optimal
actions. Thus, a large number of extra iterations are needed to compensate for
these wrong rewards. In order to improve the speed of convergence, we propose a
new P-model absorbing learning automaton by utilizing a double competitive
strategy which is designed for updating the action probability vector. In this
way, the wrong rewards can be corrected instantly. Hence, the proposed Double
Competitive Algorithm overcomes the drawbacks of existing estimator algorithms.
A refined analysis is presented to show the $\epsilon-optimality$ of the
proposed scheme. The extensive experimental results in benchmark environments
demonstrate that our proposed learning automata perform more efficiently than
the most classic LA $SE_{RI}$ and the current fastest LA $DGCPA^{*}$.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2017 07:54:53 GMT"
}
] | 1,512,345,600,000 | [
[
"Di",
"Chong",
""
]
] |
1712.00428 | Sekou Remy | Oliver Bent, Sekou L. Remy, Stephen Roberts, Aisha Walcott-Bryant | Novel Exploration Techniques (NETs) for Malaria Policy Interventions | Under-review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of decision-making under uncertainty is daunting, especially for
problems which have significant complexity. Healthcare policy makers across the
globe are facing problems under challenging constraints, with limited tools to
help them make data driven decisions. In this work we frame the process of
finding an optimal malaria policy as a stochastic multi-armed bandit problem,
and implement three agent based strategies to explore the policy space. We
apply a Gaussian Process regression to the findings of each agent, both for
comparison and to account for stochastic results from simulating the spread of
malaria in a fixed population. The generated policy spaces are compared with
published results to give a direct reference with human expert decisions for
the same simulated population. Our novel approach provides a powerful resource
for policy makers, and a platform which can be readily extended to capture
future more nuanced policy spaces.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2017 17:59:49 GMT"
}
] | 1,512,345,600,000 | [
[
"Bent",
"Oliver",
""
],
[
"Remy",
"Sekou L.",
""
],
[
"Roberts",
"Stephen",
""
],
[
"Walcott-Bryant",
"Aisha",
""
]
] |
1712.00547 | Tim Miller | Tim Miller, Piers Howe, Liz Sonenberg | Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to
Stop Worrying and Love the Social and Behavioural Sciences | IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In his seminal book `The Inmates are Running the Asylum: Why High-Tech
Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams
Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is
often poorly designed (from a user perspective) is that programmers are in
charge of design decisions, rather than interaction designers. As a result,
programmers design software for themselves, rather than for their target
audience, a phenomenon he refers to as the `inmates running the asylum'. This
paper argues that explainable AI risks a similar fate. While the re-emergence
of explainable AI is positive, this paper argues most of us as AI researchers
are building explanatory agents for ourselves, rather than for the intended
users. But explainable AI is more likely to succeed if researchers and
practitioners understand, adopt, implement, and improve models from the vast
and valuable bodies of research in philosophy, psychology, and cognitive
science, and if evaluation of these models is focused more on people than on
technology. From a light scan of literature, we demonstrate that there is
considerable scope to infuse more results from the social and behavioural
sciences into explainable AI, and present some key results from these fields
that are relevant to explainable AI.
| [
{
"version": "v1",
"created": "Sat, 2 Dec 2017 04:21:14 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Dec 2017 04:23:25 GMT"
}
] | 1,512,518,400,000 | [
[
"Miller",
"Tim",
""
],
[
"Howe",
"Piers",
""
],
[
"Sonenberg",
"Liz",
""
]
] |
1712.00576 | Yan Zhu | Yan Zhu, Shaoting Zhang, Dimitris Metaxas | Interactive Reinforcement Learning for Object Grounding via Self-Talking | NIPS 2017 - Visually-Grounded Interaction and Language (ViGIL)
Workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans are able to identify a referred visual object in a complex scene via a
few rounds of natural language communications. Success communication requires
both parties to engage and learn to adapt for each other. In this paper, we
introduce an interactive training method to improve the natural language
conversation system for a visual grounding task. During interactive training,
both agents are reinforced by the guidance from a common reward function. The
parametrized reward function also cooperatively updates itself via
interactions, and contribute to accomplishing the task. We evaluate the method
on GuessWhat?! visual grounding task, and significantly improve the task
success rate. However, we observe language drifting problem during training and
propose to use reward engineering to improve the interpretability for the
generated conversations. Our result also indicates evaluating goal-ended visual
conversation tasks require semantic relevant metrics beyond task success rate.
| [
{
"version": "v1",
"created": "Sat, 2 Dec 2017 09:15:10 GMT"
}
] | 1,512,432,000,000 | [
[
"Zhu",
"Yan",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Metaxas",
"Dimitris",
""
]
] |
1712.00646 | Eyke H\"ullermeier | Eyke H\"ullermeier | From knowledge-based to data-driven modeling of fuzzy rule-based
systems: A critical reflection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper briefly elaborates on a development in (applied) fuzzy logic that
has taken place in the last couple of decades, namely, the complementation or
even replacement of the traditional knowledge-based approach to fuzzy
rule-based systems design by a data-driven one. It is argued that the classical
rule-based modeling paradigm is actually more amenable to the knowledge-based
approach, for which it has originally been conceived, while being less apt to
data-driven model design. An important reason that prevents fuzzy (rule-based)
systems from being leveraged in large-scale applications is the flat structure
of rule bases, along with the local nature of fuzzy rules and their limited
ability to express complex dependencies between variables. This motivates
alternative approaches to fuzzy systems modeling, in which functional
dependencies can be represented more flexibly and more compactly in terms of
hierarchical structures.
| [
{
"version": "v1",
"created": "Sat, 2 Dec 2017 17:42:49 GMT"
}
] | 1,512,432,000,000 | [
[
"Hüllermeier",
"Eyke",
""
]
] |
1712.00709 | Alper Kose | Alper Kose, Berke Aral Sonmez and Metin Balaban | Simulated Annealing Algorithm for Graph Coloring | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this Random Walks project is to code and experiment the Markov
Chain Monte Carlo (MCMC) method for the problem of graph coloring. In this
report, we present the plots of cost function \(\mathbf{H}\) by varying the
parameters like \(\mathbf{q}\) (Number of colors that can be used in coloring)
and \(\mathbf{c}\) (Average node degree). The results are obtained by using
simulated annealing scheme, where the temperature (inverse of
\(\mathbf{\beta}\)) parameter in the MCMC is lowered progressively.
| [
{
"version": "v1",
"created": "Sun, 3 Dec 2017 05:34:54 GMT"
}
] | 1,512,432,000,000 | [
[
"Kose",
"Alper",
""
],
[
"Sonmez",
"Berke Aral",
""
],
[
"Balaban",
"Metin",
""
]
] |
1712.00929 | Tomoaki Nakamura | Tomoaki Nakamura, Takayuki Nagai, Tadahiro Taniguchi | SERKET: An Architecture for Connecting Stochastic Models to Realize a
Large-Scale Cognitive Model | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To realize human-like robot intelligence, a large-scale cognitive
architecture is required for robots to understand the environment through a
variety of sensors with which they are equipped. In this paper, we propose a
novel framework named Serket that enables the construction of a large-scale
generative model and its inference easily by connecting sub-modules to allow
the robots to acquire various capabilities through interaction with their
environments and others. We consider that large-scale cognitive models can be
constructed by connecting smaller fundamental models hierarchically while
maintaining their programmatic independence. Moreover, connected modules are
dependent on each other, and parameters are required to be optimized as a
whole. Conventionally, the equations for parameter estimation have to be
derived and implemented depending on the models. However, it becomes harder to
derive and implement those of a larger scale model. To solve these problems, in
this paper, we propose a method for parameter estimation by communicating the
minimal parameters between various modules while maintaining their programmatic
independence. Therefore, Serket makes it easy to construct large-scale models
and estimate their parameters via the connection of modules. Experimental
results demonstrated that the model can be constructed by connecting modules,
the parameters can be optimized as a whole, and they are comparable with the
original models that we have proposed.
| [
{
"version": "v1",
"created": "Mon, 4 Dec 2017 06:58:39 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Dec 2017 03:17:44 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Dec 2017 01:26:54 GMT"
}
] | 1,512,604,800,000 | [
[
"Nakamura",
"Tomoaki",
""
],
[
"Nagai",
"Takayuki",
""
],
[
"Taniguchi",
"Tadahiro",
""
]
] |
1712.00988 | Sachin Pawar | Sachin Pawar, Pushpak Bhattacharya, and Girish K. Palshikar | End-to-End Relation Extraction using Markov Logic Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of end-to-end relation extraction consists of two sub-tasks: i)
identifying entity mentions along with their types and ii) recognizing semantic
relations among the entity mention pairs. %Identifying entity mentions along
with their types and recognizing semantic relations among the entity mentions,
are two very important problems in Information Extraction. It has been shown
that for better performance, it is necessary to address these two sub-tasks
jointly. We propose an approach for simultaneous extraction of entity mentions
and relations in a sentence, by using inference in Markov Logic Networks (MLN).
We learn three different classifiers : i) local entity classifier, ii) local
relation classifier and iii) "pipeline" relation classifier which uses
predictions of the local entity classifier. Predictions of these classifiers
may be inconsistent with each other. We represent these predictions along with
some domain knowledge using weighted first-order logic rules in an MLN and
perform joint inference over the MLN to obtain a global output with minimum
inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004
dataset demonstrate that our approach of joint extraction using MLNs
outperforms the baselines of individual classifiers. Our end-to-end relation
extraction performance is better than 2 out of 3 previous results reported on
the ACE 2004 dataset.
| [
{
"version": "v1",
"created": "Mon, 4 Dec 2017 10:26:59 GMT"
}
] | 1,512,432,000,000 | [
[
"Pawar",
"Sachin",
""
],
[
"Bhattacharya",
"Pushpak",
""
],
[
"Palshikar",
"Girish K.",
""
]
] |
1712.01093 | Christoph Adami | Christoph Adami | The mind as a computational system | 17 pages with three figures. In memory of Jerry Fodor | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present document is an excerpt of an essay that I wrote as part of my
application material to graduate school in Computer Science (with a focus on
Artificial Intelligence), in 1986. I was not invited by any of the schools that
received it, so I became a theoretical physicist instead. The essay's full
title was "Some Topics in Philosophy and Computer Science". I am making this
text (unchanged from 1985, preserving the typesetting as much as possible)
available now in memory of Jerry Fodor, whose writings had influenced me
significantly at the time (even though I did not always agree).
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2017 16:34:54 GMT"
}
] | 1,512,432,000,000 | [
[
"Adami",
"Christoph",
""
]
] |
1712.01949 | Yantian Zha | Yantian Zha, Yikang Li, Sriram Gopalakrishnan, Baoxin Li, Subbarao
Kambhampati | Recognizing Plans by Learning Embeddings from Observed Action
Distributions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in visual activity recognition have raised the possibility of
applications such as automated video surveillance. Effective approaches for
such problems however require the ability to recognize the plans of agents from
video information. Although traditional plan recognition algorithms depend on
access to sophisticated planning domain models, one recent promising direction
involves learning approximated (or shallow) domain models directly from the
observed activity sequences DUP. One limitation is that such approaches expect
observed action sequences as inputs. In many cases involving vision/sensing
from raw data, there is considerable uncertainty about the specific action at
any given time point. The most we can expect in such cases is probabilistic
information about the action at that point. The input will then be sequences of
such observed action distributions. In this work, we address the problem of
constructing an effective data-interface that allows a plan recognition module
to directly handle such observation distributions. Such an interface works like
a bridge between the low-level perception module, and the high-level plan
recognition module. We propose two approaches. The first involves resampling
the distribution sequences to single action sequences, from which we could
learn an action affinity model based on learned action (word) embeddings for
plan recognition. The second is to directly learn action distribution
embeddings by our proposed Distr2vec (distribution to vector) model, to
construct an affinity model for plan recognition.
| [
{
"version": "v1",
"created": "Tue, 5 Dec 2017 22:06:25 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Nov 2018 17:30:54 GMT"
}
] | 1,543,276,800,000 | [
[
"Zha",
"Yantian",
""
],
[
"Li",
"Yikang",
""
],
[
"Gopalakrishnan",
"Sriram",
""
],
[
"Li",
"Baoxin",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1712.03043 | Zengkun Li | Zengkun Li | A Heuristic Search Algorithm Using the Stability of Learning Algorithms
in Certain Scenarios as the Fitness Function: An Artificial General
Intelligence Engineering Approach | 12 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a non-manual design engineering method based on heuristic
search algorithm to search for candidate agents in the solution space which
formed by artificial intelligence agents modeled on the base of
bionics.Compared with the artificial design method represented by meta-learning
and the bionics method represented by the neural architecture chip,this method
is more feasible for realizing artificial general intelligence,and it has a
much better interaction with cognitive neuroscience;at the same time,the
engineering method is based on the theoretical hypothesis that the final
learning algorithm is stable in certain scenarios,and has generalization
ability in various scenarios.The paper discusses the theory preliminarily and
proposes the possible correlation between the theory and the fixed-point
theorem in the field of mathematics.Limited by the author's knowledge
level,this correlation is proposed only as a kind of conjecture.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2017 12:23:13 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Dec 2017 13:46:49 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Jul 2018 02:08:04 GMT"
}
] | 1,532,908,800,000 | [
[
"Li",
"Zengkun",
""
]
] |
1712.03223 | Majdi Mafarja Dr. | Majdi Mafarja and Seyedali Mirjalili | S-Shaped vs. V-Shaped Transfer Functions for Antlion Optimization
Algorithm in Feature Selection Problems | 7 pages | Majdi Mafarja, Derar Eleyan, Salwani Abdullah, and Seyedali
Mirjalili. 2017. S-Shaped vs. V-Shaped Transfer Functions for Ant Lion
Optimization Algorithm in Feature Selection Problem. In Proceedings of ICFNDS
'17 | 10.1145/3102304.3102325 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection is an important preprocessing step for classification
problems. It deals with selecting near optimal features in the original
dataset. Feature selection is an NP-hard problem, so meta-heuristics can be
more efficient than exact methods. In this work, Ant Lion Optimizer (ALO),
which is a recent metaheuristic algorithm, is employed as a wrapper feature
selection method. Six variants of ALO are proposed where each employ a transfer
function to map a continuous search space to a discrete search space. The
performance of the proposed approaches is tested on eighteen UCI datasets and
compared to a number of existing approaches in the literature: Particle Swarm
Optimization, Gravitational Search Algorithm, and two existing ALO-based
approaches. Computational experiments show that the proposed approaches
efficiently explore the feature space and select the most informative features,
which help to improve the classification accuracy.
| [
{
"version": "v1",
"created": "Wed, 6 Dec 2017 05:17:12 GMT"
}
] | 1,513,036,800,000 | [
[
"Mafarja",
"Majdi",
""
],
[
"Mirjalili",
"Seyedali",
""
]
] |
1712.03280 | Deepak Dilipkumar | Ben Parr, Deepak Dilipkumar, Yuan Liu | Nintendo Super Smash Bros. Melee: An "Untouchable" Agent | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nintendo's Super Smash Bros. Melee fighting game can be emulated on modern
hardware allowing us to inspect internal memory states, such as character
positions. We created an AI that avoids being hit by training using these
internal memory states and outputting controller button presses. After training
on a month's worth of Melee matches, our best agent learned to avoid the
toughest AI built into the game for a full minute 74.6% of the time.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2017 21:07:18 GMT"
}
] | 1,513,036,800,000 | [
[
"Parr",
"Ben",
""
],
[
"Dilipkumar",
"Deepak",
""
],
[
"Liu",
"Yuan",
""
]
] |
1712.04020 | Roman Yampolskiy | Roman V. Yampolskiy | Detecting Qualia in Natural and Artificial Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Hard Problem of consciousness has been dismissed as an illusion. By
showing that computers are capable of experiencing, we show that they are at
least rudimentarily conscious with potential to eventually reach
superconsciousness. The main contribution of the paper is a test for confirming
certain subjective experiences in a tested agent. We follow with analysis of
benefits and problems with conscious machines and implications of such
capability on future of computing, machine rights and artificial intelligence
safety.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2017 20:53:47 GMT"
}
] | 1,513,123,200,000 | [
[
"Yampolskiy",
"Roman V.",
""
]
] |
1712.04065 | Miao Liu | Miao Liu, Marlos C. Machado, Gerald Tesauro, Murray Campbell | The Eigenoption-Critic Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eigenoptions (EOs) have been recently introduced as a promising idea for
generating a diverse set of options through the graph Laplacian, having been
shown to allow efficient exploration. Despite its initial promising results, a
couple of issues in current algorithms limit its application, namely: (1) EO
methods require two separate steps (eigenoption discovery and reward
maximization) to learn a control policy, which can incur a significant amount
of storage and computation; (2) EOs are only defined for problems with discrete
state-spaces and; (3) it is not easy to take the environment's reward function
into consideration when discovering EOs. To addresses these issues, we
introduce an algorithm termed eigenoption-critic (EOC) based on the
Option-critic (OC) framework [Bacon17], a general hierarchical reinforcement
learning (RL) algorithm that allows learning the intra-option policies
simultaneously with the policy over options. We also propose a generalization
of EOC to problems with continuous state-spaces through the Nystr\"om
approximation. EOC can also be seen as extending OC to nonstationary settings,
where the discovered options are not tailored for a single task.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2017 23:21:42 GMT"
}
] | 1,513,123,200,000 | [
[
"Liu",
"Miao",
""
],
[
"Machado",
"Marlos C.",
""
],
[
"Tesauro",
"Gerald",
""
],
[
"Campbell",
"Murray",
""
]
] |
1712.04172 | Yueh-Hua Wu | Yueh-Hua Wu and Shou-De Lin | A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning
Agents | AAAI 2018 Oral Presentation | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a low-cost, easily realizable strategy to equip a
reinforcement learning (RL) agent the capability of behaving ethically. Our
model allows the designers of RL agents to solely focus on the task to achieve,
without having to worry about the implementation of multiple trivial ethical
patterns to follow. Based on the assumption that the majority of human
behavior, regardless which goals they are achieving, is ethical, our design
integrates human policy with the RL policy to achieve the target objective with
less chance of violating the ethical code that human beings normally obey.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2017 08:35:52 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Sep 2018 04:59:19 GMT"
}
] | 1,536,624,000,000 | [
[
"Wu",
"Yueh-Hua",
""
],
[
"Lin",
"Shou-De",
""
]
] |
1712.04182 | Wenpin Jiao | Wenpin Jiao | A Generic Model for Swarm Intelligence and Its Validations | 15 pages | WSEAS Transactions on Information Science and Applications, ISSN /
E-ISSN: 1790-0832 / 2224-3402, Volume 18, 2021, Art. #14, p.116-130 | 10.37394/23209.2021.18.14 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The modeling of emergent swarm intelligence constitutes a major challenge and
it has been tacked in a number of different ways. However, existing approaches
fail to capture the nature of swarm intelligence and they are either too
abstract for practical application or not generic enough to describe the
various types of emergence phenomena. In this paper, a contradiction-centric
model for swarm intelligence is proposed, in which individuals determine their
behaviors based on their internal contradictions whilst they associate and
interact to update their contradictions. The model hypothesizes that 1) the
emergence of swarm intelligence is rooted in the development of individuals'
internal contradictions and the interactions taking place between individuals
and the environment, and 2) swarm intelligence is essentially a combinative
reflection of the configurations of individuals' internal contradictions and
the distributions of these contradictions across individuals. The model is
formally described and five swarm intelligence systems are studied to
illustrate its broad applicability. The studies confirm the generic character
of the model and its effectiveness for describing the emergence of various
kinds of swarm intelligence; and they also demonstrate that the model is
straightforward to apply, without the need for complicated computations.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2017 09:25:02 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Sep 2021 00:57:24 GMT"
}
] | 1,631,232,000,000 | [
[
"Jiao",
"Wenpin",
""
]
] |
1712.04306 | Mihai Nadin | Mihai Nadin | In folly ripe. In reason rotten. Putting machine theology to rest | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computation has changed the world more than any previous expressions of
knowledge. In its particular algorithmic embodiment, it offers a perspective,
within which the digital computer (one of many possible) exercises a role
reminiscent of theology. Since it is closed to meaning, algorithmic digital
computation can at most mimic the creative aspects of life. AI, in the
perspective of time, proved to be less an acronym for artificial intelligence
and more of automating tasks associated with intelligence. The entire
development led to the hypostatized role of the machine: outputting nothing
else but reality, including that of the humanity that made the machine happen.
The convergence machine called deep learning is only the latest form through
which the deterministic theology of the machine claims more than what extremely
effective data processing actually is. A new understanding of complexity, as
well as the need to distinguish between the reactive nature of the artificial
and the anticipatory nature of the living are suggested as practical responses
to the challenges posed by machine theology.
| [
{
"version": "v1",
"created": "Sun, 3 Dec 2017 23:26:16 GMT"
}
] | 1,513,123,200,000 | [
[
"Nadin",
"Mihai",
""
]
] |
1712.04363 | Patrick Klose | Patrick Klose, Rudolf Mester | Simulated Autonomous Driving on Realistic Road Networks using Deep
Reinforcement Learning | The paper is submitted to be included in the proceedings of
Applications of Intelligent Systems 2018 (APPIS 2018) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using Deep Reinforcement Learning (DRL) can be a promising approach to handle
various tasks in the field of (simulated) autonomous driving. However, recent
publications mainly consider learning in unusual driving environments. This
paper presents Driving School for Autonomous Agents (DSA^2), a software for
validating DRL algorithms in more usual driving environments based on
artificial and realistic road networks. We also present the results of applying
DSA^2 for handling the task of driving on a straight road while regulating the
velocity of one vehicle according to different speed limits.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2017 15:55:53 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Apr 2018 14:16:19 GMT"
}
] | 1,522,800,000,000 | [
[
"Klose",
"Patrick",
""
],
[
"Mester",
"Rudolf",
""
]
] |
1712.04596 | Son-Il Kwak | Son-Il Kwak, Oh-Chol Gwon, Chung-Jin Kwak | Consideration on Example 2 of "An Algorithm of General Fuzzy
InferenceWith The Reductive Property" | 6 pages, 0 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we will show that (1) the results about the fuzzy reasoning
algoritm obtained in the paper "Computer Sciences Vol. 34, No.4, pp.145-148,
2007" according to the paper "IEEE Transactions On systems, Man and
cybernetics, 18, pp.1049-1056, 1988" are correct; (2) example 2 in the paper
"An Algorithm of General Fuzzy Inference With The Reductive Property" presented
by He Ying-Si, Quan Hai-Jin and Deng Hui-Wen according to the paper "An
approximate analogical reasoning approach based on similarity measures"
presented by Tursken I.B. and Zhong zhao is incorrect; (3) the mistakes in
their paper are modified and then a calculation example of FMT is supplemented.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2017 03:13:25 GMT"
}
] | 1,513,209,600,000 | [
[
"Kwak",
"Son-Il",
""
],
[
"Gwon",
"Oh-Chol",
""
],
[
"Kwak",
"Chung-Jin",
""
]
] |
1712.04909 | Subhash Kak | Subhash Kak | Reasoning in Systems with Elements that Randomly Switch Characteristics | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the issue of stability of probability in reasoning about complex
systems with uncertainty in structure. Normally, propositions are viewed as
probability functions on an abstract random graph where it is implicitly
assumed that the nodes of the graph have stable properties. But what if some of
the nodes change their characteristics? This is a situation that cannot be
covered by abstractions of either static or dynamic sets when these changes
take place at regular intervals. We propose the use of sets with elements that
change, and modular forms are proposed to account for one type of such change.
An expression for the dependence of the mean on the probability of the
switching elements has been determined. The system is also analyzed from the
perspective of decision between different hypotheses. Such sets are likely to
be of use in complex system queries and in analysis of surveys.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2017 18:25:20 GMT"
}
] | 1,513,209,600,000 | [
[
"Kak",
"Subhash",
""
]
] |
1712.05247 | Matthew Piekenbrock | Matthew Piekenbrock, Derek Doran | Intrinsic Point of Interest Discovery from Trajectory Data | 10 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a framework for intrinsic point of interest discovery
from trajectory databases. Intrinsic points of interest are regions of a
geospatial area innately defined by the spatial and temporal aspects of
trajectory data, and can be of varying size, shape, and resolution. Any
trajectory database exhibits such points of interest, and hence are intrinsic,
as compared to most other point of interest definitions which are said to be
extrinsic, as they require trajectory metadata, external knowledge about the
region the trajectories are observed, or other application-specific
information. Spatial and temporal aspects are qualities of any trajectory
database, making the framework applicable to data from any domain and of any
resolution. The framework is developed under recent developments on the
consistency of nonparametric hierarchical density estimators and enables the
possibility of formal statistical inference and evaluation over such intrinsic
points of interest. Comparisons of the POIs uncovered by the framework in
synthetic truth data to thousands of parameter settings for common POI
discovery methods show a marked improvement in fidelity without the need to
tune any parameters by hand.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2017 14:26:39 GMT"
}
] | 1,513,296,000,000 | [
[
"Piekenbrock",
"Matthew",
""
],
[
"Doran",
"Derek",
""
]
] |
1712.05514 | Siddharthan Perundurai Rajaskaran | Siddharthan Rajasekaran, Jinwei Zhang, and Jie Fu | Inverse Reinforce Learning with Nonparametric Behavior Clustering | 9 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inverse Reinforcement Learning (IRL) is the task of learning a single reward
function given a Markov Decision Process (MDP) without defining the reward
function, and a set of demonstrations generated by humans/experts. However, in
practice, it may be unreasonable to assume that human behaviors can be
explained by one reward function since they may be inherently inconsistent.
Also, demonstrations may be collected from various users and aggregated to
infer and predict user's behaviors. In this paper, we introduce the
Non-parametric Behavior Clustering IRL algorithm to simultaneously cluster
demonstrations and learn multiple reward functions from demonstrations that may
be generated from more than one behaviors. Our method is iterative: It
alternates between clustering demonstrations into different behavior clusters
and inverse learning the reward functions until convergence. It is built upon
the Expectation-Maximization formulation and non-parametric clustering in the
IRL setting. Further, to improve the computation efficiency, we remove the need
of completely solving multiple IRL problems for multiple clusters during the
iteration steps and introduce a resampling technique to avoid generating too
many unlikely clusters. We demonstrate the convergence and efficiency of the
proposed method through learning multiple driver behaviors from demonstrations
generated from a grid-world environment and continuous trajectories collected
from autonomous robot cars using the Gazebo robot simulator.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2017 03:13:23 GMT"
}
] | 1,513,555,200,000 | [
[
"Rajasekaran",
"Siddharthan",
""
],
[
"Zhang",
"Jinwei",
""
],
[
"Fu",
"Jie",
""
]
] |
1712.05812 | Stuart Armstrong | Stuart Armstrong and S\"oren Mindermann | Occam's razor is insufficient to infer the preferences of irrational
agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inverse reinforcement learning (IRL) attempts to infer human rewards or
preferences from observed behavior. Since human planning systematically
deviates from rationality, several approaches have been tried to account for
specific human shortcomings. However, the general problem of inferring the
reward function of an agent of unknown rationality has received little
attention. Unlike the well-known ambiguity problems in IRL, this one is
practically relevant but cannot be resolved by observing the agent's policy in
enough environments. This paper shows (1) that a No Free Lunch result implies
it is impossible to uniquely decompose a policy into a planning algorithm and
reward function, and (2) that even with a reasonable simplicity prior/Occam's
razor on the set of decompositions, we cannot distinguish between the true
decomposition and others that lead to high regret. To address this, we need
simple `normative' assumptions, which cannot be deduced exclusively from
observations.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2017 19:05:01 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Dec 2017 07:35:59 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Mar 2018 15:48:35 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Sep 2018 16:36:54 GMT"
},
{
"version": "v5",
"created": "Mon, 29 Oct 2018 15:39:38 GMT"
},
{
"version": "v6",
"created": "Fri, 11 Jan 2019 14:36:40 GMT"
}
] | 1,547,424,000,000 | [
[
"Armstrong",
"Stuart",
""
],
[
"Mindermann",
"Sören",
""
]
] |
1712.05855 | Joseph Gonzalez | Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W.
Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M.
Hellerstein, Joseph E. Gonzalez, Ken Goldberg, Ali Ghodsi, David Culler,
Pieter Abbeel | A Berkeley View of Systems Challenges for AI | Berkeley Technical Report | null | null | EECS-2017-159 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing commoditization of computer vision, speech recognition
and machine translation systems and the widespread deployment of learning-based
back-end technologies such as digital advertising and intelligent
infrastructures, AI (Artificial Intelligence) has moved from research labs to
production. These changes have been made possible by unprecedented levels of
data and computation, by methodological advances in machine learning, by
innovations in systems software and architectures, and by the broad
accessibility of these technologies.
The next generation of AI systems promises to accelerate these developments
and increasingly impact our lives via frequent interactions and making (often
mission-critical) decisions on our behalf, often in highly personalized
contexts. Realizing this promise, however, raises daunting challenges. In
particular, we need AI systems that make timely and safe decisions in
unpredictable environments, that are robust against sophisticated adversaries,
and that can process ever increasing amounts of data across organizations and
individuals without compromising confidentiality. These challenges will be
exacerbated by the end of the Moore's Law, which will constrain the amount of
data these technologies can store and process. In this paper, we propose
several open research directions in systems, architectures, and security that
can address these challenges and help unlock AI's potential to improve lives
and society.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2017 22:01:52 GMT"
}
] | 1,513,641,600,000 | [
[
"Stoica",
"Ion",
""
],
[
"Song",
"Dawn",
""
],
[
"Popa",
"Raluca Ada",
""
],
[
"Patterson",
"David",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Katz",
"Randy",
""
],
[
"Joseph",
"Anthony D.",
""
],
[
"Jordan",
"Michael",
""
],
[
"Hellerstein",
"Joseph M.",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Ghodsi",
"Ali",
""
],
[
"Culler",
"David",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
1712.06180 | Per-Arne Andersen | Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo | Towards a Deep Reinforcement Learning Approach for Tower Line Wars | Proceedings of the 37th SGAI International Conference on Artificial
Intelligence, Cambridge, UK, 2017, Artificial Intelligence XXXIV, 2017 | null | 10.1007/978-3-319-71078-5 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There have been numerous breakthroughs with reinforcement learning in the
recent years, perhaps most notably on Deep Reinforcement Learning successfully
playing and winning relatively advanced computer games. There is undoubtedly an
anticipation that Deep Reinforcement Learning will play a major role when the
first AI masters the complicated game plays needed to beat a professional
Real-Time Strategy game player. For this to be possible, there needs to be a
game environment that targets and fosters AI research, and specifically Deep
Reinforcement Learning. Some game environments already exist, however, these
are either overly simplistic such as Atari 2600 or complex such as Starcraft II
from Blizzard Entertainment. We propose a game environment in between Atari
2600 and Starcraft II, particularly targeting Deep Reinforcement Learning
algorithm research. The environment is a variant of Tower Line Wars from
Warcraft III, Blizzard Entertainment. Further, as a proof of concept that the
environment can harbor Deep Reinforcement algorithms, we propose and apply a
Deep Q-Reinforcement architecture. The architecture simplifies the state space
so that it is applicable to Q-learning, and in turn improves performance
compared to current state-of-the-art methods. Our experiments show that the
proposed architecture can learn to play the environment well, and score 33%
better than standard Deep Q-learning which in turn proves the usefulness of the
game environment.
| [
{
"version": "v1",
"created": "Sun, 17 Dec 2017 21:29:45 GMT"
}
] | 1,513,641,600,000 | [
[
"Andersen",
"Per-Arne",
""
],
[
"Goodwin",
"Morten",
""
],
[
"Granmo",
"Ole-Christoffer",
""
]
] |
1712.06365 | Stuart Armstrong | Stuart Armstrong, Xavier O'Rourke | 'Indifference' methods for managing agent rewards | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | `Indifference' refers to a class of methods used to control reward based
agents. Indifference techniques aim to achieve one or more of three distinct
goals: rewards dependent on certain events (without the agent being motivated
to manipulate the probability of those events), effective disbelief (where
agents behave as if particular events could never happen), and seamless
transition from one reward function to another (with the agent acting as if
this change is unanticipated). This paper presents several methods for
achieving these goals in the POMDP setting, establishing their uses, strengths,
and requirements. These methods of control work even when the implications of
the agent's reward are otherwise not fully understood.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2017 12:28:45 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2017 13:32:08 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Feb 2018 11:00:29 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Jun 2018 11:10:23 GMT"
}
] | 1,528,243,200,000 | [
[
"Armstrong",
"Stuart",
""
],
[
"O'Rourke",
"Xavier",
""
]
] |
1712.06440 | Liu Feng | Feng Liu, Yong Shi, Ying Liu | Three IQs of AI Systems and their Testing Methods | 15 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid development of artificial intelligence has brought the artificial
intelligence threat theory as well as the problem about how to evaluate the
intelligence level of intelligent products. Both need to find a quantitative
method to evaluate the intelligence level of intelligence systems, including
human intelligence. Based on the standard intelligence system and the extended
Von Neumann architecture, this paper proposes General IQ, Service IQ and Value
IQ evaluation methods for intelligence systems, depending on different
evaluation purposes. Among them, the General IQ of intelligence systems is to
answer the question of whether the artificial intelligence can surpass the
human intelligence, which is reflected in putting the intelligence systems on
an equal status and conducting the unified evaluation. The Service IQ and Value
IQ of intelligence systems are used to answer the question of how the
intelligent products can better serve the human, reflecting the intelligence
and required cost of each intelligence system as a product in the process of
serving human.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2017 17:49:04 GMT"
}
] | 1,513,641,600,000 | [
[
"Liu",
"Feng",
""
],
[
"Shi",
"Yong",
""
],
[
"Liu",
"Ying",
""
]
] |
1712.06560 | Jeff Clune | Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman,
Kenneth O. Stanley, Jeff Clune | Improving Exploration in Evolution Strategies for Deep Reinforcement
Learning via a Population of Novelty-Seeking Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolution strategies (ES) are a family of black-box optimization algorithms
able to train deep neural networks roughly as well as Q-learning and policy
gradient methods on challenging deep reinforcement learning (RL) problems, but
are much faster (e.g. hours vs. days) because they parallelize better. However,
many RL problems require directed exploration because they have reward
functions that are sparse or deceptive (i.e. contain local optima), and it is
unknown how to encourage such exploration with ES. Here we show that algorithms
that have been invented to promote directed exploration in small-scale evolved
neural networks via populations of exploring agents, specifically novelty
search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to
improve its performance on sparse or deceptive deep RL tasks, while retaining
scalability. Our experiments confirm that the resultant new algorithms, NS-ES
and two QD algorithms, NSR-ES and NSRA-ES, avoid local optima encountered by ES
to achieve higher performance on Atari and simulated robots learning to walk
around a deceptive trap. This paper thus introduces a family of fast, scalable
algorithms for reinforcement learning that are capable of directed exploration.
It also adds this new family of exploration algorithms to the RL toolbox and
raises the interesting possibility that analogous algorithms with multiple
simultaneous paths of exploration might also combine well with existing RL
algorithms outside ES.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2017 18:10:39 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jun 2018 19:04:46 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Oct 2018 18:02:53 GMT"
}
] | 1,540,944,000,000 | [
[
"Conti",
"Edoardo",
""
],
[
"Madhavan",
"Vashisht",
""
],
[
"Such",
"Felipe Petroski",
""
],
[
"Lehman",
"Joel",
""
],
[
"Stanley",
"Kenneth O.",
""
],
[
"Clune",
"Jeff",
""
]
] |
1712.06778 | Saptarshi Pal | Saptarshi Pal and Soumya K Ghosh | Learning Representations from Road Network for End-to-End Urban Growth
Simulation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From our experiences in the past, we have seen that the growth of cities is
very much dependent on the transportation networks. In mega cities,
transportation networks determine to a significant extent as to where the
people will move and houses will be built. Hence, transportation network data
is crucial to an urban growth prediction system. Existing works have used
manually derived distance based features based on the road networks to build
models on urban growth. But due to the non-generic and laborious nature of the
manual feature engineering process, we can shift to End-to-End systems which do
not rely on manual feature engineering. In this paper, we propose a method to
integrate road network data to an existing Rule based End-to-End framework
without manual feature engineering. Our method employs recurrent neural
networks to represent road networks in a structured way such that it can be
plugged into the previously proposed End-to-End framework. The proposed
approach enhances the performance in terms of Figure of Merit, Producer's
accuracy, User's accuracy and Overall accuracy of the existing Rule based
End-to-End framework.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2017 04:36:24 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jan 2018 12:06:49 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Feb 2018 05:25:11 GMT"
}
] | 1,518,048,000,000 | [
[
"Pal",
"Saptarshi",
""
],
[
"Ghosh",
"Soumya K",
""
]
] |
1712.06935 | Boris Chidlovskii | Boris Chidlovskii | Mining Smart Card Data for Travelers' Mini Activities | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the context of public transport modeling and simulation, we address the
problem of mismatch between simulated transit trips and observed ones. We point
to the weakness of the current travel demand modeling process; the trips it
generates are over-optimistic and do not reflect the real passenger choices. We
introduce the notion of mini activities the travelers do during the trips; they
can explain the deviation of simulated trips from the observed trips. We
propose to mine the smart card data to extract the mini activities. We develop
a technique to integrate them in the generated trips and learn such an
integration from two available sources, the trip history and trip planner
recommendations. For an input travel demand, we build a Markov chain over the
trip collection and apply the Monte Carlo Markov Chain algorithm to integrate
mini activities in such a way that the selected characteristics converge to the
desired distributions. We test our method in different settings on the
passenger trip collection of Nancy, France. We report experimental results
demonstrating a very important mismatch reduction.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2017 14:05:23 GMT"
}
] | 1,513,728,000,000 | [
[
"Chidlovskii",
"Boris",
""
]
] |
1712.07081 | Serdar Kadioglu | Serdar Kadioglu | Column Generation for Interaction Coverage in Combinatorial Software
Testing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel column generation framework for combinatorial
software testing. In particular, it combines Mathematical Programming and
Constraint Programming in a hybrid decomposition to generate covering arrays.
The approach allows generating parameterized test cases with coverage
guarantees between parameter interactions of a given application. Compared to
exhaustive testing, combinatorial test case generation reduces the number of
tests to run significantly. Our column generation algorithm is generic and can
accommodate mixed coverage arrays over heterogeneous alphabets. The algorithm
is realized in practice as a cloud service and recognized as one of the five
winners of the company-wide cloud application challenge at Oracle. The service
is currently helping software developers from a range of different product
teams in their testing efforts while exposing declarative constraint models and
hybrid optimization techniques to a broader audience.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2017 18:01:06 GMT"
}
] | 1,513,728,000,000 | [
[
"Kadioglu",
"Serdar",
""
]
] |
1712.07294 | Caiming Xiong Mr | Tianmin Shu, Caiming Xiong, Richard Socher | Hierarchical and Interpretable Skill Acquisition in Multi-task
Reinforcement Learning | 14 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning policies for complex tasks that require multiple different skills is
a major challenge in reinforcement learning (RL). It is also a requirement for
its deployment in real-world scenarios. This paper proposes a novel framework
for efficient multi-task reinforcement learning. Our framework trains agents to
employ hierarchical policies that decide when to use a previously learned
policy and when to learn a new skill. This enables agents to continually
acquire new skills during different stages of training. Each learned task
corresponds to a human language description. Because agents can only access
previously learned skills through these descriptions, the agent can always
provide a human-interpretable description of its choices. In order to help the
agent learn the complex temporal dependencies necessary for the hierarchical
policy, we provide it with a stochastic temporal grammar that modulates when to
rely on previously learned skills and when to execute new skills. We validate
our approach on Minecraft games designed to explicitly test the ability to
reuse previously learned skills while simultaneously learning new skills.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2017 02:50:20 GMT"
}
] | 1,513,814,400,000 | [
[
"Shu",
"Tianmin",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Socher",
"Richard",
""
]
] |
1712.07305 | Bo Xin | Xiangyu Kong, Bo Xin, Fangchen Liu, Yizhou Wang | Revisiting the Master-Slave Architecture in Multi-Agent Deep
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many tasks in artificial intelligence require the collaboration of multiple
agents. We exam deep reinforcement learning for multi-agent domains. Recent
research efforts often take the form of two seemingly conflicting perspectives,
the decentralized perspective, where each agent is supposed to have its own
controller; and the centralized perspective, where one assumes there is a
larger model controlling all agents. In this regard, we revisit the idea of the
master-slave architecture by incorporating both perspectives within one
framework. Such a hierarchical structure naturally leverages advantages from
one another. The idea of combining both perspectives is intuitive and can be
well motivated from many real world systems, however, out of a variety of
possible realizations, we highlights three key ingredients, i.e. composed
action representation, learnable communication and independent reasoning. With
network designs to facilitate these explicitly, our proposal consistently
outperforms latest competing methods both in synthetic experiments and when
applied to challenging StarCraft micromanagement tasks.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2017 03:00:46 GMT"
}
] | 1,513,814,400,000 | [
[
"Kong",
"Xiangyu",
""
],
[
"Xin",
"Bo",
""
],
[
"Liu",
"Fangchen",
""
],
[
"Wang",
"Yizhou",
""
]
] |
1712.07686 | Manuel Mazzara | Vladimir Marochko, Leonard Johard, Manuel Mazzara, Luca Longo | Pseudorehearsal in actor-critic agents with neural network function
approximation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Catastrophic forgetting has a significant negative impact in reinforcement
learning. The purpose of this study is to investigate how pseudorehearsal can
change performance of an actor-critic agent with neural-network function
approximation. We tested agent in a pole balancing task and compared different
pseudorehearsal approaches. We have found that pseudorehearsal can assist
learning and decrease forgetting.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2017 19:53:23 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2018 08:55:29 GMT"
}
] | 1,519,084,800,000 | [
[
"Marochko",
"Vladimir",
""
],
[
"Johard",
"Leonard",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Longo",
"Luca",
""
]
] |
1712.07893 | Shih-Yang Su | Zhang-Wei Hong, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, and
Chun-Yi Lee | A Deep Policy Inference Q-Network for Multi-Agent Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present DPIQN, a deep policy inference Q-network that targets multi-agent
systems composed of controllable agents, collaborators, and opponents that
interact with each other. We focus on one challenging issue in such
systems---modeling agents with varying strategies---and propose to employ
"policy features" learned from raw observations (e.g., raw images) of
collaborators and opponents by inferring their policies. DPIQN incorporates the
learned policy features as a hidden vector into its own deep Q-network (DQN),
such that it is able to predict better Q values for the controllable agents
than the state-of-the-art deep reinforcement learning models. We further
propose an enhanced version of DPIQN, called deep recurrent policy inference
Q-network (DRPIQN), for handling partial observability. Both DPIQN and DRPIQN
are trained by an adaptive training procedure, which adjusts the network's
attention to learn the policy features and its own Q-values at different phases
of the training process. We present a comprehensive analysis of DPIQN and
DRPIQN, and highlight their effectiveness and generalizability in various
multi-agent settings. Our models are evaluated in a classic soccer game
involving both competitive and collaborative scenarios. Experimental results
performed on 1 vs. 1 and 2 vs. 2 games show that DPIQN and DRPIQN demonstrate
superior performance to the baseline DQN and deep recurrent Q-network (DRQN)
models. We also explore scenarios in which collaborators or opponents
dynamically change their policies, and show that DPIQN and DRPIQN do lead to
better overall performance in terms of stability and mean scores.
| [
{
"version": "v1",
"created": "Thu, 21 Dec 2017 11:53:35 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Apr 2018 06:38:13 GMT"
}
] | 1,523,318,400,000 | [
[
"Hong",
"Zhang-Wei",
""
],
[
"Su",
"Shih-Yang",
""
],
[
"Shann",
"Tzu-Yun",
""
],
[
"Chang",
"Yi-Hsiang",
""
],
[
"Lee",
"Chun-Yi",
""
]
] |
1712.08266 | Saurabh Kumar | Saurabh Kumar, Pararth Shah, Dilek Hakkani-Tur, Larry Heck | Federated Control with Hierarchical Multi-Agent Deep Reinforcement
Learning | Hierarchical Reinforcement Learning Workshop at the 31st Conference
on Neural Information Processing Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework combining hierarchical and multi-agent deep
reinforcement learning approaches to solve coordination problems among a
multitude of agents using a semi-decentralized model. The framework extends the
multi-agent learning setup by introducing a meta-controller that guides the
communication between agent pairs, enabling agents to focus on communicating
with only one other agent at any step. This hierarchical decomposition of the
task allows for efficient exploration to learn policies that identify globally
optimal solutions even as the number of collaborating agents increases. We show
promising initial experimental results on a simulated distributed scheduling
problem.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2017 00:54:48 GMT"
}
] | 1,514,160,000,000 | [
[
"Kumar",
"Saurabh",
""
],
[
"Shah",
"Pararth",
""
],
[
"Hakkani-Tur",
"Dilek",
""
],
[
"Heck",
"Larry",
""
]
] |
1712.08296 | Zilong Ye | James Sunthonlap, Phuoc Nguyen, Zilong Ye | Intelligent Device Discovery in the Internet of Things - Enabling the
Robot Society | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet of Things (IoT) is continuously growing to connect billions of
smart devices anywhere and anytime in an Internet-like structure, which enables
a variety of applications, services and interactions between human and objects.
In the future, the smart devices are supposed to be able to autonomously
discover a target device with desired features and generate a set of entirely
new services and applications that are not supervised or even imagined by human
beings. The pervasiveness of smart devices, as well as the heterogeneity of
their design and functionalities, raise a major concern: How can a smart device
efficiently discover a desired target device? In this paper, we propose a
Social-Aware and Distributed (SAND) scheme that achieves a fast, scalable and
efficient device discovery in the IoT. The proposed SAND scheme adopts a novel
device ranking criteria that measures the device's degree, social relationship
diversity, clustering coefficient and betweenness. Based on the device ranking
criteria, the discovery request can be guided to travel through critical
devices that stand at the major intersections of the network, and thus quickly
reach the desired target device by contacting only a limited number of
intermediate devices. With the help of such an intelligent device discovery as
SAND, the IoT devices, as well as other computing facilities, software and data
on the Internet, can autonomously establish new social connections with each
other as human being do. They can formulate self-organized computing groups to
perform required computing tasks, facilitate a fusion of a variety of computing
service, network service and data to generate novel applications and services,
evolve from the individual aritificial intelligence to the collaborative
intelligence, and eventually enable the birth of a robot society.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2017 03:45:36 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jan 2018 22:40:38 GMT"
}
] | 1,515,542,400,000 | [
[
"Sunthonlap",
"James",
""
],
[
"Nguyen",
"Phuoc",
""
],
[
"Ye",
"Zilong",
""
]
] |
1712.08875 | arXiv Admin | Meng Wang | Predicting Rich Drug-Drug Interactions via Biomedical Knowledge Graphs
and Text Jointly Embedding | This article has been withdrawn by arXiv administrators due to an
unresolvable authorship dispute | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minimizing adverse reactions caused by drug-drug interactions has always been
a momentous research topic in clinical pharmacology. Detecting all possible
interactions through clinical studies before a drug is released to the market
is a demanding task. The power of big data is opening up new approaches to
discover various drug-drug interactions. However, these discoveries contain a
huge amount of noise and provide knowledge bases far from complete and
trustworthy ones to be utilized. Most existing studies focus on predicting
binary drug-drug interactions between drug pairs but ignore other interactions.
In this paper, we propose a novel framework, called PRD, to predict drug-drug
interactions. The framework uses the graph embedding that can overcome data
incompleteness and sparsity issues to achieve multiple DDI label prediction.
First, a large-scale drug knowledge graph is generated from different sources.
Then, the knowledge graph is embedded with comprehensive biomedical text into a
common low dimensional space. Finally, the learned embeddings are used to
efficiently compute rich DDI information through a link prediction process. To
validate the effectiveness of the proposed framework, extensive experiments
were conducted on real-world datasets. The results demonstrate that our model
outperforms several state-of-the-art baseline methods in terms of capability
and accuracy.
| [
{
"version": "v1",
"created": "Sun, 24 Dec 2017 04:43:46 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Feb 2018 01:31:22 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Feb 2018 04:56:57 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Mar 2018 16:34:16 GMT"
}
] | 1,520,899,200,000 | [
[
"Wang",
"Meng",
""
]
] |
1712.09344 | Vahid Behzadan | Vahid Behzadan and Arslan Munir | Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger | arXiv admin note: text overlap with arXiv:1701.04143 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments have established the vulnerability of deep Reinforcement
Learning (RL) to policy manipulation attacks via adversarial perturbations. In
this paper, we investigate the robustness and resilience of deep RL to
training-time and test-time attacks. Through experimental results, we
demonstrate that under noncontiguous training-time attacks, Deep Q-Network
(DQN) agents can recover and adapt to the adversarial conditions by reactively
adjusting the policy. Our results also show that policies learned under
adversarial perturbations are more robust to test-time attacks. Furthermore, we
compare the performance of $\epsilon$-greedy and parameter-space noise
exploration methods in terms of robustness and resilience against adversarial
perturbations.
| [
{
"version": "v1",
"created": "Sat, 23 Dec 2017 23:57:55 GMT"
}
] | 1,514,505,600,000 | [
[
"Behzadan",
"Vahid",
""
],
[
"Munir",
"Arslan",
""
]
] |
1712.10070 | James Foster | James M. Foster and Matt Jones | Reinforcement Learning with Analogical Similarity to Guide Schema
Induction and Attention | 20 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in analogical reasoning suggests that higher-order cognitive
functions such as abstract reasoning, far transfer, and creativity are founded
on recognizing structural similarities among relational systems. Here we
integrate theories of analogy with the computational framework of reinforcement
learning (RL). We propose a psychology theory that is a computational synergy
between analogy and RL, in which analogical comparison provides the RL learning
algorithm with a measure of relational similarity, and RL provides feedback
signals that can drive analogical learning. Simulation results support the
power of this approach.
| [
{
"version": "v1",
"created": "Thu, 28 Dec 2017 22:11:53 GMT"
}
] | 1,514,764,800,000 | [
[
"Foster",
"James M.",
""
],
[
"Jones",
"Matt",
""
]
] |
1712.10179 | Juan Juli\'an Merelo-Guerv\'os Pr. | Juan J. Merelo-Guerv\'os, Antonio Fern\'andez-Ares, Antonio \'Alvarez
Caballero, Pablo Garc\'ia-S\'anchez, Victor Rivas | RedDwarfData: a simplified dataset of StarCraft matches | null | null | null | GeNeura 2017-12-01 | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The game Starcraft is one of the most interesting arenas to test new machine
learning and computational intelligence techniques; however, StarCraft matches
take a long time and creating a good dataset for training can be hard. Besides,
analyzing match logs to extract the main characteristics can also be done in
many different ways to the point that extracting and processing data itself can
take an inordinate amount of time and of course, depending on what you choose,
can bias learning algorithms. In this paper we present a simplified dataset
extracted from the set of matches published by Robinson and Watson, which we
have called RedDwarfData, containing several thousand matches processed to
frames, so that temporal studies can also be undertaken. This dataset is
available from GitHub under a free license. An initial analysis and appraisal
of these matches is also made.
| [
{
"version": "v1",
"created": "Fri, 29 Dec 2017 11:06:16 GMT"
}
] | 1,514,764,800,000 | [
[
"Merelo-Guervós",
"Juan J.",
""
],
[
"Fernández-Ares",
"Antonio",
""
],
[
"Caballero",
"Antonio Álvarez",
""
],
[
"García-Sánchez",
"Pablo",
""
],
[
"Rivas",
"Victor",
""
]
] |
1801.00690 | Yuval Tassa | Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego
de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq,
Timothy Lillicrap, Martin Riedmiller | DeepMind Control Suite | 24 pages, 7 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The DeepMind Control Suite is a set of continuous control tasks with a
standardised structure and interpretable rewards, intended to serve as
performance benchmarks for reinforcement learning agents. The tasks are written
in Python and powered by the MuJoCo physics engine, making them easy to use and
modify. We include benchmarks for several learning algorithms. The Control
Suite is publicly available at https://www.github.com/deepmind/dm_control . A
video summary of all tasks is available at http://youtu.be/rAai4QzcYbs .
| [
{
"version": "v1",
"created": "Tue, 2 Jan 2018 15:48:14 GMT"
}
] | 1,514,937,600,000 | [
[
"Tassa",
"Yuval",
""
],
[
"Doron",
"Yotam",
""
],
[
"Muldal",
"Alistair",
""
],
[
"Erez",
"Tom",
""
],
[
"Li",
"Yazhe",
""
],
[
"Casas",
"Diego de Las",
""
],
[
"Budden",
"David",
""
],
[
"Abdolmaleki",
"Abbas",
""
],
[
"Merel",
"Josh",
""
],
[
"Lefrancq",
"Andrew",
""
],
[
"Lillicrap",
"Timothy",
""
],
[
"Riedmiller",
"Martin",
""
]
] |
1801.00702 | Xinyang Deng | Xinyang Deng and Wen Jiang | A total uncertainty measure for D numbers based on belief intervals | 14 pages, 2 figures. arXiv admin note: text overlap with
arXiv:1711.09186 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a generalization of Dempster-Shafer theory, the theory of D numbers is a
new theoretical framework for uncertainty reasoning. Measuring the uncertainty
of knowledge or information represented by D numbers is an unsolved issue in
that theory. In this paper, inspired by distance based uncertainty measures for
Dempster-Shafer theory, a total uncertainty measure for a D number is proposed
based on its belief intervals. The proposed total uncertainty measure can
simultaneously capture the discord, and non-specificity, and non-exclusiveness
involved in D numbers. And some basic properties of this total uncertainty
measure, including range, monotonicity, generalized set consistency, are also
presented.
| [
{
"version": "v1",
"created": "Mon, 25 Dec 2017 12:51:25 GMT"
}
] | 1,514,937,600,000 | [
[
"Deng",
"Xinyang",
""
],
[
"Jiang",
"Wen",
""
]
] |
1801.01000 | Christopher Schulze | Christopher Schulze, Marcus Schulze | ViZDoom: DRQN with Prioritized Experience Replay, Double-Q Learning, &
Snapshot Ensembling | 9 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ViZDoom is a robust, first-person shooter reinforcement learning environment,
characterized by a significant degree of latent state information. In this
paper, double-Q learning and prioritized experience replay methods are tested
under a certain ViZDoom combat scenario using a competitive deep recurrent
Q-network (DRQN) architecture. In addition, an ensembling technique known as
snapshot ensembling is employed using a specific annealed learning rate to
observe differences in ensembling efficacy under these two methods. Annealed
learning rates are important in general to the training of deep neural network
models, as they shake up the status-quo and counter a model's tending towards
local optima. While both variants show performance exceeding those of built-in
AI agents of the game, the known stabilizing effects of double-Q learning are
illustrated, and priority experience replay is again validated in its
usefulness by showing immediate results early on in agent development, with the
caveat that value overestimation is accelerated in this case. In addition, some
unique behaviors are observed to develop for priority experience replay (PER)
and double-Q (DDQ) variants, and snapshot ensembling of both PER and DDQ proves
a valuable method for improving performance of the ViZDoom Marine.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2018 13:49:08 GMT"
}
] | 1,515,024,000,000 | [
[
"Schulze",
"Christopher",
""
],
[
"Schulze",
"Marcus",
""
]
] |
1801.01422 | Louise Dennis Dr | Louise Dennis and Michael Fisher | Practical Challenges in Explicit Ethical Machine Reasoning | In proceedings International Conference on Artificial Intelligence
and Mathematics, Fort Lauderdale, Florida, FL. 3-5 January, 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine implemented systems for ethical machine reasoning with a view to
identifying the practical challenges (as opposed to philosophical challenges)
posed by the area. We identify a need for complex ethical machine reasoning not
only to be multi-objective, proactive, and scrutable but that it must draw on
heterogeneous evidential reasoning. We also argue that, in many cases, it needs
to operate in real time and be verifiable. We propose a general architecture
involving a declarative ethical arbiter which draws upon multiple evidential
reasoners each responsible for a particular ethical feature of the system's
environment. We claim that this architecture enables some separation of
concerns among the practical challenges that ethical machine reasoning poses.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2018 16:19:33 GMT"
}
] | 1,515,369,600,000 | [
[
"Dennis",
"Louise",
""
],
[
"Fisher",
"Michael",
""
]
] |
1801.01604 | Han Xiao Almighty | Han Xiao | Intelligence Graph | arXiv admin note: substantial text overlap with arXiv:1702.06247 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In fact, there exist three genres of intelligence architectures: logics (e.g.
\textit{Random Forest, A$^*$ Searching}), neurons (e.g. \textit{CNN, LSTM}) and
probabilities (e.g. \textit{Naive Bayes, HMM}), all of which are incompatible
to each other. However, to construct powerful intelligence systems with various
methods, we propose the intelligence graph (short as \textbf{\textit{iGraph}}),
which is composed by both of neural and probabilistic graph, under the
framework of forward-backward propagation. By the paradigm of iGraph, we design
a recommendation model with semantic principle. First, the probabilistic
distributions of categories are generated from the embedding representations of
users/items, in the manner of neurons. Second, the probabilistic graph infers
the distributions of features, in the manner of probabilities. Last, for the
recommendation diversity, we perform an expectation computation then conduct a
logic judgment, in the manner of logics. Experimentally, we beat the
state-of-the-art baselines and verify our conclusions.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2018 01:29:00 GMT"
}
] | 1,515,369,600,000 | [
[
"Xiao",
"Han",
""
]
] |
1801.01705 | Martijn van Otterlo | Martijn van Otterlo | Gatekeeping Algorithms with Human Ethical Bias: The ethics of algorithms
in archives, libraries and society | Submitted (Nov 2017) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the age of algorithms, I focus on the question of how to ensure algorithms
that will take over many of our familiar archival and library tasks, will
behave according to human ethical norms that have evolved over many years. I
start by characterizing physical archives in the context of related
institutions such as libraries and museums. In this setting I analyze how
ethical principles, in particular about access to information, have been
formalized and communicated in the form of ethical codes, or: codes of
conducts. After that I describe two main developments: digitalization, in which
physical aspects of the world are turned into digital data, and
algorithmization, in which intelligent computer programs turn this data into
predictions and decisions. Both affect interactions that were once physical but
now digital. In this new setting I survey and analyze the ethical aspects of
algorithms and how they shape a vision on the future of archivists and
librarians, in the form of algorithmic documentalists, or: codementalists.
Finally I outline a general research strategy, called IntERMEeDIUM, to obtain
algorithms that obey are human ethical values encoded in code of ethics.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2018 10:58:13 GMT"
}
] | 1,515,369,600,000 | [
[
"van Otterlo",
"Martijn",
""
]
] |
1801.01733 | Purushottam Dixit | Purushottam D. Dixit | Entropy production rate as a criterion for inconsistency in decision
theory | To appear in Journal of Statistical Physics | null | 10.1088/1742-5468/aac137 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual and group decisions are complex, often involving choosing an apt
alternative from a multitude of options. Evaluating pairwise comparisons breaks
down such complex decision problems into tractable ones. Pairwise comparison
matrices (PCMs) are regularly used to solve multiple-criteria decision-making
(MCDM) problems, for example, using Saaty's analytic hierarchy process (AHP)
framework. However, there are two significant drawbacks of using PCMs. First,
humans evaluate PCMs in an inconsistent manner. Second, not all entries of a
large PCM can be reliably filled by human decision makers. We address these two
issues by first establishing a novel connection between PCMs and
time-irreversible Markov processes. Specifically, we show that every PCM
induces a family of dissipative maximum path entropy random walks (MERW) over
the set of alternatives. We show that only `consistent' PCMs correspond to
detailed balanced MERWs. We identify the non-equilibrium entropy production in
the induced MERWs as a metric of inconsistency of the underlying PCMs. Notably,
the entropy production satisfies all of the recently laid out criteria for
reasonable consistency indices. We also propose an approach to use incompletely
filled PCMs in AHP. Potential future avenues are discussed as well.
keywords: analytic hierarchy process, markov chains, maximum entropy
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2018 12:06:56 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Apr 2018 14:18:07 GMT"
}
] | 1,528,848,000,000 | [
[
"Dixit",
"Purushottam D.",
""
]
] |
1801.01788 | Karl Schlechta | Karl Schlechta | A Reliability Theory of Truth | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our approach is basically a coherence approach, but we avoid the well-known
pitfalls of coherence theories of truth. Consistency is replaced by
reliability, which expresses support and attack, and, in principle, every
theory (or agent, message) counts. At the same time, we do not require a
priviledged access to "reality". A centerpiece of our approach is that we
attribute reliability also to agents, messages, etc., so an unreliable source
of information will be less important in future. Our ideas can also be extended
to value systems, and even actions, e.g., of animals.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2018 14:51:47 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 16:57:31 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Apr 2018 14:28:01 GMT"
}
] | 1,522,713,600,000 | [
[
"Schlechta",
"Karl",
""
]
] |
1801.01807 | Fabricio de Franca Olivetti | Fabricio Olivetti de Franca | A Greedy Search Tree Heuristic for Symbolic Regression | 30 pages, 7 figures, 3 tables, submitted to Information Science on
12/2016 | null | 10.1016/j.ins.2018.02.040 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symbolic Regression tries to find a mathematical expression that describes
the relationship of a set of explanatory variables to a measured variable. The
main objective is to find a model that minimizes the error and, optionally,
that also minimizes the expression size. A smaller expression can be seen as an
interpretable model considered a reliable decision model. This is often
performed with Genetic Programming which represents their solution as
expression trees. The shortcoming of this algorithm lies on this representation
that defines a rugged search space and contains expressions of any size and
difficulty. These pose as a challenge to find the optimal solution under
computational constraints. This paper introduces a new data structure, called
Interaction-Transformation (IT), that constrains the search space in order to
exclude a region of larger and more complicated expressions. In order to test
this data structure, it was also introduced an heuristic called SymTree. The
obtained results show evidence that SymTree are capable of obtaining the
optimal solution whenever the target function is within the search space of the
IT data structure and competitive results when it is not. Overall, the
algorithm found a good compromise between accuracy and simplicity for all the
generated models.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2018 18:30:38 GMT"
}
] | 1,519,689,600,000 | [
[
"de Franca",
"Fabricio Olivetti",
""
]
] |
1801.01972 | Ling Dong | Zecang Gu and Ling Dong | Distance formulas capable of unifying Euclidian space and probability
space | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For pattern recognition like image recognition, it has become clear that each
machine-learning dictionary data actually became data in probability space
belonging to Euclidean space. However, the distances in the Euclidean space and
the distances in the probability space are separated and ununified when machine
learning is introduced in the pattern recognition. There is still a problem
that it is impossible to directly calculate an accurate matching relation
between the sampling data of the read image and the learned dictionary data. In
this research, we focused on the reason why the distance is changed and the
extent of change when passing through the probability space from the original
Euclidean distance among data belonging to multiple probability spaces
containing Euclidean space. By finding the reason of the cause of the distance
error and finding the formula expressing the error quantitatively, a possible
distance formula to unify Euclidean space and probability space is found. Based
on the results of this research, the relationship between machine-learning
dictionary data and sampling data was clearly understood for pattern
recognition. As a result, the calculation of collation among data and
machine-learning to compete mutually between data are cleared, and complicated
calculations became unnecessary. Finally, using actual pattern recognition
data, experimental demonstration of a possible distance formula to unify
Euclidean space and probability space discovered by this research was carried
out, and the effectiveness of the result was confirmed.
| [
{
"version": "v1",
"created": "Sat, 6 Jan 2018 05:41:08 GMT"
}
] | 1,515,456,000,000 | [
[
"Gu",
"Zecang",
""
],
[
"Dong",
"Ling",
""
]
] |
1801.02193 | Michal \v{C}ertick\'y | Michal \v{S}ustr, Jan Mal\'y, Michal \v{C}ertick\'y | Multi-platform Version of StarCraft: Brood War in a Docker Container:
Technical Report | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a dockerized version of a real-time strategy game StarCraft: Brood
War, commonly used as a domain for AI research, with a pre-installed collection
of AI developement tools supporting all the major types of StarCraft bots. This
provides a convenient way to deploy StarCraft AIs on numerous hosts at once and
across multiple platforms despite limited OS support of StarCraft. In this
technical report, we describe the design of our Docker images and present a few
use cases.
| [
{
"version": "v1",
"created": "Sun, 7 Jan 2018 14:16:59 GMT"
}
] | 1,515,456,000,000 | [
[
"Šustr",
"Michal",
""
],
[
"Malý",
"Jan",
""
],
[
"Čertický",
"Michal",
""
]
] |
1801.02281 | Vatsal Mahajan | Vatsal Mahajan | Winograd Schema - Knowledge Extraction Using Narrative Chains | 4 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Winograd Schema Challenge (WSC) is a test of machine intelligence,
designed to be an improvement on the Turing test. A Winograd Schema consists of
a sentence and a corresponding question. To successfully answer these
questions, one requires the use of commonsense knowledge and reasoning. This
work focuses on extracting common sense knowledge which can be used to generate
answers for the Winograd schema challenge. Common sense knowledge is extracted
based on events (or actions) and their participants; called Event-Based
Conditional Commonsense (ECC). I propose an approach using Narrative Event
Chains [Chambers et al., 2008] to extract ECC knowledge. These are stored in
templates, to be later used for answering the WSC questions. This approach
works well with respect to a subset of WSC tasks.
| [
{
"version": "v1",
"created": "Mon, 8 Jan 2018 00:36:08 GMT"
}
] | 1,515,456,000,000 | [
[
"Mahajan",
"Vatsal",
""
]
] |
1801.02334 | Yunlong Mi | Yunlong Mi, Yong Shi, and Jinhai Li | A generalized concept-cognitive learning: A machine learning viewpoint | 7 pages,3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Concept-cognitive learning (CCL) is a hot topic in recent years, and it has
attracted much attention from the communities of formal concept analysis,
granular computing and cognitive computing. However, the relationship among
cognitive computing (CC), concept-cognitive computing (CCC), CCL and
concept-cognitive learning model (CCLM) is not clearly described. To this end,
we first explain the relationship of CC, CCC, CCL and CCLM. Then, we propose a
generalized concept-cognitive learning (GCCL) from the point of view of machine
learning. Finally, experiments on some data sets are conducted to verify the
feasibility of concept formation and concept-cognitive process of GCCL.
| [
{
"version": "v1",
"created": "Mon, 8 Jan 2018 08:16:57 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Feb 2018 12:49:41 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Dec 2018 02:48:26 GMT"
}
] | 1,545,868,800,000 | [
[
"Mi",
"Yunlong",
""
],
[
"Shi",
"Yong",
""
],
[
"Li",
"Jinhai",
""
]
] |
1801.02852 | Tomasz Grel | Igor Adamski, Robert Adamski, Tomasz Grel, Adam J\k{e}drych, Kamil
Kaczmarek, Henryk Michalewski | Distributed Deep Reinforcement Learning: Learn how to play Atari games
in 21 minutes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a study in Distributed Deep Reinforcement Learning (DDRL) focused
on scalability of a state-of-the-art Deep Reinforcement Learning algorithm
known as Batch Asynchronous Advantage ActorCritic (BA3C). We show that using
the Adam optimization algorithm with a batch size of up to 2048 is a viable
choice for carrying out large scale machine learning computations. This,
combined with careful reexamination of the optimizer's hyperparameters, using
synchronous training on the node level (while keeping the local, single node
part of the algorithm asynchronous) and minimizing the memory footprint of the
model, allowed us to achieve linear scaling for up to 64 CPU nodes. This
corresponds to a training time of 21 minutes on 768 CPU cores, as opposed to 10
hours when using a single node with 24 cores achieved by a baseline single-node
implementation.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2018 09:39:29 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Apr 2018 15:36:09 GMT"
}
] | 1,523,318,400,000 | [
[
"Adamski",
"Igor",
""
],
[
"Adamski",
"Robert",
""
],
[
"Grel",
"Tomasz",
""
],
[
"Jędrych",
"Adam",
""
],
[
"Kaczmarek",
"Kamil",
""
],
[
"Michalewski",
"Henryk",
""
]
] |
1801.03058 | Imon Banerjee | Imon Banerjee, Michael Francis Gensheimer, Douglas J. Wood, Solomon
Henry, Daniel Chang, Daniel L. Rubin | Abstract: Probabilistic Prognostic Estimates of Survival in Metastatic
Cancer Patients | null | AMIA Informatics Conference 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a deep learning model - Probabilistic Prognostic Estimates of
Survival in Metastatic Cancer Patients (PPES-Met) for estimating short-term
life expectancy (3 months) of the patients by analyzing free-text clinical
notes in the electronic medical record, while maintaining the temporal visit
sequence. In a single framework, we integrated semantic data mapping and neural
embedding technique to produce a text processing method that extracts relevant
information from heterogeneous types of clinical notes in an unsupervised
manner, and we designed a recurrent neural network to model the temporal
dependency of the patient visits. The model was trained on a large dataset
(10,293 patients) and validated on a separated dataset (1818 patients). Our
method achieved an area under the ROC curve (AUC) of 0.89. To provide
explain-ability, we developed an interactive graphical tool that may improve
physician understanding of the basis for the model's predictions. The high
accuracy and explain-ability of the PPES-Met model may enable our model to be
used as a decision support tool to personalize metastatic cancer treatment and
provide valuable assistance to the physicians.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2018 17:51:12 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jul 2018 23:56:14 GMT"
}
] | 1,531,785,600,000 | [
[
"Banerjee",
"Imon",
""
],
[
"Gensheimer",
"Michael Francis",
""
],
[
"Wood",
"Douglas J.",
""
],
[
"Henry",
"Solomon",
""
],
[
"Chang",
"Daniel",
""
],
[
"Rubin",
"Daniel L.",
""
]
] |
1801.03138 | Ben Parr | Ben Parr | Deep In-GPU Experience Replay | Source code (uses TensorFlow):
https://github.com/bparr/gpu-experience-replay | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experience replay allows a reinforcement learning agent to train on samples
from a large amount of the most recent experiences. A simple in-RAM experience
replay stores these most recent experiences in a list in RAM, and then copies
sampled batches to the GPU for training. I moved this list to the GPU, thus
creating an in-GPU experience replay, and a training step that no longer has
inputs copied from the CPU. I trained an agent to play Super Smash Bros. Melee,
using internal game memory values as inputs and outputting controller button
presses. A single state in Melee contains 27 floats, so the full experience
replay fits on a single GPU. For a batch size of 128, the in-GPU experience
replay trained twice as fast as the in-RAM experience replay. As far as I know,
this is the first in-GPU implementation of experience replay. Finally, I note a
few ideas for fitting the experience replay inside the GPU when the environment
state requires more memory.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2018 20:52:33 GMT"
}
] | 1,515,628,800,000 | [
[
"Parr",
"Ben",
""
]
] |
1801.03160 | Felix Lindner | Felix Lindner and Martin Mose Bentzen | A Formalization of Kant's Second Formulation of the Categorical
Imperative | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a formalization and computational implementation of the second
formulation of Kant's categorical imperative. This ethical principle requires
an agent to never treat someone merely as a means but always also as an end.
Here we interpret this principle in terms of how persons are causally affected
by actions. We introduce Kantian causal agency models in which moral patients,
actions, goals, and causal influence are represented, and we show how to
formalize several readings of Kant's categorical imperative that correspond to
Kant's concept of strict and wide duties towards oneself and others. Stricter
versions handle cases where an action directly causally affects oneself or
others, whereas the wide version maximizes the number of persons being treated
as an end. We discuss limitations of our formalization by pointing to one of
Kant's cases that the machinery cannot handle in a satisfying way.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2018 22:23:21 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Mar 2018 12:23:33 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Jul 2019 13:22:27 GMT"
}
] | 1,562,889,600,000 | [
[
"Lindner",
"Felix",
""
],
[
"Bentzen",
"Martin Mose",
""
]
] |
1801.03168 | Justin Gottschlich | Tae Jun Lee, Justin Gottschlich, Nesime Tatbul, Eric Metcalf, Stan
Zdonik | Greenhouse: A Zero-Positive Machine Learning System for Time-Series
Anomaly Detection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short paper describes our ongoing research on Greenhouse - a
zero-positive machine learning system for time-series anomaly detection.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2018 22:44:21 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Jan 2018 22:05:31 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Feb 2018 22:32:31 GMT"
}
] | 1,518,480,000,000 | [
[
"Lee",
"Tae Jun",
""
],
[
"Gottschlich",
"Justin",
""
],
[
"Tatbul",
"Nesime",
""
],
[
"Metcalf",
"Eric",
""
],
[
"Zdonik",
"Stan",
""
]
] |
1801.03175 | Justin Gottschlich | Tae Jun Lee, Justin Gottschlich, Nesime Tatbul, Eric Metcalf, Stan
Zdonik | Precision and Recall for Range-Based Anomaly Detection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classical anomaly detection is principally concerned with point-based
anomalies, anomalies that occur at a single data point. In this paper, we
present a new mathematical model to express range-based anomalies, anomalies
that occur over a range (or period) of time.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2018 23:01:07 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Jan 2018 22:10:35 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Feb 2018 22:20:17 GMT"
}
] | 1,518,480,000,000 | [
[
"Lee",
"Tae Jun",
""
],
[
"Gottschlich",
"Justin",
""
],
[
"Tatbul",
"Nesime",
""
],
[
"Metcalf",
"Eric",
""
],
[
"Zdonik",
"Stan",
""
]
] |
1801.03331 | Craig Innes | Craig Innes, Alex Lascarides, Stefano V Albrecht, Subramanian
Ramamoorthy, Benjamin Rosman | Reasoning about Unforeseen Possibilities During Policy Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Methods for learning optimal policies in autonomous agents often assume that
the way the domain is conceptualised---its possible states and actions and
their causal structure---is known in advance and does not change during
learning. This is an unrealistic assumption in many scenarios, because new
evidence can reveal important information about what is possible, possibilities
that the agent was not aware existed prior to learning. We present a model of
an agent which both discovers and learns to exploit unforeseen possibilities
using two sources of evidence: direct interaction with the world and
communication with a domain expert. We use a combination of probabilistic and
symbolic reasoning to estimate all components of the decision problem,
including its set of random variables and their causal dependencies. Agent
simulations show that the agent converges on optimal polices even when it
starts out unaware of factors that are critical to behaving optimally.
| [
{
"version": "v1",
"created": "Wed, 10 Jan 2018 12:16:43 GMT"
}
] | 1,515,628,800,000 | [
[
"Innes",
"Craig",
""
],
[
"Lascarides",
"Alex",
""
],
[
"Albrecht",
"Stefano V",
""
],
[
"Ramamoorthy",
"Subramanian",
""
],
[
"Rosman",
"Benjamin",
""
]
] |
1801.03354 | Blai Bonet | Wilmer Bandres, Blai Bonet, Hector Geffner | Planning with Pixels in (Almost) Real Time | Published at AAAI-18 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, width-based planning methods have been shown to yield
state-of-the-art results in the Atari 2600 video games. For this, the states
were associated with the (RAM) memory states of the simulator. In this work, we
consider the same planning problem but using the screen instead. By using the
same visual inputs, the planning results can be compared with those of humans
and learning methods. We show that the planning approach, out of the box and
without training, results in scores that compare well with those obtained by
humans and learning methods, and moreover, by developing an episodic, rollout
version of the IW(k) algorithm, we show that such scores can be obtained in
almost real time.
| [
{
"version": "v1",
"created": "Wed, 10 Jan 2018 12:54:00 GMT"
}
] | 1,515,628,800,000 | [
[
"Bandres",
"Wilmer",
""
],
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
] |
1801.03355 | L\'aszl\'o Csat\'o | L\'aszl\'o Csat\'o | Axiomatizations of inconsistency indices for triads | 12 pages | Annals of Operations Research, 280(1-2): 99-110, 2019 | 10.1007/s10479-019-03312-0 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise comparison matrices often exhibit inconsistency, therefore many
indices have been suggested to measure their deviation from a consistent
matrix. A set of axioms has been proposed recently that is required to be
satisfied by any reasonable inconsistency index. This set seems to be not
exhaustive as illustrated by an example, hence it is expanded by adding two new
properties. All axioms are considered on the set of triads, pairwise comparison
matrices with three alternatives, which is the simplest case of inconsistency.
We choose the logically independent properties and prove that they
characterize, that is, uniquely determine the inconsistency ranking induced by
most inconsistency indices that coincide on this restricted domain. Since
triads play a prominent role in a number of inconsistency indices, our results
can also contribute to the measurement of inconsistency for pairwise comparison
matrices with more than three alternatives.
| [
{
"version": "v1",
"created": "Wed, 10 Jan 2018 12:56:48 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Mar 2019 08:16:29 GMT"
}
] | 1,590,624,000,000 | [
[
"Csató",
"László",
""
]
] |
1801.03526 | Daniel Abolafia | Daniel A. Abolafia, Mohammad Norouzi, Jonathan Shen, Rui Zhao, Quoc V.
Le | Neural Program Synthesis with Priority Queue Training | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the task of program synthesis in the presence of a reward
function over the output of programs, where the goal is to find programs with
maximal rewards. We employ an iterative optimization scheme, where we train an
RNN on a dataset of K best programs from a priority queue of the generated
programs so far. Then, we synthesize new programs and add them to the priority
queue by sampling from the RNN. We benchmark our algorithm, called priority
queue training (or PQT), against genetic algorithm and reinforcement learning
baselines on a simple but expressive Turing complete programming language
called BF. Our experimental results show that our simple PQT algorithm
significantly outperforms the baselines. By adding a program length penalty to
the reward function, we are able to synthesize short, human readable programs.
| [
{
"version": "v1",
"created": "Wed, 10 Jan 2018 19:35:25 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Mar 2018 23:40:46 GMT"
}
] | 1,522,195,200,000 | [
[
"Abolafia",
"Daniel A.",
""
],
[
"Norouzi",
"Mohammad",
""
],
[
"Shen",
"Jonathan",
""
],
[
"Zhao",
"Rui",
""
],
[
"Le",
"Quoc V.",
""
]
] |
1801.03737 | Stuart Armstrong | Stuart Armstrong | Counterfactual equivalence for POMDPs, and underlying deterministic
environments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially Observable Markov Decision Processes (POMDPs) are rich environments
often used in machine learning. But the issue of information and causal
structures in POMDPs has been relatively little studied. This paper presents
the concepts of equivalent and counterfactually equivalent POMDPs, where agents
cannot distinguish which environment they are in though any observations and
actions. It shows that any POMDP is counterfactually equivalent, for any finite
number of turns, to a deterministic POMDP with all uncertainty concentrated
into the initial state. This allows a better understanding of POMDP
uncertainty, information, and learning.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2018 12:40:59 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Jan 2018 12:56:00 GMT"
}
] | 1,516,060,800,000 | [
[
"Armstrong",
"Stuart",
""
]
] |
1801.03929 | Lucas Bechberger | Lucas Bechberger and Kai-Uwe K\"uhnberger | Formalized Conceptual Spaces with a Geometric Representation of
Correlations | Published in the edited volume "Conceptual Spaces: Elaborations and
Applications". arXiv admin note: text overlap with arXiv:1706.06366,
arXiv:1707.02292, arXiv:1707.05165 | null | 10.1007/978-3-030-12800-5_3 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The highly influential framework of conceptual spaces provides a geometric
way of representing knowledge. Instances are represented by points in a
similarity space and concepts are represented by convex regions in this space.
After pointing out a problem with the convexity requirement, we propose a
formalization of conceptual spaces based on fuzzy star-shaped sets. Our
formalization uses a parametric definition of concepts and extends the original
framework by adding means to represent correlations between different domains
in a geometric way. Moreover, we define various operations for our
formalization, both for creating new concepts from old ones and for measuring
relations between concepts. We present an illustrative toy-example and sketch a
research project on concept formation that is based on both our formalization
and its implementation.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2018 08:37:58 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jun 2019 06:35:31 GMT"
}
] | 1,562,025,600,000 | [
[
"Bechberger",
"Lucas",
""
],
[
"Kühnberger",
"Kai-Uwe",
""
]
] |
1801.03954 | Glen Berseth | Glen Berseth and Michiel van de Panne | Model-Based Action Exploration for Learning Dynamic Motion Skills | 7 pages, 7 figures, conference paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep reinforcement learning has achieved great strides in solving challenging
motion control tasks. Recently, there has been significant work on methods for
exploiting the data gathered during training, but there has been less work on
how to best generate the data to learn from. For continuous action domains, the
most common method for generating exploratory actions involves sampling from a
Gaussian distribution centred around the mean action output by a policy.
Although these methods can be quite capable, they do not scale well with the
dimensionality of the action space, and can be dangerous to apply on hardware.
We consider learning a forward dynamics model to predict the result,
($x_{t+1}$), of taking a particular action, ($u$), given a specific observation
of the state, ($x_{t}$). With this model we perform internal look-ahead
predictions of outcomes and seek actions we believe have a reasonable chance of
success. This method alters the exploratory action space, thereby increasing
learning speed and enables higher quality solutions to difficult problems, such
as robotic locomotion and juggling.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2018 19:05:38 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Apr 2018 03:56:02 GMT"
}
] | 1,523,577,600,000 | [
[
"Berseth",
"Glen",
""
],
[
"van de Panne",
"Michiel",
""
]
] |
1801.04170 | Andrey Chistyakov | Andrey Chistyakov | Multilayered Model of Speech | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human speech is the most important part of General Artificial Intelligence
and subject of much research. The hypothesis proposed in this article provides
explanation of difficulties that modern science tackles in the field of human
brain simulation. The hypothesis is based on the author's conviction that the
brain of any given person has different ability to process and store
information. Therefore, the approaches that are currently used to create
General Artificial Intelligence have to be altered.
| [
{
"version": "v1",
"created": "Mon, 8 Jan 2018 21:11:54 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Feb 2020 21:09:21 GMT"
}
] | 1,581,465,600,000 | [
[
"Chistyakov",
"Andrey",
""
]
] |
1801.04345 | Nathalia Moraes Do Nascimento | Nathalia Moraes do Nascimento and Carlos Jose Pereira de Lucena | Engineering Cooperative Smart Things based on Embodied Cognition | IEEE 2017 NASA/ESA Conference on Adaptive Hardware and Systems (AHS) | null | 10.1109/AHS.2017.8046366 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of the Internet of Things (IoT) is to transform any thing around us,
such as a trash can or a street light, into a smart thing. A smart thing has
the ability of sensing, processing, communicating and/or actuating. In order to
achieve the goal of a smart IoT application, such as minimizing waste
transportation costs or reducing energy consumption, the smart things in the
application scenario must cooperate with each other without a centralized
control. Inspired by known approaches to design swarm of cooperative and
autonomous robots, we modeled our smart things based on the embodied cognition
concept. Each smart thing is a physical agent with a body composed of a
microcontroller, sensors and actuators, and a brain that is represented by an
artificial neural network. This type of agent is commonly called an embodied
agent. The behavior of these embodied agents is autonomously configured through
an evolutionary algorithm that is triggered according to the application
performance. To illustrate, we have designed three homogeneous prototypes for
smart street lights based on an evolved network. This application has shown
that the proposed approach results in a feasible way of modeling decentralized
smart things with self-developed and cooperative capabilities.
| [
{
"version": "v1",
"created": "Fri, 12 Jan 2018 22:36:34 GMT"
}
] | 1,516,060,800,000 | [
[
"Nascimento",
"Nathalia Moraes do",
""
],
[
"de Lucena",
"Carlos Jose Pereira",
""
]
] |
1801.04346 | Richard Kim | Richard Kim, Max Kleiman-Weiner, Andres Abeliuk, Edmond Awad, Sohan
Dsouza, Josh Tenenbaum, Iyad Rahwan | A Computational Model of Commonsense Moral Decision Making | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new computational model of moral decision making, drawing on a
recent theory of commonsense moral learning via social dynamics. Our model
describes moral dilemmas as a utility function that computes trade-offs in
values over abstract moral dimensions, which provide interpretable parameter
values when implemented in machine-led ethical decision-making. Moreover,
characterizing the social structures of individuals and groups as a
hierarchical Bayesian model, we show that a useful description of an
individual's moral values - as well as a group's shared values - can be
inferred from a limited amount of observed data. Finally, we apply and evaluate
our approach to data from the Moral Machine, a web application that collects
human judgments on moral dilemmas involving autonomous vehicles.
| [
{
"version": "v1",
"created": "Fri, 12 Jan 2018 22:47:22 GMT"
}
] | 1,516,060,800,000 | [
[
"Kim",
"Richard",
""
],
[
"Kleiman-Weiner",
"Max",
""
],
[
"Abeliuk",
"Andres",
""
],
[
"Awad",
"Edmond",
""
],
[
"Dsouza",
"Sohan",
""
],
[
"Tenenbaum",
"Josh",
""
],
[
"Rahwan",
"Iyad",
""
]
] |
1801.04622 | Vatsal Mahajan | Vatsal Mahajan | Top k Memory Candidates in Memory Networks for Common Sense Reasoning | 3 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Successful completion of reasoning task requires the agent to have relevant
prior knowledge or some given context of the world dynamics. Usually, the
information provided to the system for a reasoning task is just the query or
some supporting story, which is often not enough for common reasoning tasks.
The goal here is that, if the information provided along the question is not
sufficient to correctly answer the question, the model should choose k most
relevant documents that can aid its inference process. In this work, the model
dynamically selects top k most relevant memory candidates that can be used to
successfully solve reasoning tasks. Experiments were conducted on a subset of
Winograd Schema Challenge (WSC) problems to show that the proposed model has
the potential for commonsense reasoning. The WSC is a test of machine
intelligence, designed to be an improvement on the Turing test.
| [
{
"version": "v1",
"created": "Sun, 14 Jan 2018 23:43:57 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2019 09:47:31 GMT"
}
] | 1,574,035,200,000 | [
[
"Mahajan",
"Vatsal",
""
]
] |
1801.05462 | Arend Hintze | Jory Schossau, Larissa Albantakis, Arend Hintze | The Role of Conditional Independence in the Evolution of Intelligent
Systems | Original Abstract submitted to the GECCO conference 2017 Berlin | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Systems are typically made from simple components regardless of their
complexity. While the function of each part is easily understood, higher order
functions are emergent properties and are notoriously difficult to explain. In
networked systems, both digital and biological, each component receives inputs,
performs a simple computation, and creates an output. When these components
have multiple outputs, we intuitively assume that the outputs are causally
dependent on the inputs but are themselves independent of each other given the
state of their shared input. However, this intuition can be violated for
components with probabilistic logic, as these typically cannot be decomposed
into separate logic gates with one output each. This violation of conditional
independence on the past system state is equivalent to instantaneous
interaction --- the idea is that some information between the outputs is not
coming from the inputs and thus must have been created instantaneously. Here we
compare evolved artificial neural systems with and without instantaneous
interaction across several task environments. We show that systems without
instantaneous interactions evolve faster, to higher final levels of
performance, and require fewer logic components to create a densely connected
cognitive machinery.
| [
{
"version": "v1",
"created": "Tue, 16 Jan 2018 19:43:13 GMT"
}
] | 1,516,233,600,000 | [
[
"Schossau",
"Jory",
""
],
[
"Albantakis",
"Larissa",
""
],
[
"Hintze",
"Arend",
""
]
] |
1801.05644 | Olivier Cailloux | Olivier Cailloux and Yves Meinard | A formal framework for deliberated judgment | This is the postprint version of the article published in Theory and
Decision. The text is identical, except for minor wording modifications | null | 10.1007/s11238-019-09722-7 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While the philosophical literature has extensively studied how decisions
relate to arguments, reasons and justifications, decision theory almost
entirely ignores the latter notions and rather focuses on preference and
belief. In this article, we argue that decision theory can largely benefit from
explicitly taking into account the stance that decision-makers take towards
arguments and counter-arguments. To that end, we elaborate a formal framework
aiming to integrate the role of arguments and argumentation in decision theory
and decision aid. We start from a decision situation, where an individual
requests decision support. In this context, we formally define, as a
commendable basis for decision-aid, this individual's deliberated judgment,
popularized by Rawls. We explain how models of deliberated judgment can be
validated empirically. We then identify conditions upon which the existence of
a valid model can be taken for granted, and analyze how these conditions can be
relaxed. We then explore the significance of our proposed framework for
decision aiding practice. We argue that our concept of deliberated judgment
owes its normative credentials both to its normative foundations (the idea of
rationality based on arguments) and to its reference to empirical reality (the
stance that real, empirical individuals hold towards arguments and
counter-arguments, on due reflection). We then highlight that our framework
opens promising avenues for future research involving both philosophical and
decision theoretic approaches, as well as empirical implementations.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2018 12:53:13 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Nov 2019 14:39:06 GMT"
}
] | 1,573,430,400,000 | [
[
"Cailloux",
"Olivier",
""
],
[
"Meinard",
"Yves",
""
]
] |
1801.05667 | Gary Marcus | Gary Marcus | Innateness, AlphaZero, and Artificial Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concept of innateness is rarely discussed in the context of artificial
intelligence. When it is discussed, or hinted at, it is often the context of
trying to reduce the amount of innate machinery in a given system. In this
paper, I consider as a test case a recent series of papers by Silver et al
(Silver et al., 2017a) on AlphaGo and its successors that have been presented
as an argument that a "even in the most challenging of domains: it is possible
to train to superhuman level, without human examples or guidance", "starting
tabula rasa."
I argue that these claims are overstated, for multiple reasons. I close by
arguing that artificial intelligence needs greater attention to innateness, and
I point to some proposals about what that innateness might look like.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2018 14:05:21 GMT"
}
] | 1,516,233,600,000 | [
[
"Marcus",
"Gary",
""
]
] |
1801.06689 | Alp M\"uyesser | Necati Alp Muyesser and Kyle Dunovan and Timothy Verstynen | Learning model-based strategies in simple environments with hierarchical
q-networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Recent advances in deep learning have allowed artificial agents to rival
human-level performance on a wide range of complex tasks; however, the ability
of these networks to learn generalizable strategies remains a pressing
challenge. This critical limitation is due in part to two factors: the opaque
information representation in deep neural networks and the complexity of the
task environments in which they are typically deployed. Here we propose a novel
Hierarchical Q-Network (HQN) motivated by theories of the hierarchical
organization of the human prefrontal cortex, that attempts to identify lower
dimensional patterns in the value landscape that can be exploited to construct
an internal model of rules in simple environments. We draw on combinatorial
games, where there exists a single optimal strategy for winning that
generalizes across other features of the game, to probe the strategy
generalization of the HQN and other reinforcement learning (RL) agents using
variations of Wythoff's game. Traditional RL approaches failed to reach
satisfactory performance on variants of Wythoff's Game; however, the HQN
learned heuristic-like strategies that generalized across changes in board
configuration. More importantly, the HQN allowed for transparent inspection of
the agent's internal model of the game following training. Our results show how
a biologically inspired hierarchical learner can facilitate learning abstract
rules to promote robust and flexible action policies in simplified training
environments with clearly delineated optimal strategies.
| [
{
"version": "v1",
"created": "Sat, 20 Jan 2018 15:31:35 GMT"
}
] | 1,516,665,600,000 | [
[
"Muyesser",
"Necati Alp",
""
],
[
"Dunovan",
"Kyle",
""
],
[
"Verstynen",
"Timothy",
""
]
] |
1801.07161 | Laura Giordano | Laura Giordano and Valentina Gliozzi | Reasoning about multiple aspects in DLs: Semantics and Closure
Construction | arXiv admin note: text overlap with arXiv:1604.00301 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting from the observation that rational closure has the undesirable
property of being an "all or nothing" mechanism, we here propose a
multipreferential semantics, which enriches the preferential semantics
underlying rational closure in order to separately deal with the inheritance of
different properties in an ontology with exceptions. We provide a
multipreference closure mechanism which is sound with respect to the
multipreference semantics.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2018 20:50:48 GMT"
}
] | 1,516,665,600,000 | [
[
"Giordano",
"Laura",
""
],
[
"Gliozzi",
"Valentina",
""
]
] |
1801.07357 | Yoav Artzi | Claudia Yan, Dipendra Misra, Andrew Bennnett, Aaron Walsman, Yonatan
Bisk and Yoav Artzi | CHALET: Cornell House Agent Learning Environment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present CHALET, a 3D house simulator with support for navigation and
manipulation. CHALET includes 58 rooms and 10 house configuration, and allows
to easily create new house and room layouts. CHALET supports a range of common
household activities, including moving objects, toggling appliances, and
placing objects inside closeable containers. The environment and actions
available are designed to create a challenging domain to train and evaluate
autonomous agents, including for tasks that combine language, vision, and
planning in a dynamic environment.
| [
{
"version": "v1",
"created": "Tue, 23 Jan 2018 00:22:25 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Sep 2019 21:13:22 GMT"
}
] | 1,568,764,800,000 | [
[
"Yan",
"Claudia",
""
],
[
"Misra",
"Dipendra",
""
],
[
"Bennnett",
"Andrew",
""
],
[
"Walsman",
"Aaron",
""
],
[
"Bisk",
"Yonatan",
""
],
[
"Artzi",
"Yoav",
""
]
] |
1801.07411 | I-Chen Wu | Wen-Jie Tseng, Jr-Chang Chen, I-Chen Wu, Tinghan Wei | Comparison Training for Computer Chinese Chess | Submitted to IEEE Transaction on Games | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the application of comparison training (CT) for
automatic feature weight tuning, with the final objective of improving the
evaluation functions used in Chinese chess programs. First, we propose an
n-tuple network to extract features, since n-tuple networks require very little
expert knowledge through its large numbers of features, while simulta-neously
allowing easy access. Second, we propose a novel evalua-tion method that
incorporates tapered eval into CT. Experiments show that with the same features
and the same Chinese chess program, the automatically tuned comparison training
feature weights achieved a win rate of 86.58% against the weights that were
hand-tuned. The above trained version was then improved by adding additional
features, most importantly n-tuple features. This improved version achieved a
win rate of 81.65% against the trained version without additional features.
| [
{
"version": "v1",
"created": "Tue, 23 Jan 2018 07:09:26 GMT"
}
] | 1,516,752,000,000 | [
[
"Tseng",
"Wen-Jie",
""
],
[
"Chen",
"Jr-Chang",
""
],
[
"Wu",
"I-Chen",
""
],
[
"Wei",
"Tinghan",
""
]
] |
1801.07440 | Ildefons Magrans de Abril | Ildefons Magrans de Abril and Ryota Kanai | Curiosity-driven reinforcement learning with homeostatic regulation | Presented at the NIPS 2017 Workshop: Cognitively Informed Artificial
Intelligence: Insights From Natural Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a curiosity reward based on information theory principles and
consistent with the animal instinct to maintain certain critical parameters
within a bounded range. Our experimental validation shows the added value of
the additional homeostatic drive to enhance the overall information gain of a
reinforcement learning agent interacting with a complex environment using
continuous actions. Our method builds upon two ideas: i) To take advantage of a
new Bellman-like equation of information gain and ii) to simplify the
computation of the local rewards by avoiding the approximation of complex
distributions over continuous states and actions.
| [
{
"version": "v1",
"created": "Tue, 23 Jan 2018 08:52:22 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Feb 2018 02:27:12 GMT"
}
] | 1,518,048,000,000 | [
[
"de Abril",
"Ildefons Magrans",
""
],
[
"Kanai",
"Ryota",
""
]
] |
1801.08175 | Colm V. Gallagher | Colm V. Gallagher, Kevin Leahy, Peter O'Donovan, Ken Bruton, Dominic
T.J. O'Sullivan | Development and application of a machine learning supported methodology
for measurement and verification (M&V) 2.0 | 17 pages. Pre-print submitted to Energy and Buildings. This
manuscript version is made available under the CC-BY-NC-ND 4.0 licence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The foundations of all methodologies for the measurement and verification
(M&V) of energy savings are based on the same five key principles: accuracy,
completeness, conservatism, consistency and transparency. The most widely
accepted methodologies tend to generalise M&V so as to ensure applicability
across the spectrum of energy conservation measures (ECM's). These do not
provide a rigid calculation procedure to follow. This paper aims to bridge the
gap between high-level methodologies and the practical application of modelling
algorithms, with a focus on the industrial buildings sector. This is achieved
with the development of a novel, machine learning supported methodology for M&V
2.0 which enables accurate quantification of savings.
A novel and computationally efficient feature selection algorithm and
powerful machine learning regression algorithms are employed to maximise the
effectiveness of available data. The baseline period energy consumption is
modelled using artificial neural networks, support vector machines, k-nearest
neighbours and multiple ordinary least squares regression. Improved knowledge
discovery and an expanded boundary of analysis allow more complex energy
systems be analysed, thus increasing the applicability of M&V. A case study in
a large biomedical manufacturing facility is used to demonstrate the
methodology's ability to accurately quantify the savings under real-world
conditions. The ECM was found to result in 604,527 kWh of energy savings with
57% uncertainty at a confidence interval of 68%. 20 baseline energy models are
developed using an exhaustive approach with the optimal model being used to
quantify savings. The range of savings estimated with each model are presented
and the acceptability of uncertainty is reviewed. The case study demonstrates
the ability of the methodology to perform M&V to an acceptable standard in
challenging circumstances.
| [
{
"version": "v1",
"created": "Wed, 24 Jan 2018 20:16:26 GMT"
}
] | 1,516,924,800,000 | [
[
"Gallagher",
"Colm V.",
""
],
[
"Leahy",
"Kevin",
""
],
[
"O'Donovan",
"Peter",
""
],
[
"Bruton",
"Ken",
""
],
[
"O'Sullivan",
"Dominic T. J.",
""
]
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.