Dataset schema (field, type, observed range):
  id              string  9 to 10 chars
  submitter       string  5 to 47 chars
  authors         string  5 to 1.72k chars
  title           string  11 to 234 chars
  comments        string  1 to 491 chars
  journal-ref     string  4 to 396 chars
  doi             string  13 to 97 chars
  report-no       string  4 to 138 chars
  categories      string  1 distinct value
  license         string  9 distinct values
  abstract        string  29 to 3.66k chars
  versions        list    1 to 21 entries
  update_date     int64   1,180B to 1,718B (epoch milliseconds)
  authors_parsed  list of sequences, 1 to 98 entries
1704.04775
He Jiang
He Jiang, Jifeng Xuan, Yan Hu
Approximating the Backbone in the Weighted Maximum Satisfiability Problem
14 pages, 1 figure, Proceedings of Advances in Knowledge Discovery and Data Mining 2008 (PAKDD 2008), 2008
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The weighted Maximum Satisfiability problem (weighted MAX-SAT) is an NP-hard problem with numerous applications arising in artificial intelligence. As an efficient tool for heuristic design, the backbone has been applied to heuristics for many NP-hard problems. In this paper, we investigated the computational complexity of retrieving the backbone in weighted MAX-SAT and developed a new algorithm for solving this problem. We showed that it is intractable to retrieve the full backbone under the assumption that P ≠ NP, and that it is intractable to retrieve a fixed fraction of the backbone as well. We then presented a backbone guided local search (BGLS) with a Walksat operator for weighted MAX-SAT. BGLS consists of two phases: the first phase samples backbone information from local optima, and the second phase conducts local search under the guidance of the backbone. Extensive experimental results on the benchmark showed that BGLS outperforms existing heuristics in both solution quality and runtime.
[ { "version": "v1", "created": "Sun, 16 Apr 2017 13:23:14 GMT" } ]
1,492,473,600,000
[ [ "Jiang", "He", "" ], [ "Xuan", "Jifeng", "" ], [ "Hu", "Yan", "" ] ]
1704.04912
Manuel Mazzara
Marochko Vladimir, Leonard Johard, Manuel Mazzara
Pseudorehearsal in actor-critic agents
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Catastrophic forgetting has a serious impact in reinforcement learning, as the data distribution is generally sparse and non-stationary over time. The purpose of this study is to investigate whether pseudorehearsal can increase performance of an actor-critic agent with neural-network based policy selection and function approximation in a pole balancing task and compare different pseudorehearsal approaches. We expect that pseudorehearsal assists learning even in such very simple problems, given proper initialization of the rehearsal parameters.
[ { "version": "v1", "created": "Mon, 17 Apr 2017 09:27:52 GMT" } ]
1,492,473,600,000
[ [ "Vladimir", "Marochko", "" ], [ "Johard", "Leonard", "" ], [ "Mazzara", "Manuel", "" ] ]
1704.04977
Marco Cusumano-Towner
Marco F. Cusumano-Towner, Alexey Radul, David Wingate, Vikash K. Mansinghka
Probabilistic programs for inferring the goals of autonomous agents
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent systems sometimes need to infer the probable goals of people, cars, and robots, based on partial observations of their motion. This paper introduces a class of probabilistic programs for formulating and solving these problems. The formulation uses randomized path planning algorithms as the basis for probabilistic models of the process by which autonomous agents plan to achieve their goals. Because these path planning algorithms do not have tractable likelihood functions, new inference algorithms are needed. This paper proposes two Monte Carlo techniques for these "likelihood-free" models, one of which can use likelihood estimates from neural networks to accelerate inference. The paper demonstrates efficacy on three simple examples, each using under 50 lines of probabilistic code.
[ { "version": "v1", "created": "Mon, 17 Apr 2017 14:34:02 GMT" }, { "version": "v2", "created": "Tue, 18 Apr 2017 14:40:03 GMT" } ]
1,492,560,000,000
[ [ "Cusumano-Towner", "Marco F.", "" ], [ "Radul", "Alexey", "" ], [ "Wingate", "David", "" ], [ "Mansinghka", "Vikash K.", "" ] ]
1704.05325
Pierre Parrend
Fabio Guigou (ICube), Pierre Collet (ICube, UNISTRA), Pierre Parrend (ICube)
Anomaly detection and motif discovery in symbolic representations of time series
null
null
10.13140/RG.2.2.20158.69447
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of the Big Data hype and the consistent collection of event logs and real-time data from sensors, monitoring software and machine configuration has generated a huge amount of time-varying data in almost every sector of industry. Rule-based processing of such data has ceased to be relevant in many scenarios where anomaly detection and pattern mining have to be entirely accomplished by the machine. Since the early 2000s, the de facto standard for representing time series has been the Symbolic Aggregate approXimation (SAX). In this document, we present a few algorithms using this representation for anomaly detection and motif discovery, also known as pattern mining, in such data. We propose a benchmark of anomaly detection algorithms using data from Cloud monitoring software.
[ { "version": "v1", "created": "Tue, 18 Apr 2017 13:19:50 GMT" } ]
1,492,560,000,000
[ [ "Guigou", "Fabio", "", "ICube" ], [ "Collet", "Pierre", "", "ICube, UNISTRA" ], [ "Parrend", "Pierre", "", "ICube" ] ]
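As a rough illustration of the SAX representation named in the abstract above (a minimal sketch of the standard transform, not the authors' algorithms; the alphabet size of 4 and the segment count are illustrative choices):

```python
import math

# Standard-normal quartile breakpoints for an alphabet of size 4,
# chosen so each symbol is equiprobable under a Gaussian assumption.
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def sax(series, n_segments, alphabet="abcd"):
    """Convert a numeric series to a SAX word: z-normalise, reduce with
    Piecewise Aggregate Approximation (PAA), then discretise each segment
    mean against the equiprobable Gaussian breakpoints."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n) or 1.0
    z = [(x - mean) / std for x in series]
    word = []
    for i in range(n_segments):
        lo, hi = i * n // n_segments, (i + 1) * n // n_segments
        seg_mean = sum(z[lo:hi]) / (hi - lo)
        symbol = sum(seg_mean > b for b in BREAKPOINTS)  # index into alphabet
        word.append(alphabet[symbol])
    return "".join(word)
```

For example, a low plateau followed by a high plateau, reduced to two segments, maps to the word "ad"; anomaly detection and motif discovery then operate on such discrete words instead of raw values.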
1704.05392
Dmitry Demidov
Galina Rybina, Alexey Mozgachev, Dmitry Demidov
Synergy of all-purpose static solver and temporal reasoning tools in dynamic integrated expert systems
8 pages, 3 figures
"Informatsionno-izmeritelnye i upravlyayushchie sistemy" (Information-measuring and Control Systems) no.8, vol.12, 2014. pp 27-33. ISSN 2070-0814
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper discusses scientific and technological problems of dynamic integrated expert systems development. Extensions of problem-oriented methodology for dynamic integrated expert systems development are considered. Attention is paid to the temporal knowledge representation and processing.
[ { "version": "v1", "created": "Sun, 16 Apr 2017 21:50:23 GMT" } ]
1,492,560,000,000
[ [ "Rybina", "Galina", "" ], [ "Mozgachev", "Alexey", "" ], [ "Demidov", "Dmitry", "" ] ]
1704.05539
Russell Kaplan
Russell Kaplan, Christopher Sauer, Alexander Sosa
Beating Atari with Natural Language Guided Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the first deep reinforcement learning agent that learns to beat Atari games with the aid of natural language instructions. The agent uses a multimodal embedding between environment observations and natural language to self-monitor progress through a list of English instructions, granting itself reward for completing instructions in addition to increasing the game score. Our agent significantly outperforms Deep Q-Networks (DQNs), Asynchronous Advantage Actor-Critic (A3C) agents, and the best agents posted to OpenAI Gym on what is often considered the hardest Atari 2600 environment: Montezuma's Revenge.
[ { "version": "v1", "created": "Tue, 18 Apr 2017 21:31:29 GMT" } ]
1,492,646,400,000
[ [ "Kaplan", "Russell", "" ], [ "Sauer", "Christopher", "" ], [ "Sosa", "Alexander", "" ] ]
1704.05569
Mayank Kejriwal
Rahul Kapoor, Mayank Kejriwal and Pedro Szekely
Using Contexts and Constraints for Improved Geotagging of Human Trafficking Webpages
6 pages, GeoRich 2017 workshop at ACM SIGMOD conference
null
10.1145/3080546.3080547
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting geographical tags from webpages is a well-motivated application in many domains. In illicit domains with unusual language models, like human trafficking, extracting geotags with both high precision and recall is a challenging problem. In this paper, we describe a geotag extraction framework in which context, constraints and the openly available Geonames knowledge base work in tandem in an Integer Linear Programming (ILP) model to achieve good performance. In preliminary empirical investigations, the framework improves precision by 28.57% and F-measure by 36.9% on a difficult human trafficking geotagging task compared to a machine learning-based baseline. The method is already being integrated into an existing knowledge base construction system widely used by US law enforcement agencies to combat human trafficking.
[ { "version": "v1", "created": "Wed, 19 Apr 2017 00:52:02 GMT" } ]
1,492,646,400,000
[ [ "Kapoor", "Rahul", "" ], [ "Kejriwal", "Mayank", "" ], [ "Szekely", "Pedro", "" ] ]
1704.06096
Amos Korman
Amos Korman (GANG, IRIF), Yoav Rodeh
The Dependent Doors Problem: An Investigation into Sequential Decisions without Feedback
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the dependent doors problem as an abstraction for situations in which one must perform a sequence of possibly dependent decisions without receiving feedback on the effectiveness of previously made actions. Informally, the problem considers a set of $d$ doors that are initially closed, and the aim is to open all of them as fast as possible. To open a door, the algorithm knocks on it, and it might open or not according to some probability distribution. This distribution may depend on which other doors are currently open, as well as on which other doors were open during each of the previous knocks on that door. The algorithm aims to minimize the expected time until all doors open. Crucially, it must act at any time without knowing whether or which other doors have already opened. In this work, we focus on scenarios where dependencies between doors are both positively correlated and acyclic. The fundamental distribution of a door describes the probability that it opens in the best of conditions (with respect to other doors being open or closed). We show that if, in two configurations of $d$ doors, corresponding doors share the same fundamental distribution, then these configurations have the same optimal running time up to a universal constant, regardless of the dependencies between doors and of the distributions. We also identify algorithms that are optimal up to a universal constant factor. For the case in which all doors share the same fundamental distribution, we additionally provide a simpler algorithm and a formula to calculate its running time. We furthermore analyse the price of lacking feedback for several configurations governed by standard fundamental distributions. In particular, we show that the price is logarithmic in $d$ for memoryless doors, but can potentially grow to be linear in $d$ for other distributions. We then turn our attention to investigating precise bounds.
Even for the case of two doors, identifying the optimal sequence is an intriguing combinatorial question. Here, we study the case of two cascading memoryless doors. That is, the first door opens on each knock independently with probability $p_1$. The second door can only open if the first door is open, in which case it will open on each knock independently with probability $p_2$. We solve this problem almost completely by identifying algorithms that are optimal up to an additive term of 1.
[ { "version": "v1", "created": "Thu, 20 Apr 2017 11:35:44 GMT" } ]
1,492,732,800,000
[ [ "Korman", "Amos", "", "GANG, IRIF" ], [ "Rodeh", "Yoav", "" ] ]
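As a small illustration of the no-feedback setting in the abstract above (a sketch of how to evaluate one fixed knocking schedule on two cascading memoryless doors, not the paper's optimal algorithms; it assumes $p_1, p_2 > 0$):

```python
from itertools import cycle

def expected_time(schedule, p1, p2, tol=1e-12):
    """Expected number of knocks until both doors open, for a fixed cyclic
    knocking schedule (a list of door indices, 1 or 2). The schedule is
    committed to in advance: no feedback about open doors is ever used."""
    # Exact distribution over the three reachable states.
    ff, tf, tt = 1.0, 0.0, 0.0  # P(both closed), P(only door 1 open), P(both open)
    expected = 0.0
    for door in cycle(schedule):
        not_done = 1.0 - tt
        if not_done < tol:
            break
        expected += not_done  # E[T] = sum over t >= 0 of P(T > t)
        if door == 1:
            tf += ff * p1
            ff *= 1.0 - p1
        else:  # door 2 can only open once door 1 is already open
            tt += tf * p2
            tf *= 1.0 - p2
    return expected
```

With deterministic doors the alternating schedule [1, 2] finishes in exactly 2 knocks, while with $p_1 = p_2 = 1/2$ it needs 6 knocks in expectation versus 4 if feedback were available, a concrete instance of the "price of lacking feedback" the abstract analyses.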
1704.06300
Niranjani Prasad
Niranjani Prasad, Li-Fang Cheng, Corey Chivers, Michael Draugelis, Barbara E Engelhardt
A Reinforcement Learning Approach to Weaning of Mechanical Ventilation in Intensive Care Units
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The management of invasive mechanical ventilation, and the regulation of sedation and analgesia during ventilation, constitutes a major part of the care of patients admitted to intensive care units. Both prolonged dependence on mechanical ventilation and premature extubation are associated with increased risk of complications and higher hospital costs, but clinical opinion on the best protocol for weaning patients off of a ventilator varies. This work aims to develop a decision support tool that uses available patient information to predict time-to-extubation readiness and to recommend a personalized regime of sedation dosage and ventilator support. To this end, we use off-policy reinforcement learning algorithms to determine the best action at a given patient state from sub-optimal historical ICU data. We compare treatment policies from fitted Q-iteration with extremely randomized trees and with feedforward neural networks, and demonstrate that the policies learnt show promise in recommending weaning protocols with improved outcomes, in terms of minimizing rates of reintubation and regulating physiological stability.
[ { "version": "v1", "created": "Thu, 20 Apr 2017 18:53:51 GMT" } ]
1,492,992,000,000
[ [ "Prasad", "Niranjani", "" ], [ "Cheng", "Li-Fang", "" ], [ "Chivers", "Corey", "" ], [ "Draugelis", "Michael", "" ], [ "Engelhardt", "Barbara E", "" ] ]
1704.06616
Siddharth Karamcheti
Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L.S. Wong, and Stefanie Tellex
Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities
Updated with final version - Published as Conference Paper in Robotics: Science and Systems 2017
null
10.15607/RSS.2017.XIII.056
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans can ground natural language commands to tasks at both abstract and fine-grained levels of specificity. For instance, a human forklift operator can be instructed to perform a high-level action, like "grab a pallet" or a low-level action like "tilt back a little bit." While robots are also capable of grounding language commands to tasks, previous methods implicitly assume that all commands and tasks reside at a single, fixed level of abstraction. Additionally, methods that do not use multiple levels of abstraction encounter inefficient planning and execution times as they solve tasks at a single level of abstraction with large, intractable state-action spaces closely resembling real world complexity. In this work, by grounding commands to all the tasks or subtasks available in a hierarchical planning framework, we arrive at a model capable of interpreting language at multiple levels of specificity ranging from coarse to more granular. We show that the accuracy of the grounding procedure is improved when simultaneously inferring the degree of abstraction in language used to communicate the task. Leveraging hierarchy also improves efficiency: our proposed approach enables a robot to respond to a command within one second on 90% of our tasks, while baselines take over twenty seconds on half the tasks. Finally, we demonstrate that a real, physical robot can ground commands at multiple levels of abstraction allowing it to efficiently plan different subtasks within the same planning hierarchy.
[ { "version": "v1", "created": "Fri, 21 Apr 2017 16:15:19 GMT" }, { "version": "v2", "created": "Tue, 19 Jun 2018 17:19:29 GMT" } ]
1,529,452,800,000
[ [ "Arumugam", "Dilip", "" ], [ "Karamcheti", "Siddharth", "" ], [ "Gopalan", "Nakul", "" ], [ "Wong", "Lawson L. S.", "" ], [ "Tellex", "Stefanie", "" ] ]
1704.06621
Nasser Ghadiri
Amir Hossein Goudarzi, Nasser Ghadiri
A hybrid spatial data mining approach based on fuzzy topological relations and MOSES evolutionary algorithm
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Making high-quality decisions in strategic spatial planning is heavily dependent on extracting knowledge from vast amounts of data. Although many decision-making problems like developing urban areas require such perception and reasoning, existing methods in this field usually neglect the deep knowledge mined from geographic databases and rely on purely statistical methods. Due to the large volume of data gathered in spatial databases, and the uncertainty of spatial objects, mining association rules for high-level knowledge representation is a challenging task. Few algorithms manage geographical and non-geographical data using topological relations. In this paper, a novel approach for spatial data mining based on the MOSES evolutionary framework is presented, which improves on the classic genetic programming approach. A hybrid architecture called GGeo is proposed to apply the MOSES mining rules, considering fuzzy topological relations, to spatial data. The uncertainty and fuzziness aspects are addressed using an enriched model of topological relations based on fuzzy region connection calculus. Moreover, to overcome the problem of time-consuming fuzzy topological relationship calculations, a novel data pre-processing method is offered. GGeo analyses and learns from geographical and non-geographical data, uses topological and distance parameters, and returns a series of arithmetic-spatial formulas as classification rules. The proposed approach is resistant to noisy data, and all its stages run in parallel to increase speed. This approach may be used in different spatial data classification problems and also represents an appropriate method for data analysis and economic policy making.
[ { "version": "v1", "created": "Fri, 21 Apr 2017 16:30:10 GMT" } ]
1,492,992,000,000
[ [ "Goudarzi", "Amir Hossein", "" ], [ "Ghadiri", "Nasser", "" ] ]
1704.06945
Diego Perez Liebana Dr.
Kamolwan Kunanusont and Simon M. Lucas and Diego Perez-Liebana
General Video Game AI: Learning from Screen Capture
Proceedings of the IEEE Conference on Evolutionary Computation 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for the General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, the results suggest that our proposed screen capture learning agent has the potential to learn many different games using only a single learning algorithm.
[ { "version": "v1", "created": "Sun, 23 Apr 2017 16:08:06 GMT" } ]
1,493,078,400,000
[ [ "Kunanusont", "Kamolwan", "" ], [ "Lucas", "Simon M.", "" ], [ "Perez-Liebana", "Diego", "" ] ]
1704.07069
Diego Perez Liebana Dr.
Joseph Walton-Rivers and Piers R. Williams and Richard Bartle and Diego Perez-Liebana and Simon M. Lucas
Evaluating and Modelling Hanabi-Playing Agents
Proceedings of the IEEE Conference on Evolutionary Computation (2017)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agent modelling involves considering how other agents will behave, in order to influence your own actions. In this paper, we explore the use of agent modelling in the hidden-information, collaborative card game Hanabi. We implement a number of rule-based agents, both from the literature and of our own devising, in addition to an Information Set Monte Carlo Tree Search (IS-MCTS) agent. We observe poor results from IS-MCTS, so construct a new, predictor version that uses a model of the agents with which it is paired. We observe a significant improvement in game-playing strength from this agent in comparison to IS-MCTS, resulting from its consideration of what the other agents in a game would do. In addition, we create a flawed rule-based agent to highlight the predictor's capabilities with such an agent.
[ { "version": "v1", "created": "Mon, 24 Apr 2017 07:44:10 GMT" } ]
1,493,078,400,000
[ [ "Walton-Rivers", "Joseph", "" ], [ "Williams", "Piers R.", "" ], [ "Bartle", "Richard", "" ], [ "Perez-Liebana", "Diego", "" ], [ "Lucas", "Simon M.", "" ] ]
1704.07075
Diego Perez Liebana Dr.
Raluca D. Gaina and Jialin Liu and Simon M. Lucas and Diego Perez-Liebana
Analysis of Vanilla Rolling Horizon Evolution Parameters in General Video Game Playing
null
Applications of Evolutionary Computation, EvoApplications, Lecture Notes in Computer Science, vol. 10199, Springer, Cham., p. 418-434, 2017
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monte Carlo Tree Search techniques have generally dominated General Video Game Playing, but recent research has started looking at Evolutionary Algorithms and their potential to match or even outperform the level of play of tree search methods. Online or Rolling Horizon Evolution is one of the options available to evolve sequences of actions for planning in General Video Game Playing, but no research to date has explored the capabilities of the vanilla version of this algorithm across multiple games. This study aims to critically analyse different configurations of population size and individual length in a set of 20 games from the General Video Game AI corpus. Distinctions are made between deterministic and stochastic games, and the implications of using larger time budgets are studied. Results show that there is scope for the use of these techniques, which in some configurations outperform Monte Carlo Tree Search, and also suggest that further research in these methods could boost their performance.
[ { "version": "v1", "created": "Mon, 24 Apr 2017 08:01:39 GMT" } ]
1,493,078,400,000
[ [ "Gaina", "Raluca D.", "" ], [ "Liu", "Jialin", "" ], [ "Lucas", "Simon M.", "" ], [ "Perez-Liebana", "Diego", "" ] ]
1704.07183
Steven Prestwich D
Steven Prestwich and Roberto Rossi and Armagan Tarim
Stochastic Constraint Programming as Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic Constraint Programming (SCP) is an extension of Constraint Programming (CP) used for modelling and solving problems involving constraints and uncertainty. SCP inherits excellent modelling abilities and filtering algorithms from CP, but so far it has not been applied to large problems. Reinforcement Learning (RL) extends Dynamic Programming to large stochastic problems, but is problem-specific and has no generic solvers. We propose a hybrid combining the scalability of RL with the modelling and constraint filtering methods of CP. We implement a prototype in a CP system and demonstrate its usefulness on SCP problems.
[ { "version": "v1", "created": "Mon, 24 Apr 2017 12:44:38 GMT" } ]
1,493,078,400,000
[ [ "Prestwich", "Steven", "" ], [ "Rossi", "Roberto", "" ], [ "Tarim", "Armagan", "" ] ]
1704.07466
Freddy Lecue
Freddy Lecue, Jiaoyan Chen, Jeff Pan, Huajun Chen
Learning from Ontology Streams with Semantic Concept Drift
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data stream learning has been largely studied for extracting knowledge structures from continuous and rapid data records. In the semantic Web, data is interpreted in ontologies and its ordered sequence is represented as an ontology stream. Our work exploits the semantics of such streams to tackle the problem of concept drift, i.e., unexpected changes in data distribution that cause most models to become less accurate as time passes. To this end we revisited (i) semantic inference in the context of supervised stream learning, and (ii) models with semantic embeddings. The experiments show accurate prediction with data from Dublin and Beijing.
[ { "version": "v1", "created": "Mon, 24 Apr 2017 21:12:13 GMT" } ]
1,493,164,800,000
[ [ "Lecue", "Freddy", "" ], [ "Chen", "Jiaoyan", "" ], [ "Pan", "Jeff", "" ], [ "Chen", "Huajun", "" ] ]
1704.07555
Marcus Olivecrona
Marcus Olivecrona, Thomas Blaschke, Ola Engkvist, and Hongming Chen
Molecular De Novo Design through Deep Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work introduces a method to tune a sequence-based generative model for molecular de novo design that, through augmented episodic likelihood, can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model or the activity prediction model.
[ { "version": "v1", "created": "Tue, 25 Apr 2017 06:41:21 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2017 12:31:19 GMT" } ]
1,504,051,200,000
[ [ "Olivecrona", "Marcus", "" ], [ "Blaschke", "Thomas", "" ], [ "Engkvist", "Ola", "" ], [ "Chen", "Hongming", "" ] ]
1704.07899
James Brusey
James Brusey, Diana Hintea, Elena Gaura, Neil Beloe
Reinforcement Learning-based Thermal Comfort Control for Vehicle Cabins
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicle climate control systems aim to keep passengers thermally comfortable. However, current systems control temperature rather than thermal comfort and tend to be energy hungry, which is of particular concern when considering electric vehicles. This paper poses energy-efficient vehicle comfort control as a Markov Decision Process, which is then solved numerically using Sarsa({\lambda}) and an empirically validated, single-zone, 1D thermal model of the cabin. The resulting controller was tested in simulation using 200 randomly selected scenarios and found to exceed the performance of bang-bang, proportional, simple fuzzy logic, and commercial controllers by 23%, 43%, 40%, and 56%, respectively. Compared to the next best performing controller, energy consumption is reduced by 13% while the proportion of time spent thermally comfortable is increased by 23%. These results indicate that this is a viable approach that promises to translate into substantial comfort and energy improvements in the car.
[ { "version": "v1", "created": "Tue, 25 Apr 2017 20:24:17 GMT" }, { "version": "v2", "created": "Tue, 5 Sep 2017 11:02:03 GMT" } ]
1,504,656,000,000
[ [ "Brusey", "James", "" ], [ "Hintea", "Diana", "" ], [ "Gaura", "Elena", "" ], [ "Beloe", "Neil", "" ] ]
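As a generic illustration of the Sarsa(λ) update named in the abstract above (a minimal tabular sketch with accumulating traces, not the paper's controller or cabin thermal model; the step-size, discount and trace-decay values are illustrative):

```python
def sarsa_lambda_step(Q, E, s, a, r, s2, a2, alpha=0.1, gamma=0.99, lam=0.9):
    """One accumulating-trace Sarsa(lambda) update.

    Q: dict mapping (state, action) -> value estimate.
    E: dict of eligibility traces over (state, action) pairs.
    (s, a, r, s2, a2): transition taken under the current policy.
    """
    # TD error for the on-policy transition.
    delta = r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    # Accumulating trace for the visited pair.
    E[(s, a)] = E.get((s, a), 0.0) + 1.0
    # Credit every recently visited pair in proportion to its trace,
    # then decay all traces.
    for key in list(E):
        Q[key] = Q.get(key, 0.0) + alpha * delta * E[key]
        E[key] *= gamma * lam
    return Q, E
```

The eligibility traces let a single reward update all state-action pairs along the recent trajectory, which speeds learning in slow-mixing processes such as cabin temperature dynamics.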
1704.07950
Yi Zhou Dr.
Yi Zhou
Structured Production System (extended abstract)
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this extended abstract, we propose Structured Production Systems (SPS), which extend traditional production systems with well-formed syntactic structures. Due to the richness of structures, structured production systems significantly enhance the expressive power as well as the flexibility of production systems, for instance, to handle uncertainty. We show that different rule application strategies can be reduced into the basic one by utilizing structures. Also, many fundamental approaches in computer science, including automata, grammar and logic, can be captured by structured production systems.
[ { "version": "v1", "created": "Wed, 26 Apr 2017 02:39:07 GMT" } ]
1,493,251,200,000
[ [ "Zhou", "Yi", "" ] ]
1704.08111
Steven Meyer
Steven Meyer
A Popperian Falsification of Artificial Intelligence -- Lighthill Defended
12 pages. Version improves discussion of chess and adds sections on when combinatorial explosion may not apply
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The area of computation called artificial intelligence (AI) is falsified by describing a previous 1972 falsification of AI by British mathematical physicist James Lighthill. How Lighthill's arguments continue to apply to current AI is explained. It is argued that AI should use the Popperian scientific method, in which it is the duty of scientists to attempt to falsify theories and, if theories are falsified, to replace or modify them. The paper describes the Popperian method and discusses Paul Nurse's application of the method to cell biology, which also involves questions of mechanism and behavior. It is shown how Lighthill's falsifying arguments, especially combinatorial explosion, continue to apply to modern AI. Various skeptical arguments against the assumptions of AI, mostly by physicists, are discussed, especially arguments against Hilbert's philosophical programme that defined knowledge and truth as provable formal sentences. John von Neumann's arguments from natural complexity against neural networks and evolutionary algorithms are discussed. Next, the game of chess is discussed to show how modern chess experts have reacted to computer chess programs. It is shown that chess masters can currently defeat any chess program, using Kasparov's arguments from his 1997 Deep Blue match and its aftermath. The game of 'go' and climate models are discussed to show computer applications where combinatorial explosion may not apply. The paper concludes by advocating the study of computation as Peter Naur's datalogy.
[ { "version": "v1", "created": "Sun, 23 Apr 2017 21:16:40 GMT" }, { "version": "v2", "created": "Wed, 18 Apr 2018 17:56:05 GMT" }, { "version": "v3", "created": "Thu, 30 Apr 2020 23:58:09 GMT" } ]
1,588,550,400,000
[ [ "Meyer", "Steven", "" ] ]
1704.08350
Vasanth Sarathy
Vasanth Sarathy and Matthias Scheutz
The MacGyver Test - A Framework for Evaluating Machine Resourcefulness and Creative Problem Solving
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current measures of machine intelligence are either difficult to evaluate or lack the ability to test a robot's problem-solving capacity in open worlds. We propose a novel evaluation framework based on the formal notion of MacGyver Test which provides a practical way for assessing the resilience and resourcefulness of artificial agents.
[ { "version": "v1", "created": "Wed, 26 Apr 2017 21:05:27 GMT" } ]
1,493,337,600,000
[ [ "Sarathy", "Vasanth", "" ], [ "Scheutz", "Matthias", "" ] ]
1704.08464
Zhiwei Lin
Zhiwei Lin, Yi Li, and Xiaolian Guo
Consensus measure of rankings
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A ranking is an ordered sequence of items, in which an item with a higher ranking score is preferred over items with lower ranking scores. In many information systems, rankings are widely used to represent preferences over a set of items or candidates. The consensus measure of rankings is the problem of how to evaluate the degree to which the rankings agree. The consensus measure can be used to evaluate rankings in many information systems, as quite often there is no ground truth available for evaluation. This paper introduces a novel approach to the consensus measure of rankings using a graph representation, in which the vertices or nodes are the items and the edges are the relationships between items in the rankings. This representation leads to various algorithms for consensus measurement in terms of different aspects of rankings, including the number of common patterns, the number of common patterns of fixed length, and the length of the longest common patterns. The proposed measure can be adopted for various types of rankings, such as full rankings, partial rankings and rankings with ties. This paper demonstrates how the proposed approaches can be used to evaluate the quality of rank aggregation and the quality of top-$k$ rankings from the Google and Bing search engines.
[ { "version": "v1", "created": "Thu, 27 Apr 2017 07:58:47 GMT" }, { "version": "v2", "created": "Thu, 21 Sep 2017 17:09:37 GMT" } ]
1,506,038,400,000
[ [ "Lin", "Zhiwei", "" ], [ "Li", "Yi", "" ], [ "Guo", "Xiaolian", "" ] ]
1704.08950
Amit Kumar
Amit Kumar, Rahul Dutta, Harbhajan Rai
Intelligent Personal Assistant with Knowledge Navigation
Converted O(N3) solution to viable O(N) solution
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An Intelligent Personal Agent (IPA) is an agent that helps the user gain information through reliable resources with the help of knowledge navigation techniques, saving the time needed to search for the best content. The agent is also responsible for responding to chat-based queries with the help of a conversation corpus. We will be testing different methods for optimal query generation. To facilitate ease of use of the application, the agent will be able to accept input through text (keyboard), voice (speech recognition) and server (Facebook) and output responses using the same methods. Existing chat bots reply by making changes to the input, but we will give responses based on multiple SRT files. The model will learn from a human dialogs dataset and will be able to respond in a human-like manner. Responses to queries about famous entities (places, people, and words) can be provided using web scraping, which will give the bot knowledge navigation features. The agent will even learn from its past experiences, supporting semi-supervised learning.
[ { "version": "v1", "created": "Fri, 28 Apr 2017 14:26:12 GMT" } ]
1,493,596,800,000
[ [ "Kumar", "Amit", "" ], [ "Dutta", "Rahul", "" ], [ "Rai", "Harbhajan", "" ] ]
1705.00154
Masataro Asai
Masataro Asai, Alex Fukunaga
Classical Planning in Deep Latent Space: Bridging the Subsymbolic-Symbolic Boundary
This is an extended manuscript of the paper accepted in AAAI-18. The contents of AAAI-18 paper itself is significantly extended from what has been published in Arxiv or previous workshops. Over half of the paper describing (2) is new. Additionally, this manuscript contains the supplemental materials of AAAI-18 submission
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose LatPlan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), and a pair of images representing the initial and the goal states (planning inputs), LatPlan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. The contribution of this paper is twofold: (1) State Autoencoder, which finds a propositional state representation of the environment using a Variational Autoencoder. It generates a discrete latent vector from the images, based on which a PDDL model can be constructed and then solved by an off-the-shelf planner. (2) Action Autoencoder / Discriminator, a neural architecture which jointly finds the action symbols and the implicit action models (preconditions/effects), and provides a successor function for the implicit graph search. We evaluate LatPlan using image-based versions of 3 planning domains: 8-puzzle, Towers of Hanoi and LightsOut.
[ { "version": "v1", "created": "Sat, 29 Apr 2017 08:22:29 GMT" }, { "version": "v2", "created": "Thu, 9 Nov 2017 04:09:36 GMT" }, { "version": "v3", "created": "Sun, 3 Dec 2017 12:19:39 GMT" } ]
1,512,432,000,000
[ [ "Asai", "Masataro", "" ], [ "Fukunaga", "Alex", "" ] ]
1705.00211
Babatunde Akinkunmi
Babatunde Opeoluwa Akinkunmi and Adesoji A. Adegbola
Two Algorithms for Deciding Coincidence In Double Temporal Recurrence of Eventuality Sequences
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let two sequences of eventualities x (signifying the sequence x0, x1, x2, ..., xn-1) and y (signifying the sequence y0, y1, y2, ..., yn-1) both recur over the same time interval, and suppose it is required to determine whether or not a subinterval exists within the said interval which is a common subinterval of the intervals of occurrence of xp and yq. This paper presents two algorithms for solving the problem. The first explores an arbitrary cycle of the double recurrence for the existence of such an interval; its worst-case running time is quadratic. The other algorithm is based on the novel notion of gcd-partitions and has a linear worst-case running time. If the eventuality sequence pair (w, z) is a gcd-partition for the double recurrence (x, y), then, from a certain property of gcd-partitions, within any cycle of the double recurrence there exist r and s such that the intervals of occurrence of xp and yq are non-disjoint with the interval of co-occurrence of wr and zs. As such, a coincidence between xp and yq occurs within a cycle of the double recurrence if and only if such r and s exist so that the interval of co-occurrence of wr and zs shares a common interval with the common interval of occurrence of xp and yq. The algorithm systematically reduces the number of (wr, zs) pairs to be explored in the process of determining the existence of the coincidence.
[ { "version": "v1", "created": "Sat, 29 Apr 2017 16:31:58 GMT" }, { "version": "v2", "created": "Tue, 1 Nov 2022 21:43:03 GMT" } ]
1,667,433,600,000
[ [ "Akinkunmi", "Babatunde Opeoluwa", "" ], [ "Adegbola", "Adesoji A.", "" ] ]
1705.00303
Beishui Liao
Beishui Liao and Leendert van der Torre
Defense semantics of argumentation: encoding reasons for accepting arguments
14 pages, first submitted on April 30, 2017; 16 pages, revised in terms of the comments from MIREL2017 on August 03, 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we show how the defense relation among abstract arguments can be used to encode the reasons for accepting arguments. After introducing a novel notion of defenses and defense graphs, we propose a defense semantics together with a new notion of defense equivalence of argument graphs, and compare defense equivalence with standard equivalence and strong equivalence, respectively. Then, based on defense semantics, we define two kinds of reasons for accepting arguments, i.e., direct reasons and root reasons, and a notion of root equivalence of argument graphs. Finally, we show how the notion of root equivalence can be used in argumentation summarization.
[ { "version": "v1", "created": "Sun, 30 Apr 2017 12:06:28 GMT" }, { "version": "v2", "created": "Wed, 2 Aug 2017 15:46:21 GMT" } ]
1,501,718,400,000
[ [ "Liao", "Beishui", "" ], [ "van der Torre", "Leendert", "" ] ]
1705.00969
Babatunde Akinkunmi
B.O. Akinkunmi
The Problem of Coincidence in A Theory of Temporal Multiple Recurrence
arXiv admin note: substantial text overlap with arXiv:1705.00211
Journal of Applied Logic, 15:46-48, May 2016
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logical theories have been developed which allow temporal reasoning about eventualities (a la Galton) such as states, processes, actions, events, and complex eventualities such as sequences and recurrences of other eventualities. This paper presents the problem of coincidence within the framework of a first-order logical theory formalising the temporal multiple recurrence of two sequences of fixed-duration eventualities, and presents a solution to it. The coincidence problem is described as follows: if two complex eventualities (or eventuality sequences), consisting respectively of component eventualities x0, x1, ..., xr and y0, y1, ..., ys, both recur over an interval k and all eventualities are of fixed durations, is there a sub-interval of k over which the occurrences of xt and yu, for t between 0..r and u between 0..s, coincide? The solution presented here formalises the intuition that a solution can be found by temporal projection over a cycle of the multiple recurrence of both sequences.
[ { "version": "v1", "created": "Sat, 29 Apr 2017 16:54:18 GMT" } ]
1,493,769,600,000
[ [ "Akinkunmi", "B. O.", "" ] ]
1705.01076
Rafa{\l} Skinderowicz
Rafa{\l} Skinderowicz
An improved Ant Colony System for the Sequential Ordering Problem
30 pages, 8 tables, 11 figures
null
10.1016/j.cor.2017.04.012
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is not rare that the performance of one metaheuristic algorithm can be improved by incorporating ideas taken from another. In this article we present how Simulated Annealing (SA) can be used to improve the efficiency of the Ant Colony System (ACS) and Enhanced ACS when solving the Sequential Ordering Problem (SOP). Moreover, we show how the very same ideas can be applied to improve the convergence of a dedicated local search, i.e. the SOP-3-exchange algorithm. A statistical analysis of the proposed algorithms both in terms of finding suitable parameter values and the quality of the generated solutions is presented based on a series of computational experiments conducted on SOP instances from the well-known TSPLIB and SOPLIB2006 repositories. The proposed ACS-SA and EACS-SA algorithms often generate solutions of better quality than the ACS and EACS, respectively. Moreover, the EACS-SA algorithm combined with the proposed SOP-3-exchange-SA local search was able to find 10 new best solutions for the SOP instances from the SOPLIB2006 repository, thus improving the state-of-the-art results as known from the literature. Overall, the best known or improved solutions were found in 41 out of 48 cases.
[ { "version": "v1", "created": "Tue, 2 May 2017 17:17:26 GMT" } ]
1,493,769,600,000
[ [ "Skinderowicz", "Rafał", "" ] ]
1705.01080
Jialin Liu Ph.D
Kamolwan Kunanusont, Raluca D. Gaina, Jialin Liu, Diego Perez-Liebana, Simon M. Lucas
The N-Tuple Bandit Evolutionary Algorithm for Automatic Game Improvement
8 pages, 9 figure, 2 tables, CEC2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a new evolutionary algorithm that is especially well suited to AI-Assisted Game Design. The approach adopted in this paper is to use observations of AI agents playing the game to estimate the game's quality. Some of the best agents for this purpose are General Video Game AI agents, since they can be deployed directly on a new game without game-specific tuning; these agents tend to be based on stochastic algorithms which give robust but noisy results and tend to be expensive to run. This motivates the main contribution of the paper: the development of the novel N-Tuple Bandit Evolutionary Algorithm, where a model is used to estimate the fitness of unsampled points and a bandit approach is used to balance exploration and exploitation of the search space. Initial results on optimising a Space Battle game variant suggest that the algorithm offers far more robust results than the Random Mutation Hill Climber and a Biased Mutation variant, which are themselves known to offer competitive performance across a range of problems. Subjective observations are also given by human players on the nature of the evolved games, which indicate a preference towards games generated by the N-Tuple algorithm.
[ { "version": "v1", "created": "Sat, 18 Mar 2017 09:10:09 GMT" } ]
1,493,769,600,000
[ [ "Kunanusont", "Kamolwan", "" ], [ "Gaina", "Raluca D.", "" ], [ "Liu", "Jialin", "" ], [ "Perez-Liebana", "Diego", "" ], [ "Lucas", "Simon M.", "" ] ]
1705.01172
Gavin Rens
Gavin Rens and Thomas Meyer
Imagining Probabilistic Belief Change as Imaging (Technical Report)
21 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imaging is a form of probabilistic belief change which could be employed for both revision and update. In this paper, we propose a new framework for probabilistic belief change based on imaging, called Expected Distance Imaging (EDI). EDI is sufficiently general to define Bayesian conditioning and other forms of imaging previously defined in the literature. We argue that, and investigate how, EDI can be used for both revision and update. EDI's definition depends crucially on a weight function whose properties are studied and whose effect on belief change operations is analysed. Finally, four EDI instantiations are proposed, two for revision and two for update, and probabilistic rationality postulates are suggested for their analysis.
[ { "version": "v1", "created": "Tue, 2 May 2017 20:50:59 GMT" } ]
1,493,856,000,000
[ [ "Rens", "Gavin", "" ], [ "Meyer", "Thomas", "" ] ]
1705.01399
Leonardo Anjoletto Ferreira
Leonardo A. Ferreira, Reinaldo A. C. Bianchi, Paulo E. Santos, Ramon Lopez de Mantaras
Answer Set Programming for Non-Stationary Markov Decision Processes
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-stationary domains, where unforeseen changes happen, present a challenge for agents to find an optimal policy for a sequential decision making problem. This work investigates a solution to this problem that combines Markov Decision Processes (MDP) and Reinforcement Learning (RL) with Answer Set Programming (ASP) in a method we call ASP(RL). In this method, Answer Set Programming is used to find the possible trajectories of an MDP, from where Reinforcement Learning is applied to learn the optimal policy of the problem. Results show that ASP(RL) is capable of efficiently finding the optimal solution of an MDP representing non-stationary domains.
[ { "version": "v1", "created": "Wed, 3 May 2017 13:13:51 GMT" } ]
1,493,856,000,000
[ [ "Ferreira", "Leonardo A.", "" ], [ "Bianchi", "Reinaldo A. C.", "" ], [ "Santos", "Paulo E.", "" ], [ "de Mantaras", "Ramon Lopez", "" ] ]
1705.01681
Francisco L\'opez-Ramos
Francisco L\'opez-Ramos, Armando Guarnaschelli, Jos\'e-Fernando Camacho-Vallejo, Laura Hervert-Escobar, Rosa G. Gonz\'alez-Ram\'irez
Tramp Ship Scheduling Problem with Berth Allocation Considerations and Time-dependent Constraints
16 pages, 3 figures, 5 tables, proceedings paper of Mexican International Conference on Artificial Intelligence (MICAI) 2016
null
null
Accepted manuscript id 47
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a model for the Tramp Ship Scheduling problem including berth allocation considerations, motivated by a real case of a shipping company. The aim is to determine the travel schedule for each vessel considering multiple docking and multiple time windows at the berths. This work is innovative due to the consideration of both spatial and temporal attributes during the scheduling process. The resulting model is formulated as a mixed-integer linear programming problem, and a heuristic method to deal with multiple vessel schedules is also presented. Numerical experimentation is performed to highlight the benefits of the proposed approach and the applicability of the heuristic. Conclusions and recommendations for further research are provided.
[ { "version": "v1", "created": "Thu, 4 May 2017 02:49:26 GMT" } ]
1,493,942,400,000
[ [ "López-Ramos", "Francisco", "" ], [ "Guarnaschelli", "Armando", "" ], [ "Camacho-Vallejo", "José-Fernando", "" ], [ "Hervert-Escobar", "Laura", "" ], [ "González-Ramírez", "Rosa G.", "" ] ]
1705.01817
Christoph Schwering
Christoph Schwering
A Reasoning System for a First-Order Logic of Limited Belief
22 pages, 0 figures, Twenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logics of limited belief aim at enabling computationally feasible reasoning in highly expressive representation languages. These languages are often dialects of first-order logic with a weaker form of logical entailment that keeps reasoning decidable or even tractable. While a number of such logics have been proposed in the past, they tend to remain for theoretical analysis only and their practical relevance is very limited. In this paper, we aim to go beyond the theory. Building on earlier work by Liu, Lakemeyer, and Levesque, we develop a logic of limited belief that is highly expressive while remaining decidable in the first-order and tractable in the propositional case and exhibits some characteristics that make it attractive for an implementation. We introduce a reasoning system that employs this logic as representation language and present experimental results that showcase the benefit of limited belief.
[ { "version": "v1", "created": "Thu, 4 May 2017 12:39:27 GMT" } ]
1,493,942,400,000
[ [ "Schwering", "Christoph", "" ] ]
1705.02175
Nikos Katzouris
Nikos Katzouris, Alexander Artikis, Georgios Paliouras
Distributed Online Learning of Event Definitions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logic-based event recognition systems infer occurrences of events in time using a set of event definitions in the form of first-order rules. The Event Calculus is a temporal logic that has been used as a basis in event recognition applications, providing among others, direct connections to machine learning, via Inductive Logic Programming (ILP). OLED is a recently proposed ILP system that learns event definitions in the form of Event Calculus theories, in a single pass over a data stream. In this work we present a version of OLED that allows for distributed, online learning. We evaluate our approach on a benchmark activity recognition dataset and show that we can significantly reduce training times, exchanging minimal information between processing nodes.
[ { "version": "v1", "created": "Fri, 5 May 2017 11:40:11 GMT" } ]
1,494,201,600,000
[ [ "Katzouris", "Nikos", "" ], [ "Artikis", "Alexander", "" ], [ "Paliouras", "Georgios", "" ] ]
1705.02476
Mahardhika Pratama Dr
Mahardhika Pratama
PANFIS++: A Generalized Approach to Evolving Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The concept of the evolving intelligent system (EIS) provides an effective avenue for data stream mining because it is capable of coping with two prominent issues: online learning and rapidly changing environments. We note at least three uncharted territories of existing EISs: data uncertainty, temporal system dynamics, and redundant data streams. This book chapter aims to deliver a concrete solution to these problems through the algorithmic development of a novel learning algorithm, namely PANFIS++. PANFIS++ is a generalized version of PANFIS, putting forward three important components: 1) an online active learning scenario is developed to overcome redundant data streams; this module actively selects data streams for the training process, thereby expediting execution time and enhancing generalization performance; 2) PANFIS++ is built upon an interval type-2 fuzzy system environment, which incorporates the so-called footprint of uncertainty; this component provides a degree of tolerance for data uncertainty; 3) PANFIS++ is structured under a recurrent network architecture with a self-feedback loop, which is meant to tackle the temporal system dynamics. The efficacy of PANFIS++ has been numerically validated through numerous real-world and synthetic case studies, where it delivers the highest predictive accuracy while retaining the lowest complexity.
[ { "version": "v1", "created": "Sat, 6 May 2017 12:02:15 GMT" } ]
1,494,288,000,000
[ [ "Pratama", "Mahardhika", "" ] ]
1705.02477
Mahardhika Pratama Dr
Mahardhika Pratama, Eric Dimla, Chow Yin Lai, Edwin Lughofer
Metacognitive Learning Approach for Online Tool Condition Monitoring
null
null
10.1007/s10845-017-1348-9
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As manufacturing processes become increasingly automated, so should tool condition monitoring (TCM), as it is impractical to have human workers monitor the state of the tools continuously. Tool condition is crucial to ensure good product quality: worn tools affect not only the surface quality but also the dimensional accuracy, which means a higher reject rate of the products. Therefore, there is an urgent need to identify tool failures on the fly before they occur. While various versions of intelligent tool condition monitoring have been proposed, most of them suffer from the cognitive nature of traditional machine learning algorithms: they focus on the how-to-learn process without paying attention to two other crucial issues, what to learn and when to learn. The what-to-learn and when-to-learn components provide self-regulating mechanisms to select the training samples and to determine the time instants at which to train a model. A novel tool condition monitoring approach based on a psychologically plausible concept, namely the metacognitive scaffolding theory, is proposed and built upon a recently published algorithm, the recurrent classifier (rClass). The learning process consists of three phases, what to learn, how to learn, and when to learn, and makes use of a generalized recurrent network structure as a cognitive component. Experimental studies with real-world manufacturing data streams were conducted, where rClass demonstrated the highest accuracy while retaining the lowest complexity over its counterparts.
[ { "version": "v1", "created": "Sat, 6 May 2017 12:16:16 GMT" } ]
1,517,875,200,000
[ [ "Pratama", "Mahardhika", "" ], [ "Dimla", "Eric", "" ], [ "Lai", "Chow Yin", "" ], [ "Lughofer", "Edwin", "" ] ]
1705.02620
Wen Jiang
Dong Wu, Xiang Liu, Feng Xue, Hanqing Zheng, Yehang Shou, Wen Jiang
A New Medical Diagnosis Method Based on Z-Numbers
24 pages, 9 figures, 13 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How to handle uncertainty in medical diagnosis is an open issue. In this paper, a new decision making methodology based on Z-numbers is presented. Firstly, the experts' opinions are represented by Z-numbers. A Z-number is an ordered pair of fuzzy numbers denoted as Z = (A, B). Then, a new method for ranking fuzzy numbers is proposed, and based on the proposed fuzzy number ranking method, a novel method is presented to transform the Z-numbers into a Basic Probability Assignment (BPA). As a result, the information from different sources is combined by Dempster's combination rule. The final decision making is more reasonable due to the advantage of information fusion. Finally, two experiments, risk analysis and medical diagnosis, are illustrated to show the efficiency of the proposed methodology.
[ { "version": "v1", "created": "Sun, 7 May 2017 13:29:53 GMT" } ]
1,494,288,000,000
[ [ "Wu", "Dong", "" ], [ "Liu", "Xiang", "" ], [ "Xue", "Feng", "" ], [ "Zheng", "Hanqing", "" ], [ "Shou", "Yehang", "" ], [ "Jiang", "Wen", "" ] ]
1705.03078
Toby Pereira
Toby Pereira
An Anthropic Argument against the Future Existence of Superintelligent Artificial Intelligence
11 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper uses anthropic reasoning to argue for a reduced likelihood that superintelligent AI will come into existence in the future. To make this argument, a new principle is introduced: the Super-Strong Self-Sampling Assumption (SSSSA), building on the Self-Sampling Assumption (SSA) and the Strong Self-Sampling Assumption (SSSA). SSA uses as its sample the relevant observers, whereas SSSA goes further by using observer-moments. SSSSA goes further still and weights each sample proportionally, according to the size of a mind in cognitive terms. SSSSA is required for human observer-samples to be typical, given how greatly non-human animals outnumber humans. Given SSSSA, the assumption that humans experience typical observer-samples relies on a future where superintelligent AI does not dominate, which in turn reduces the likelihood of it being created at all.
[ { "version": "v1", "created": "Mon, 8 May 2017 20:37:45 GMT" } ]
1,494,374,400,000
[ [ "Pereira", "Toby", "" ] ]
1705.03260
Joshua Peterson
Joshua C. Peterson, Thomas L. Griffiths
Evidence for the size principle in semantic and perceptual domains
6 pages, 4 figures, To appear in the Proceedings of the 39th Annual Conference of the Cognitive Science Society
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shepard's Universal Law of Generalization offered a compelling case for the first physics-like law in cognitive science that should hold for all intelligent agents in the universe. Shepard's account is based on a rational Bayesian model of generalization, providing an answer to the question of why such a law should emerge. Extending this account to explain how humans use multiple examples to make better generalizations requires an additional assumption, called the size principle: hypotheses that pick out fewer objects should make a larger contribution to generalization. The degree to which this principle warrants similarly law-like status is far from conclusive. Typically, evaluating this principle has not been straightforward, requiring additional assumptions. We present a new method for evaluating the size principle that is more direct, and apply this method to a diverse array of datasets. Our results provide support for the broad applicability of the size principle.
[ { "version": "v1", "created": "Tue, 9 May 2017 10:21:49 GMT" } ]
1,494,374,400,000
[ [ "Peterson", "Joshua C.", "" ], [ "Griffiths", "Thomas L.", "" ] ]
1705.03352
Vaclav Kratochvil
Ji\v{r}ina Vejnarov\'a, V\'aclav Kratochv\'il
Composition of Credal Sets via Polyhedral Geometry
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recently introduced composition operator for credal sets is an analogue of such operators in probability, possibility, evidence and valuation-based systems theories. It was designed to construct multidimensional models (in the framework of credal sets) from a system of low-dimensional credal sets. In this paper we study its potential from the computational point of view, utilizing methods of polyhedral geometry.
[ { "version": "v1", "created": "Fri, 5 May 2017 14:46:44 GMT" } ]
1,494,374,400,000
[ [ "Vejnarová", "Jiřina", "" ], [ "Kratochvíl", "Václav", "" ] ]
1705.03381
Nicolas Maudet
Leila Amgoud, Elise Bonzon, Marco Correia, Jorge Cruz, J\'er\^ome Delobelle, S\'ebastien Konieczny, Jo\~ao Leite, Alexis Martin, Nicolas Maudet, Srdjan Vesic
A note on the uniqueness of models in social abstract argumentation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social abstract argumentation is a principled way to assign values to conflicting (weighted) arguments. In this note we discuss the important property of the uniqueness of the model.
[ { "version": "v1", "created": "Tue, 9 May 2017 15:18:13 GMT" } ]
1,494,374,400,000
[ [ "Amgoud", "Leila", "" ], [ "Bonzon", "Elise", "" ], [ "Correia", "Marco", "" ], [ "Cruz", "Jorge", "" ], [ "Delobelle", "Jérôme", "" ], [ "Konieczny", "Sébastien", "" ], [ "Leite", "João", "" ], [ "Martin", "Alexis", "" ], [ "Maudet", "Nicolas", "" ], [ "Vesic", "Srdjan", "" ] ]
1705.03597
Yan Li
Yan Li, Zhaohan Sun
Solving Multi-Objective MDP with Lexicographic Preference: An application to stochastic planning with multiple quantile objective
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the most common settings of a Markov Decision Process (MDP), an agent evaluates a policy based on the expectation of the (discounted) sum of rewards. However, in many applications this criterion might not be suitable, from two perspectives: first, in risk-averse situations the expectation of accumulated rewards is not robust enough; this is the case when the distribution of the accumulated reward is heavily skewed. Another issue is that many applications naturally take several objectives into consideration when evaluating a policy; for instance, in autonomous driving an agent needs to balance speed and safety when making decisions. In this paper, we consider evaluating a policy based on a sequence of quantiles it induces on a set of target states. Our idea is to reformulate the original problem into a multi-objective MDP problem with a naturally defined lexicographic preference. To compute an optimal policy, we propose an algorithm, \textbf{FLMDP}, that can solve general multi-objective MDPs with lexicographic reward preference.
[ { "version": "v1", "created": "Wed, 10 May 2017 03:13:30 GMT" } ]
1,494,460,800,000
[ [ "Li", "Yan", "" ], [ "Sun", "Zhaohan", "" ] ]
1705.04119
Jin-Kao Hao
Yangming Zhou, Jin-Kao Hao, Fred Glover
Memetic search for identifying critical nodes in sparse graphs
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Critical node problems involve identifying a subset of critical nodes from an undirected graph whose removal results in optimizing a pre-defined measure over the residual graph. As useful models for a variety of practical applications, these problems are computationally challenging. In this paper, we study the classic critical node problem (CNP) and introduce an effective memetic algorithm for solving CNP. The proposed algorithm combines a double backbone-based crossover operator (to generate promising offspring solutions), a component-based neighborhood search procedure (to find high-quality local optima) and a rank-based pool updating strategy (to guarantee a healthy population). Specifically, the component-based neighborhood search integrates two key techniques, i.e., a two-phase node exchange strategy and a node weighting scheme. The double backbone-based crossover extends the idea of general backbone-based crossovers. Extensive evaluations on 42 synthetic and real-world benchmark instances show that the proposed algorithm discovers 21 new upper bounds and matches 18 previous best-known upper bounds. We also demonstrate the relevance of our algorithm for effectively solving a variant of the classic CNP, called the cardinality-constrained critical node problem. Finally, we investigate the usefulness of each key algorithmic component.
[ { "version": "v1", "created": "Thu, 11 May 2017 11:43:30 GMT" }, { "version": "v2", "created": "Sat, 7 Oct 2017 13:15:03 GMT" } ]
1,507,593,600,000
[ [ "Zhou", "Yangming", "" ], [ "Hao", "Jin-Kao", "" ], [ "Glover", "Fred", "" ] ]
1705.04351
Rachit Dubey
Rachit Dubey and Thomas L. Griffiths
A rational analysis of curiosity
Conference paper in CogSci 2017
39th Annual Conference of the Cognitive Science Society (CogSci), 2017
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a rational analysis of curiosity, proposing that people's curiosity is driven by seeking stimuli that maximize their ability to make appropriate responses in the future. This perspective offers a way to unify previous theories of curiosity into a single framework. Experimental results confirm our model's predictions, showing how the relationship between curiosity and confidence can change significantly depending on the nature of the environment. Please refer to https://psyarxiv.com/wg5m6/ for a more updated version of this manuscript with a more detailed modeling section with extensive experiments.
[ { "version": "v1", "created": "Thu, 11 May 2017 18:54:10 GMT" }, { "version": "v2", "created": "Sat, 1 Aug 2020 05:15:12 GMT" } ]
1,596,499,200,000
[ [ "Dubey", "Rachit", "" ], [ "Griffiths", "Thomas L.", "" ] ]
1705.04530
Arindam Bhattacharya
Arindam Bhattacharya
A Survey of Question Answering for Math and Science Problem
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Turing test was long considered the measure of artificial intelligence. But with the advances in AI, it has proved to be an insufficient measure. We can now aim to measure machine intelligence the way we measure human intelligence. One of the widely accepted measures of intelligence is the standardized math and science test. In this paper, we explore the progress made towards the goal of making a machine smart enough to pass such standardized tests. We examine the challenges and opportunities posed by the domain, and note that we are quite some way from actually making a system as smart as even a middle school student.
[ { "version": "v1", "created": "Wed, 10 May 2017 15:28:37 GMT" } ]
1,494,806,400,000
[ [ "Bhattacharya", "Arindam", "" ] ]
1705.04569
Max Ostrowski
Mutsunori Banbara and Benjamin Kaufmann and Max Ostrowski and Torsten Schaub
Clingcon: The Next Generation
Under consideration in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the third generation of the constraint answer set system clingcon, combining Answer Set Programming (ASP) with finite domain constraint processing (CP). While its predecessors rely on a black-box approach to hybrid solving by integrating the CP solver gecode, the new clingcon system pursues a lazy approach using dedicated constraint propagators to extend propagation in the underlying ASP solver clasp. No extension is needed for parsing and grounding clingcon's hybrid modeling language since both can be accommodated by the new generic theory handling capabilities of the ASP grounder gringo. As a whole, clingcon 3 is thus an extension of the ASP system clingo 5, which itself relies on the grounder gringo and the solver clasp. The new approach of clingcon offers a seamless integration of CP propagation into ASP solving that benefits from the whole spectrum of clasp's reasoning modes, including for instance multi-shot solving and advanced optimization techniques. This is accomplished by a lazy approach that unfolds the representation of constraints and adds it to that of the logic program only when needed. Although the unfolding is usually dictated by the constraint propagators during solving, it can already be partially (or even totally) done during preprocessing. Moreover, clingcon's constraint preprocessing and propagation incorporate several well established CP techniques that greatly improve its performance. We demonstrate this via an extensive empirical evaluation contrasting, first, the various techniques in the context of CSP solving and, second, the new clingcon system with other hybrid ASP systems.
[ { "version": "v1", "created": "Fri, 12 May 2017 13:57:31 GMT" } ]
1,494,806,400,000
[ [ "Banbara", "Mutsunori", "" ], [ "Kaufmann", "Benjamin", "" ], [ "Ostrowski", "Max", "" ], [ "Schaub", "Torsten", "" ] ]
1705.04665
Richard Valenzano
Richard Anthony Valenzano and Danniel Sihui Yang
A Formal Characterization of the Local Search Topology of the Gap Heuristic
Technical report providing proofs of statements appearing in a "An Analysis and Enhancement of the Gap Heuristic for the Pancake Puzzle" by Richard Anthony Valenzano and Danniel Yang. This paper appeared at the 2017 Symposium on Combinatorial Search
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pancake puzzle is a classic optimization problem that has become a standard benchmark for heuristic search algorithms. In this paper, we provide full proofs regarding the local search topology of the gap heuristic for the pancake puzzle. First, we show that in any non-goal state in which there is no move that will decrease the number of gaps, there is a move that will keep the number of gaps constant. We then classify any state in which the number of gaps cannot be decreased in a single action into two groups: those requiring 2 actions to decrease the number of gaps, and those which require 3 actions to decrease the number of gaps.
[ { "version": "v1", "created": "Fri, 12 May 2017 17:28:43 GMT" } ]
1,494,806,400,000
[ [ "Valenzano", "Richard Anthony", "" ], [ "Yang", "Danniel Sihui", "" ] ]
1705.04712
Denis Ponomaryov
Denis Ponomaryov and Mikhail Soutchanski
Progression of Decomposed Local-Effect Action Theories
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many tasks related to reasoning about consequences of a logical theory, it is desirable to decompose the theory into a number of weakly-related or independent components. However, a theory may represent knowledge that is subject to change, as a result of executing actions that have effects on some of the initial properties mentioned in the theory. Having once computed a decomposition of a theory, it is advantageous to know whether a decomposition has to be computed again in the newly-changed theory (obtained from taking into account changes resulting from execution of an action). In the paper, we address this problem in the scope of the situation calculus, where a change of an initial theory is related to the notion of progression. Progression provides a form of forward reasoning; it relies on forgetting values of those properties, which are subject to change, and computing new values for them. We consider decomposability and inseparability, two component properties known from the literature, and contribute by 1) studying the conditions when these properties are preserved and 2) when they are lost wrt progression and the related operation of forgetting. To show the latter, we demonstrate the boundaries using a number of negative examples. To show the former, we identify cases when these properties are preserved under forgetting and progression of initial theories in local-effect basic action theories of the situation calculus. Our paper contributes to bridging two different communities in Knowledge Representation, namely research on modularity and research on reasoning about actions.
[ { "version": "v1", "created": "Fri, 12 May 2017 18:36:21 GMT" } ]
1,494,892,800,000
[ [ "Ponomaryov", "Denis", "" ], [ "Soutchanski", "Mikhail", "" ] ]
1705.04719
Denis Ponomaryov
Yevgeny Kazakov and Denis Ponomaryov
On the Complexity of Semantic Integration of OWL Ontologies
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new mechanism for integration of OWL ontologies using semantic import relations. In contrast to the standard OWL importing, we do not require all axioms of the imported ontologies to be taken into account for reasoning tasks, but only their logical implications over a chosen signature. This property comes natural in many ontology integration scenarios, especially when the number of ontologies is large. In this paper, we study the complexity of reasoning over ontologies with semantic import relations and establish a range of tight complexity bounds for various fragments of OWL.
[ { "version": "v1", "created": "Fri, 12 May 2017 18:54:16 GMT" } ]
1,494,892,800,000
[ [ "Kazakov", "Yevgeny", "" ], [ "Ponomaryov", "Denis", "" ] ]
1705.04885
Jose Fontanari
Jos\'e F. Fontanari
Awareness improves problem-solving performance
null
Cogn Syst Res, 45C (2017) 52-58
10.1016/j.cogsys.2017.05.003
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain's self-monitoring of activities, including internal activities -- a functionality that we refer to as awareness -- has been suggested as a key element of consciousness. Here we investigate whether the presence of an inner-eye-like process (monitor) that supervises the activities of a number of subsystems (operative agents) engaged in the solution of a problem can improve the problem-solving efficiency of the system. The problem is to find the global maximum of an NK fitness landscape and the performance is measured by the time required to find that maximum. The operative agents blindly explore the fitness landscape and the monitor provides them with feedback on the quality (fitness) of the proposed solutions. This feedback is then used by the operative agents to bias their searches towards the fittest regions of the landscape. We find that a weak feedback between the monitor and the operative agents improves the performance of the system, regardless of the difficulty of the problem, which is gauged by the number of local maxima in the landscape. For easy problems (i.e., landscapes without local maxima), the performance improves monotonically as the feedback strength increases, but for difficult problems, there is an optimal value of the feedback strength beyond which the system performance degrades very rapidly.
[ { "version": "v1", "created": "Sat, 13 May 2017 20:40:24 GMT" } ]
1,496,016,000,000
[ [ "Fontanari", "José F.", "" ] ]
1705.05098
Lahari Poddar
Lahari Poddar, Wynne Hsu, Mong Li Lee
Quantifying Aspect Bias in Ordinal Ratings using a Bayesian Approach
Accepted for publication in IJCAI 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
User opinions expressed in the form of ratings can influence an individual's view of an item. However, the true quality of an item is often obfuscated by user biases, and it is not obvious from the observed ratings the importance different users place on different aspects of an item. We propose a probabilistic modeling of the observed aspect ratings to infer (i) each user's aspect bias and (ii) latent intrinsic quality of an item. We model multi-aspect ratings as ordered discrete data and encode the dependency between different aspects by using a latent Gaussian structure. We handle the Gaussian-Categorical non-conjugacy using a stick-breaking formulation coupled with P\'{o}lya-Gamma auxiliary variable augmentation for a simple, fully Bayesian inference. On two real world datasets, we demonstrate the predictive ability of our model and its effectiveness in learning explainable user biases to provide insights towards a more reliable product quality estimation.
[ { "version": "v1", "created": "Mon, 15 May 2017 07:35:59 GMT" }, { "version": "v2", "created": "Wed, 24 May 2017 08:47:24 GMT" } ]
1,495,670,400,000
[ [ "Poddar", "Lahari", "" ], [ "Hsu", "Wynne", "" ], [ "Lee", "Mong Li", "" ] ]
1705.05316
Minas Dasygenis Dr.
Minas Dasygenis and Kostas Stergiou
Exploiting the Pruning Power of Strong Local Consistencies Through Parallelization
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local consistencies stronger than arc consistency have received a lot of attention since the early days of CSP research. However, they have not been widely adopted by CSP solvers. This is because applying such consistencies can sometimes result in considerably smaller search tree sizes and therefore in important speed-ups, but in other cases the search space reduction may be small, causing severe run time penalties. Taking advantage of recent advances in parallelization, we propose a novel approach for the application of strong local consistencies (SLCs) that can improve their performance by largely preserving the speed-ups they offer in cases where they are successful, and eliminating the run time penalties in cases where they are unsuccessful. This approach is presented in the form of two search algorithms. Both algorithms consist of a master search process, which is a typical CSP solver, and a number of slave processes, with each one implementing a SLC method. The first algorithm runs the different SLCs synchronously at each node of the search tree explored in the master process, while the second one can run them asynchronously at different nodes of the search tree. Experimental results demonstrate the benefits of the proposed method.
[ { "version": "v1", "created": "Mon, 15 May 2017 16:28:00 GMT" } ]
1,494,892,800,000
[ [ "Dasygenis", "Minas", "" ], [ "Stergiou", "Kostas", "" ] ]
1705.05326
Michael Huth
Paul Beaumont and Michael Huth
Constrained Bayesian Networks: Theory, Optimization, and Applications
43 pages, 18 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop the theory and practice of an approach to modelling and probabilistic inference in causal networks that is suitable when application-specific or analysis-specific constraints should inform such inference or when little or no data for the learning of causal network structure or probability values at nodes are available. Constrained Bayesian Networks generalize a Bayesian Network such that probabilities can be symbolic, arithmetic expressions and where the meaning of the network is constrained by finitely many formulas from the theory of the reals. A formal semantics for constrained Bayesian Networks over first-order logic of the reals is given, which enables non-linear and non-convex optimisation algorithms that rely on decision procedures for this logic, and supports the composition of several constrained Bayesian Networks. A non-trivial case study in arms control, where few or no data are available to assess the effectiveness of an arms inspection process, evaluates our approach. An open-access prototype implementation of these foundations and their algorithms uses the SMT solver Z3 as decision procedure, leverages an open-source package for Bayesian inference to symbolic computation, and is evaluated experimentally.
[ { "version": "v1", "created": "Mon, 15 May 2017 16:48:12 GMT" } ]
1,494,892,800,000
[ [ "Beaumont", "Paul", "" ], [ "Huth", "Michael", "" ] ]
1705.05515
GyongIl Ryang
Jon JaeGyong, Mun JongHui, Ryang GyongIl
A Method for Determining Weights of Criterias and Alternative of Fuzzy Group Decision Making Problem
12 pages, 3 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we constructed a model to determine the weights of criteria and presented a solution for determining the optimal alternative by using the constructed model and relationship analysis between criteria in a fuzzy group decision-making problem with different forms of preference information of decision makers on criteria.
[ { "version": "v1", "created": "Tue, 16 May 2017 03:10:56 GMT" } ]
1,494,979,200,000
[ [ "JaeGyong", "Jon", "" ], [ "JongHui", "Mun", "" ], [ "GyongIl", "Ryang", "" ] ]
1705.05551
Katsunari Shibata
Katsunari Shibata and Yuki Goto
New Reinforcement Learning Using a Chaotic Neural Network for Emergence of "Thinking" - "Exploration" Grows into "Thinking" through Learning -
The Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM) 2017, 5 pages, 6 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Expectation for the emergence of higher functions is getting larger in the framework of end-to-end reinforcement learning using a recurrent neural network. However, the emergence of "thinking" that is a typical higher function is difficult to realize because "thinking" needs non fixed-point, flow-type attractors with both convergence and transition dynamics. Furthermore, in order to introduce "inspiration" or "discovery" in "thinking", not completely random but unexpected transition should be also required. By analogy to "chaotic itinerancy", we have hypothesized that "exploration" grows into "thinking" through learning by forming flow-type attractors on chaotic random-like dynamics. It is expected that if rational dynamics are learned in a chaotic neural network (ChNN), coexistence of rational state transition, inspiration-like state transition and also random-like exploration for unknown situation can be realized. Based on the above idea, we have proposed new reinforcement learning using a ChNN as an actor. The positioning of exploration is completely different from the conventional one. The chaotic dynamics inside the ChNN produces exploration factors by itself. Since external random numbers for stochastic action selection are not used, exploration factors cannot be isolated from the output. Therefore, the learning method is also completely different from the conventional one. At each non-feedback connection, one variable named causality trace takes in and maintains the input through the connection according to the change in its output. Using the trace and TD error, the weight is updated. In this paper, as the result of a recent simple task to see whether the new learning works or not, it is shown that a robot with two wheels and two visual sensors reaches a target while avoiding an obstacle after learning, though there is still much room for improvement.
[ { "version": "v1", "created": "Tue, 16 May 2017 06:54:04 GMT" } ]
1,494,979,200,000
[ [ "Shibata", "Katsunari", "" ], [ "Goto", "Yuki", "" ] ]
1705.05637
Jakub Kowalski
Bartosz Kostka, Jaroslaw Kwiecien, Jakub Kowalski, Pawel Rychlikowski
Text-based Adventures of the Golovin AI Agent
null
null
10.1109/CIG.2017.8080433
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The domain of text-based adventure games has been recently established as a new challenge of creating an agent that is both able to understand natural language, and acts intelligently in text-described environments. In this paper, we present our approach to tackle the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suitable to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks such as fighting opponents, managing inventory, and navigating on the game map. We validated the usefulness of these mechanisms, measuring the agent's performance on a set of 50 interactive fiction games. Finally, we show that our agent plays on a level comparable to the winner of last year's Text-Based Adventure AI Competition.
[ { "version": "v1", "created": "Tue, 16 May 2017 10:55:08 GMT" } ]
1,554,163,200,000
[ [ "Kostka", "Bartosz", "" ], [ "Kwiecien", "Jaroslaw", "" ], [ "Kowalski", "Jakub", "" ], [ "Rychlikowski", "Pawel", "" ] ]
1705.05756
Witold Rudnicki
Krzysztof Mnich and Witold R. Rudnicki
All-relevant feature selection using multidimensional filters with exhaustive search
27 pages, 11 figures, 3 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a method for identification of the informative variables in the information system with discrete decision variables. It is targeted specifically towards discovery of the variables that are non-informative when considered alone, but are informative when the synergistic interactions between multiple variables are considered. To this end, the mutual entropy of all possible k-tuples of variables with decision variable is computed. Then, for each variable the maximal information gain due to interactions with other variables is obtained. For non-informative variables this quantity conforms to the well known statistical distributions. This allows for discerning truly informative variables from non-informative ones. For demonstration of the approach, the method is applied to several synthetic datasets that involve complex multidimensional interactions between variables. It is capable of identifying most important informative variables, even in the case when the dimensionality of the analysis is smaller than the true dimensionality of the problem. What is more, the high sensitivity of the algorithm allows for detection of the influence of nuisance variables on the response variable.
[ { "version": "v1", "created": "Tue, 16 May 2017 15:11:10 GMT" } ]
1,494,979,200,000
[ [ "Mnich", "Krzysztof", "" ], [ "Rudnicki", "Witold R.", "" ] ]
1705.05769
Varun Ojha
Varun Kumar Ojha, Vaclav Snasel, Ajith Abraham
Multiobjective Programming for Type-2 Hierarchical Fuzzy Inference Trees
null
IEEE Transactions on Fuzzy Systems 2017
10.1109/TFUZZ.2017.2698399
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a design of hierarchical fuzzy inference tree (HFIT). An HFIT produces an optimum treelike structure, i.e., a natural hierarchical structure that accommodates simplicity by combining several low-dimensional fuzzy inference systems (FISs). Such a natural hierarchical structure provides a high degree of approximation accuracy. The construction of HFIT takes place in two phases. Firstly, a nondominated sorting based multiobjective genetic programming (MOGP) is applied to obtain a simple tree structure (a low complexity model) with a high accuracy. Secondly, the differential evolution algorithm is applied to optimize the obtained tree's parameters. In the derived tree, each node acquires a different input's combination, where the evolutionary process governs the input's combination. Hence, HFIT nodes are heterogeneous in nature, which leads to a high diversity among the rules generated by the HFIT. Additionally, the HFIT provides an automatic feature selection because it uses MOGP for the tree's structural optimization that accepts inputs only relevant to the knowledge contained in data. The HFIT was studied in the context of both type-1 and type-2 FISs, and its performance was evaluated through six application problems. Moreover, the proposed multiobjective HFIT was compared both theoretically and empirically with recently proposed FISs methods from the literature, such as McIT2FIS, TSCIT2FNN, SIT2FNN, RIT2FNS-WB, eT2FIS, MRIT2NFS, IT2FNN-SVR, etc. From the obtained results, it was found that the HFIT provided less complex and highly accurate models compared to the models produced by the most of other methods. Hence, the proposed HFIT is an efficient and competitive alternative to the other FISs for function approximation and feature selection.
[ { "version": "v1", "created": "Tue, 16 May 2017 15:34:19 GMT" } ]
1,494,979,200,000
[ [ "Ojha", "Varun Kumar", "" ], [ "Snasel", "Vaclav", "" ], [ "Abraham", "Ajith", "" ] ]
1705.05983
Chien-Ping Lu
Chien-Ping Lu
AI, Native Supercomputing and The Revival of Moore's Law
17 pages, 13 figures; to be published in IEEE APSIPA Transaction on Signal and Information Processing as an invited paper on Industrial Technology Advances
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on Alan Turing's proposition on AI and computing machinery, which shaped Computing as we know it today, the new AI computing machinery should comprise a universal computer and a universal learning machine. The latter should understand linear algebra natively to overcome the slowdown of Moore's law. In such a universal learning machine, a computing unit does not need to keep the legacy of a universal computing core. The data can be distributed to the computing units, and the results can be collected from them through Collective Streaming, reminiscent of Collective Communication in Supercomputing. It is not necessary to use a GPU-like deep memory hierarchy, nor a TPU-like fine-grain mesh.
[ { "version": "v1", "created": "Wed, 17 May 2017 02:15:27 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 16:30:39 GMT" } ]
1,495,584,000,000
[ [ "Lu", "Chien-Ping", "" ] ]
1705.05986
Srinivasan Parthasarathy
Yanjie Fu, Charu Aggarwal, Srinivasan Parthasarathy, Deepak S. Turaga, Hui Xiong
REMIX: Automated Exploration for Interactive Outlier Detection
To appear in KDD 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outlier detection is the identification of points in a dataset that do not conform to the norm. Outlier detection is highly sensitive to the choice of the detection algorithm and the feature subspace used by the algorithm. Extracting domain-relevant insights from outliers needs systematic exploration of these choices since diverse outlier sets could lead to complementary insights. This challenge is especially acute in an interactive setting, where the choices must be explored in a time-constrained manner. In this work, we present REMIX, the first system to address the problem of outlier detection in an interactive setting. REMIX uses a novel mixed integer programming (MIP) formulation for automatically selecting and executing a diverse set of outlier detectors within a time limit. This formulation incorporates multiple aspects such as (i) an upper limit on the total execution time of detectors (ii) diversity in the space of algorithms and features, and (iii) meta-learning for evaluating the cost and utility of detectors. REMIX provides two distinct ways for the analyst to consume its results: (i) a partitioning of the detectors explored by REMIX into perspectives through low-rank non-negative matrix factorization; each perspective can be easily visualized as an intuitive heatmap of experiments versus outliers, and (ii) an ensembled set of outliers which combines outlier scores from all detectors. We demonstrate the benefits of REMIX through extensive empirical validation on real-world data.
[ { "version": "v1", "created": "Wed, 17 May 2017 02:17:48 GMT" } ]
1,495,065,600,000
[ [ "Fu", "Yanjie", "" ], [ "Aggarwal", "Charu", "" ], [ "Parthasarathy", "Srinivasan", "" ], [ "Turaga", "Deepak S.", "" ], [ "Xiong", "Hui", "" ] ]
1705.06342
Thommen Karimpanal George
Thommen George Karimpanal, Erik Wilhelm
Identification and Off-Policy Learning of Multiple Objectives Using Adaptive Clustering
Accepted in Neurocomputing: Special Issue on Multiobjective Reinforcement Learning: Theory and Applications, 24 pages, 6 figures
Neurocomputing 263, 39-47, 2017
10.1016/j.neucom.2017.04.074
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present a methodology that enables an agent to make efficient use of its exploratory actions by autonomously identifying possible objectives in its environment and learning them in parallel. The identification of objectives is achieved using an online and unsupervised adaptive clustering algorithm. The identified objectives are learned (at least partially) in parallel using Q-learning. Using a simulated agent and environment, it is shown that the converged or partially converged value function weights resulting from off-policy learning can be used to accumulate knowledge about multiple objectives without any additional exploration. We claim that the proposed approach could be useful in scenarios where the objectives are initially unknown or in real world scenarios where exploration is typically a time and energy intensive process. The implications and possible extensions of this work are also briefly discussed.
[ { "version": "v1", "created": "Wed, 17 May 2017 20:55:15 GMT" } ]
1,547,164,800,000
[ [ "Karimpanal", "Thommen George", "" ], [ "Wilhelm", "Erik", "" ] ]
1705.07095
Ondrej Kuzelka
Ondrej Kuzelka, Jesse Davis, Steven Schockaert
Induction of Interpretable Possibilistic Logic Theories from Relational Data
Longer version of a paper appearing in IJCAI 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which make them considerably more interpretable than those obtained by e.g. neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways and weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.
[ { "version": "v1", "created": "Fri, 19 May 2017 17:12:07 GMT" } ]
1,495,411,200,000
[ [ "Kuzelka", "Ondrej", "" ], [ "Davis", "Jesse", "" ], [ "Schockaert", "Steven", "" ] ]
1705.07105
Charalampos Nikolaou
Charalampos Nikolaou and Egor V. Kostylev and George Konstantinidis and Mark Kaminski and Bernardo Cuenca Grau and Ian Horrocks
The Bag Semantics of Ontology-Based Data Access
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontology-based data access (OBDA) is a popular approach for integrating and querying multiple data sources by means of a shared ontology. The ontology is linked to the sources using mappings, which assign views over the data to ontology predicates. Motivated by the need for OBDA systems supporting database-style aggregate queries, we propose a bag semantics for OBDA, where duplicate tuples in the views defined by the mappings are retained, as is the case in standard databases. We show that bag semantics makes conjunctive query answering in OBDA coNP-hard in data complexity. To regain tractability, we consider a rather general class of queries and show its rewritability to a generalisation of the relational calculus to bags.
[ { "version": "v1", "created": "Fri, 19 May 2017 17:33:28 GMT" } ]
1,495,411,200,000
[ [ "Nikolaou", "Charalampos", "" ], [ "Kostylev", "Egor V.", "" ], [ "Konstantinidis", "George", "" ], [ "Kaminski", "Mark", "" ], [ "Grau", "Bernardo Cuenca", "" ], [ "Horrocks", "Ian", "" ] ]
1705.07177
Mikael Henaff
Mikael Henaff, William F. Whitney, Yann LeCun
Model-Based Planning with Discrete and Continuous Actions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over model-free RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces. However, this approach does not apply straightforwardly when the action space is discrete. In this work, we show that it is in fact possible to effectively perform planning via backprop in discrete action spaces, using a simple parameterization of the action vectors on the simplex combined with input noise when training the forward model. Our experiments show that this approach can match or outperform model-free RL and discrete planning methods on gridworld navigation tasks in terms of performance and/or planning time while using limited environment interactions, and can additionally be used to perform model-based control in a challenging new task where the action space combines discrete and continuous actions. We furthermore propose a policy distillation approach which yields a fast policy network which can be used at inference time, removing the need for an iterative planning procedure.
[ { "version": "v1", "created": "Fri, 19 May 2017 20:38:49 GMT" }, { "version": "v2", "created": "Wed, 4 Apr 2018 06:34:26 GMT" } ]
1,522,886,400,000
[ [ "Henaff", "Mikael", "" ], [ "Whitney", "William F.", "" ], [ "LeCun", "Yann", "" ] ]
1705.07339
Jin-Kao Hao
Yi Zhou and Jin-Kao Hao
Combining tabu search and graph reduction to solve the maximum balanced biclique problem
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Maximum Balanced Biclique Problem is a well-known graph model with relevant applications in diverse domains. This paper introduces a novel algorithm, which combines an effective constraint-based tabu search procedure and two dedicated graph reduction techniques. We verify the effectiveness of the algorithm on 30 classical random benchmark graphs and 25 very large real-life sparse graphs from the popular Koblenz Network Collection (KONECT). The results show that the algorithm improves the best-known results (new lower bounds) for 10 classical benchmarks and obtains the optimal solutions for 14 KONECT instances.
[ { "version": "v1", "created": "Sat, 20 May 2017 17:47:31 GMT" } ]
1,495,497,600,000
[ [ "Zhou", "Yi", "" ], [ "Hao", "Jin-Kao", "" ] ]
1705.07381
Luis Pineda
Luis Pineda and Shlomo Zilberstein
Generalizing the Role of Determinization in Probabilistic Planning
null
null
null
UM-CS-2017-006
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stochastic shortest path problem (SSP) is a highly expressive model for probabilistic planning. The computational hardness of SSPs has sparked interest in determinization-based planners that can quickly solve large problems. However, existing methods employ a simplistic approach to determinization. In particular, they ignore the possibility of tailoring the determinization to the specific characteristics of the target domain. In this work we examine this question, by showing that learning a good determinization for a planning domain can be done efficiently and can improve performance. Moreover, we show how to directly incorporate probabilistic reasoning into the planning problem when a good determinization is not sufficient by itself. Based on these insights, we introduce a planner, FF-LAO*, that outperforms state-of-the-art probabilistic planners on several well-known competition benchmarks.
[ { "version": "v1", "created": "Sun, 21 May 2017 02:39:02 GMT" }, { "version": "v2", "created": "Sat, 29 Jul 2017 14:25:10 GMT" } ]
1,501,545,600,000
[ [ "Pineda", "Luis", "" ], [ "Zilberstein", "Shlomo", "" ] ]
1705.07429
Sergey Paramonov
Sergey Paramonov, Christian Bessiere, Anton Dries, Luc De Raedt
Sketched Answer Set Programming
15 pages, 11 figures; to appear in ICTAI 2018
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Answer Set Programming (ASP) is a powerful modeling formalism for combinatorial problems. However, writing ASP models is not trivial. We propose a novel method, called Sketched Answer Set Programming (SkASP), aiming at supporting the user in resolving this issue. The user writes an ASP program while marking uncertain parts open with question marks. In addition, the user provides a number of positive and negative examples of the desired program behaviour. The sketched model is rewritten into another ASP program, which is solved by traditional methods. As a result, the user obtains a functional and reusable ASP program modelling her problem. We evaluate our approach on 21 well known puzzles and combinatorial problems inspired by Karp's 21 NP-complete problems and demonstrate a use-case for a database application based on ASP.
[ { "version": "v1", "created": "Sun, 21 May 2017 11:03:53 GMT" }, { "version": "v2", "created": "Wed, 22 Aug 2018 09:52:51 GMT" } ]
1,534,982,400,000
[ [ "Paramonov", "Sergey", "" ], [ "Bessiere", "Christian", "" ], [ "Dries", "Anton", "" ], [ "De Raedt", "Luc", "" ] ]
1705.07460
Min Xu
Min Xu
Experience enrichment based task independent reward model
4 pages, 1 figure
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For most reinforcement learning approaches, learning is performed by maximizing an accumulative reward that is manually defined for specific tasks. However, in the real world, rewards are emergent phenomena arising from the complex interactions between agents and environments. In this paper, we propose an implicit generic reward model for reinforcement learning. Unlike rewards that are manually defined for specific tasks, such an implicit reward is task independent. It comes only from the deviation from the agents' previous experiences.
[ { "version": "v1", "created": "Sun, 21 May 2017 15:19:20 GMT" } ]
1,495,497,600,000
[ [ "Xu", "Min", "" ] ]
1705.07615
John Aslanides
John Aslanides
AIXIjs: A Software Demo for General Reinforcement Learning
Masters thesis. Australian National University, October 2016. 97 pp
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Reinforcement learning is a general and powerful framework with which to study and implement artificial intelligence. Recent advances in deep learning have enabled RL algorithms to achieve impressive performance in restricted domains such as playing Atari video games (Mnih et al., 2015) and, recently, the board game Go (Silver et al., 2016). However, we are still far from constructing a generally intelligent agent. Many of the obstacles and open questions are conceptual: What does it mean to be intelligent? How does one explore and learn optimally in general, unknown environments? What, in fact, does it mean to be optimal in the general sense? The universal Bayesian agent AIXI (Hutter, 2005) is a model of a maximally intelligent agent, and plays a central role in the sub-field of general reinforcement learning (GRL). Recently, AIXI has been shown to be flawed in important ways; it doesn't explore enough to be asymptotically optimal (Orseau, 2010), and it can perform poorly with certain priors (Leike and Hutter, 2015). Several variants of AIXI have been proposed to attempt to address these shortfalls: among them are entropy-seeking agents (Orseau, 2011), knowledge-seeking agents (Orseau et al., 2013), Bayes with bursts of exploration (Lattimore, 2013), MDL agents (Leike, 2016a), Thompson sampling (Leike et al., 2016), and optimism (Sunehag and Hutter, 2015). We present AIXIjs, a JavaScript implementation of these GRL agents. This implementation is accompanied by a framework for running experiments against various environments, similar to OpenAI Gym (Brockman et al., 2016), and a suite of interactive demos that explore different properties of the agents, similar to REINFORCEjs (Karpathy, 2015). We use AIXIjs to present numerous experiments illustrating fundamental properties of, and differences between, these agents.
[ { "version": "v1", "created": "Mon, 22 May 2017 08:56:54 GMT" } ]
1,495,497,600,000
[ [ "Aslanides", "John", "" ] ]
1705.07961
Irina Georgescu
Irina Georgescu
Compatible extensions and consistent closures: a fuzzy approach
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper $\ast$--compatible extensions of fuzzy relations are studied, generalizing some results obtained by Duggan in the case of crisp relations. From this general result, fuzzy versions of some important extension theorems for crisp relations (Szpilrajn, Hansson, Suzumura) are obtained as particular cases. Two notions of consistent closure of a fuzzy relation are introduced.
[ { "version": "v1", "created": "Mon, 22 May 2017 19:27:19 GMT" } ]
1,495,584,000,000
[ [ "Georgescu", "Irina", "" ] ]
1705.07996
Neil Lawrence
Neil D. Lawrence
Living Together: Mind and Machine Intelligence
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider the nature of the machine intelligences we have created in the context of our human intelligence. We suggest that the fundamental difference between human and machine intelligence comes down to \emph{embodiment factors}. We define embodiment factors as the ratio between an entity's ability to communicate information vs compute information. We speculate on the role of embodiment factors in driving our own intelligence and consciousness. We briefly review dual process models of cognition and cast machine intelligence within that framework, characterising it as a dominant System Zero, which can drive behaviour through interfacing with us subconsciously. Driven by concerns about the consequence of such a system we suggest prophylactic courses of action that could be considered. Our main conclusion is that it is \emph{not} sentient intelligence we should fear but \emph{non-sentient} intelligence.
[ { "version": "v1", "created": "Mon, 22 May 2017 20:49:43 GMT" } ]
1,495,584,000,000
[ [ "Lawrence", "Neil D.", "" ] ]
1705.08200
Chaoyang Song
Fang Wan and Chaoyang Song
Logical Learning Through a Hybrid Neural Network with Auxiliary Inputs
11 pages, 9 figures, 4 tables
Front. Robot. AI, 30 July 2018
10.3389/frobt.2018.00086
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human reasoning process is seldom a one-way process from an input leading to an output. Instead, it often involves a systematic deduction by ruling out other possible outcomes as a self-checking mechanism. In this paper, we describe the design of a hybrid neural network for logical learning that is similar to human reasoning through the introduction of an auxiliary input, namely the indicators, which act as hints to suggest logical outcomes. We generate these indicators by digging into the hidden information buried underneath the original training data for direct or indirect suggestions. We used the MNIST data to demonstrate the design and use of these indicators in a convolutional neural network. We trained a series of such hybrid neural networks with variations of the indicators. Our results show that these hybrid neural networks are very robust in generating logical outcomes with inherently higher prediction accuracy than the direct use of the original input and output in apparent models. Such improved predictability with reassured logical confidence is obtained through the exhaustion of all possible indicators to rule out all illogical outcomes, which is not available in the apparent models. Our logical learning process can effectively cope with the unknown unknowns using a full exploitation of all existing knowledge available for learning. The design and implementation of the hints, namely the indicators, become an essential part of artificial intelligence for logical learning. We also introduce an ongoing application setup for this hybrid neural network in an autonomous grasping robot, namely as_DeepClaw, aiming at learning an optimized grasping pose through logical learning.
[ { "version": "v1", "created": "Tue, 23 May 2017 12:11:30 GMT" } ]
1,583,798,400,000
[ [ "Wan", "Fang", "" ], [ "Song", "Chaoyang", "" ] ]
1705.08218
Xiaojian Wu
Xiaojian Wu, Yexiang Xue, Bart Selman, Carla P. Gomes
XOR-Sampling for Network Design with Correlated Stochastic Events
In Proceedings of the Twenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17). The first two authors contribute equally
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many network optimization problems can be formulated as stochastic network design problems in which edges are present or absent stochastically. Furthermore, protective actions can guarantee that edges will remain present. We consider the problem of finding the optimal protection strategy under a budget limit in order to maximize some connectivity measurements of the network. Previous approaches rely on the assumption that edges are independent. In this paper, we consider a more realistic setting where multiple edges are not independent due to natural disasters or regional events that make the states of multiple edges stochastically correlated. We use Markov Random Fields to model the correlation and define a new stochastic network design framework. We provide a novel algorithm based on Sample Average Approximation (SAA) coupled with a Gibbs or XOR sampler. The experimental results on real road network data show that the policies produced by SAA with the XOR sampler have higher quality and lower variance compared to SAA with Gibbs sampler.
[ { "version": "v1", "created": "Tue, 23 May 2017 12:50:36 GMT" }, { "version": "v2", "created": "Wed, 24 May 2017 01:38:57 GMT" } ]
1,495,670,400,000
[ [ "Wu", "Xiaojian", "" ], [ "Xue", "Yexiang", "" ], [ "Selman", "Bart", "" ], [ "Gomes", "Carla P.", "" ] ]
1705.08245
Vincent Huang
Vincent Huang, Tobias Ley, Martha Vlachou-Konchylaki, Wenfeng Hu
Enhanced Experience Replay Generation for Efficient Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Applying deep reinforcement learning (RL) to real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve with a 20% improvement of training time in the beginning of learning, compared to no pre-training, and an improvement of about 5% with smaller variations compared to training with a GAN. For real-time systems with sparse and slow data sampling, the EGAN could be used to speed up the early phases of the training process.
[ { "version": "v1", "created": "Tue, 23 May 2017 13:36:00 GMT" }, { "version": "v2", "created": "Mon, 29 May 2017 14:24:08 GMT" } ]
1,496,102,400,000
[ [ "Huang", "Vincent", "" ], [ "Ley", "Tobias", "" ], [ "Vlachou-Konchylaki", "Martha", "" ], [ "Hu", "Wenfeng", "" ] ]
1705.08320
Svetlin Penkov
Svetlin Penkov and Subramanian Ramamoorthy
Explaining Transition Systems through Program Induction
submitted to Neural Information Processing Systems 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining and reasoning about processes which underlie observed black-box phenomena enables the discovery of causal mechanisms, derivation of suitable abstract representations and the formulation of more robust predictions. We propose to learn high level functional programs in order to represent abstract models which capture the invariant structure in the observed data. We introduce the $\pi$-machine (program-induction machine) -- an architecture able to induce interpretable LISP-like programs from observed data traces. We propose an optimisation procedure for program learning based on backpropagation, gradient descent and A* search. We apply the proposed method to three problems: system identification of dynamical systems, explaining the behaviour of a DQN agent and learning by demonstration in a human-robot interaction scenario. Our experimental results show that the $\pi$-machine can efficiently induce interpretable programs from individual data traces.
[ { "version": "v1", "created": "Tue, 23 May 2017 14:38:28 GMT" } ]
1,501,113,600,000
[ [ "Penkov", "Svetlin", "" ], [ "Ramamoorthy", "Subramanian", "" ] ]
1705.08439
Thomas Anthony
Thomas Anthony, Zheng Tian, David Barber
Thinking Fast and Slow with Deep Learning and Tree Search
v1 to v2: - Add a value function in MCTS - Some MCTS hyper-parameters changed - Repetition of experiments: improved accuracy and errors shown. (note the reduction in effect size for the tpt/cat experiment) - Results from a longer training run, including changes in expert strength in training - Comparison to MoHex. v3: clarify independence of ExIt and AG0. v4: see appendix E
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.
[ { "version": "v1", "created": "Tue, 23 May 2017 17:48:51 GMT" }, { "version": "v2", "created": "Sat, 4 Nov 2017 17:37:18 GMT" }, { "version": "v3", "created": "Fri, 10 Nov 2017 10:01:16 GMT" }, { "version": "v4", "created": "Sun, 3 Dec 2017 10:56:00 GMT" } ]
1,512,432,000,000
[ [ "Anthony", "Thomas", "" ], [ "Tian", "Zheng", "" ], [ "Barber", "David", "" ] ]
1705.08440
Mieczys{\l}aw K{\l}opotek
M.Michalewicz, S.T.Wierzcho\'n, M.A. K{\l}opotek
Knowledge Acquisition, Representation \& Manipulation in Decision Support Systems
Intelligent Information Systems Proceedings of a Workshop held in August\'ow, Poland, 7-11 June, 1993, pages 210- 238
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a methodology and discuss some implementation issues for a project on a statistical/expert approach to data analysis and knowledge acquisition. We discuss some general assumptions underlying the project. Further, the requirements for a user-friendly computer assistant are specified, along with the nature of tools aiding the researcher. Next we show some aspects of the belief network approach and the Dempster-Shafer (DST) methodology introduced in practice in the system SEAD. Specifically, we present the application of DS methodology to the belief revision problem. Further, a concept of an interface to probabilistic and DS belief networks, enabling a user to understand the communication with a belief-network-based reasoning system, is presented.
[ { "version": "v1", "created": "Tue, 23 May 2017 17:51:58 GMT" } ]
1,495,584,000,000
[ [ "Michalewicz", "M.", "" ], [ "Wierzchoń", "S. T.", "" ], [ "Kłopotek", "M. A.", "" ] ]
1705.08492
Yan Zhao
Yan Zhao, Xiao Fang, and David Simchi-Levi
Uplift Modeling with Multiple Treatments and General Response Types
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Randomized experiments have been used to assist decision-making in many areas. They help people select the optimal treatment for the test population with certain statistical guarantees. However, subjects can show significant heterogeneity in response to treatments. The problem of customizing treatment assignment based on subject characteristics is known in the literature as uplift modeling, differential response analysis, or personalized treatment learning. A key feature of uplift modeling is that the data is unlabeled. It is impossible to know whether the chosen treatment is optimal for an individual subject because the response under alternative treatments is unobserved. This presents a challenge to both the training and the evaluation of uplift models. In this paper we describe how to obtain an unbiased estimate of the key performance metric of an uplift model, the expected response. We present a new uplift algorithm which creates a forest of randomized trees. The trees are built with a splitting criterion designed to directly optimize their uplift performance based on the proposed evaluation method. Both the evaluation method and the algorithm apply to an arbitrary number of treatments and general response types. Experimental results on synthetic data and industry-provided data show that our algorithm leads to significant performance improvement over other applicable methods.
[ { "version": "v1", "created": "Tue, 23 May 2017 19:20:18 GMT" } ]
1,495,670,400,000
[ [ "Zhao", "Yan", "" ], [ "Fang", "Xiao", "" ], [ "Simchi-Levi", "David", "" ] ]
1705.08509
Pouria Amirian Dr.
Pouria Amirian, Anahid Basiri, Jeremy Morley
Predictive Analytics for Enhancing Travel Time Estimation in Navigation Apps of Apple, Google, and Microsoft
null
null
10.1145/3003965.3003976
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The explosive growth of location-enabled devices coupled with the increasing use of Internet services has led to an increasing awareness of the importance and usage of geospatial information in many applications. The navigation apps (often called Maps) use a variety of available data sources to calculate and predict the travel time, as well as several options for routing, in public transportation, car, or pedestrian modes. This paper evaluates the pedestrian mode of Maps apps in three major smartphone operating systems (Android, iOS and Windows Phone). In the paper, we will show that the Maps apps on iOS, Android and Windows Phone in pedestrian mode predict travel time without learning from the individual's movement profile. In addition, we will show that those apps suffer from a specific data quality issue relating to the absence of information about the location and type of pedestrian crossings. Finally, we will illustrate learning from the movement profiles of individuals using various predictive analytics models to improve the accuracy of travel time estimation.
[ { "version": "v1", "created": "Tue, 23 May 2017 19:54:19 GMT" } ]
1,495,670,400,000
[ [ "Amirian", "Pouria", "" ], [ "Basiri", "Anahid", "" ], [ "Morley", "Jeremy", "" ] ]
1705.08961
Roni Stern
Roni Stern and Brendan Juba
Efficient, Safe, and Probably Approximately Complete Learning of Action Models
null
International Joint Conference on Artificial Intelligence (IJCAI) 2017
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we explore the theoretical boundaries of planning in a setting where no model of the agent's actions is given. Instead of an action model, a set of successfully executed plans are given and the task is to generate a plan that is safe, i.e., guaranteed to achieve the goal without failing. To this end, we show how to learn a conservative model of the world in which actions are guaranteed to be applicable. This conservative model is then given to an off-the-shelf classical planner, resulting in a plan that is guaranteed to achieve the goal. However, this reduction from model-free planning to model-based planning is not complete: in some cases a plan will not be found even when one exists. We analyze the relation between the number of observed plans and the likelihood that our conservative approach will indeed fail to solve a solvable problem. Our analysis shows that the number of trajectories needed scales gracefully.
[ { "version": "v1", "created": "Wed, 24 May 2017 20:38:52 GMT" } ]
1,495,756,800,000
[ [ "Stern", "Roni", "" ], [ "Juba", "Brendan", "" ] ]
1705.08968
Artur Garcez
Ivan Donadello, Luciano Serafini, Artur d'Avila Garcez
Logic Tensor Networks for Semantic Image Interpretation
14 pages, 2 figures, IJCAI 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are an SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that the use of background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.
[ { "version": "v1", "created": "Wed, 24 May 2017 21:34:14 GMT" } ]
1,495,756,800,000
[ [ "Donadello", "Ivan", "" ], [ "Serafini", "Luciano", "" ], [ "Garcez", "Artur d'Avila", "" ] ]
1705.09045
Ashley Edwards
Ashley D. Edwards, Srijan Sood, and Charles L. Isbell Jr
Cross-Domain Perceptual Reward Functions
A shorter version of this paper was accepted to RLDM (http://rldm.org/rldm2017/)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
[ { "version": "v1", "created": "Thu, 25 May 2017 04:54:36 GMT" }, { "version": "v2", "created": "Wed, 7 Jun 2017 15:44:37 GMT" }, { "version": "v3", "created": "Tue, 25 Jul 2017 15:40:28 GMT" } ]
1,501,027,200,000
[ [ "Edwards", "Ashley D.", "" ], [ "Sood", "Srijan", "" ], [ "Isbell", "Charles L.", "Jr" ] ]
1705.09058
Yihui He
Yihui He, Ming Xiang
An Empirical Analysis of Approximation Algorithms for the Euclidean Traveling Salesman Problem
4 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The traveling salesman problem (TSP) is a classical computer science optimization problem with applications to industrial engineering, theoretical computer science, bioinformatics, and several other disciplines. In recent years, there has been a plethora of novel approaches for approximate solutions, ranging from simplistic greedy to cooperative distributed algorithms derived from artificial intelligence. In this paper, we perform an evaluation and analysis of cornerstone algorithms for the Euclidean TSP. We evaluate greedy, 2-opt, and genetic algorithms. We use several datasets as input for the algorithms, including a small dataset, a medium-sized dataset representing cities in the United States, and a synthetic dataset consisting of 200 cities to test algorithm scalability. We find that the greedy and 2-opt algorithms efficiently calculate solutions for smaller datasets. The genetic algorithm has the best optimality performance for medium to large datasets, but generally has a longer runtime. Our implementation is publicly available.
[ { "version": "v1", "created": "Thu, 25 May 2017 06:21:39 GMT" } ]
1,495,756,800,000
[ [ "He", "Yihui", "" ], [ "Xiang", "Ming", "" ] ]
1705.09218
Mohamed Siala Dr
Begum Genc, Mohamed Siala, Barry O'Sullivan, Gilles Simonin
Finding Robust Solutions to Stable Marriage
IJCAI 2017 proceedings
null
10.24963/ijcai.2017/88
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the notion of robustness in stable matching problems. We first define robustness by introducing $(a,b)$-supermatches. An $(a,b)$-supermatch is a stable matching in which, if $a$ pairs break up, it is possible to find another stable matching by changing the partners of those $a$ pairs and at most $b$ other pairs. In this context, we define the most robust stable matching as a $(1,b)$-supermatch where $b$ is minimum. We show that checking whether a given stable matching is a $(1,b)$-supermatch can be done in polynomial time. Next, we use this procedure to design a constraint programming model, a local search approach, and a genetic algorithm to find the most robust stable matching. Our empirical evaluation on large instances shows that local search outperforms the other approaches.
[ { "version": "v1", "created": "Wed, 24 May 2017 07:49:52 GMT" }, { "version": "v2", "created": "Sun, 20 Aug 2017 12:25:42 GMT" }, { "version": "v3", "created": "Fri, 27 Oct 2017 13:53:56 GMT" } ]
1,509,321,600,000
[ [ "Genc", "Begum", "" ], [ "Siala", "Mohamed", "" ], [ "O'Sullivan", "Barry", "" ], [ "Simonin", "Gilles", "" ] ]
1705.09545
Mark Lewis
Fred Glover, Mark Lewis, Gary Kochenberger
Logical and Inequality Implications for Reducing the Size and Complexity of Quadratic Unconstrained Binary Optimization Problems
30 pages + 6 pages of Appendices
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The quadratic unconstrained binary optimization (QUBO) problem arises in diverse optimization applications ranging from Ising spin problems to classical problems in graph theory and binary discrete optimization. The use of preprocessing to transform the graph representing the QUBO problem into a smaller equivalent graph is important for improving solution quality and time for both exact and metaheuristic algorithms and is a step towards mapping large scale QUBO to hardware graphs used in quantum annealing computers. In an earlier paper (Lewis and Glover, 2016) a set of rules was introduced that achieved significant QUBO reductions as verified through computational testing. Here this work is extended with additional rules that provide further reductions that succeed in exactly solving 10% of the benchmark QUBO problems. An algorithm and associated data structures to efficiently implement the entire set of rules is detailed and computational experiments are reported that demonstrate their efficacy.
[ { "version": "v1", "created": "Fri, 26 May 2017 11:59:49 GMT" } ]
1,496,016,000,000
[ [ "Glover", "Fred", "" ], [ "Lewis", "Mark", "" ], [ "Kochenberger", "Gary", "" ] ]
1705.09811
Torsten Schaub
Martin Gebser and Roland Kaminski and Benjamin Kaufmann and Torsten Schaub
Multi-shot ASP solving with clingo
Under consideration for publication in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new flexible paradigm of grounding and solving in Answer Set Programming (ASP), which we refer to as multi-shot ASP solving, and present its implementation in the ASP system clingo. Multi-shot ASP solving features grounding and solving processes that deal with continuously changing logic programs. In doing so, they remain operative and accommodate changes in a seamless way. For instance, such processes allow for advanced forms of search, as in optimization or theory solving, or interaction with an environment, as in robotics or query-answering. Common to them is that the problem specification evolves during the reasoning process, either because data or constraints are added, deleted, or replaced. This evolutionary aspect adds another dimension to ASP since it brings about state changing operations. We address this issue by providing an operational semantics that characterizes grounding and solving processes in multi-shot ASP solving. This characterization provides a semantic account of grounder and solver states along with the operations manipulating them. The operative nature of multi-shot solving avoids redundancies in relaunching grounder and solver programs and benefits from the solver's learning capacities. clingo accomplishes this by complementing ASP's declarative input language with control capacities. On the declarative side, a new directive allows for structuring logic programs into named and parameterizable subprograms. The grounding and integration of these subprograms into the solving process is completely modular and fully controllable from the procedural side. To this end, clingo offers a new application programming interface that is conveniently accessible via scripting languages.
[ { "version": "v1", "created": "Sat, 27 May 2017 11:52:40 GMT" }, { "version": "v2", "created": "Tue, 20 Mar 2018 16:43:53 GMT" } ]
1,521,590,400,000
[ [ "Gebser", "Martin", "" ], [ "Kaminski", "Roland", "" ], [ "Kaufmann", "Benjamin", "" ], [ "Schaub", "Torsten", "" ] ]
1705.09844
Mark Lewis
Mark Lewis, Fred Glover
Quadratic Unconstrained Binary Optimization Problem Preprocessing: Theory and Empirical Analysis
Benchmark problems used are available from the first author
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Quadratic Unconstrained Binary Optimization problem (QUBO) has become a unifying model for representing a wide range of combinatorial optimization problems, and for linking a variety of disciplines that face these problems. A new class of quantum annealing computer that maps QUBO onto a physical qubit network structure with specific size and edge density restrictions is generating a growing interest in ways to transform the underlying QUBO structure into an equivalent graph having fewer nodes and edges. In this paper we present rules for reducing the size of the QUBO matrix by identifying variables whose value at optimality can be predetermined. We verify that the reductions improve both solution quality and time to solution and, in the case of metaheuristic methods where optimal solutions cannot be guaranteed, the quality of solutions obtained within reasonable time limits. We discuss the general QUBO structural characteristics that can take advantage of these reduction techniques and perform careful experimental design and analysis to identify and quantify the specific characteristics most affecting reduction. The rules make it possible to dramatically improve solution times on a new set of problems using both the exact Cplex solver and a tabu search metaheuristic.
[ { "version": "v1", "created": "Sat, 27 May 2017 17:09:56 GMT" } ]
1,496,102,400,000
[ [ "Lewis", "Mark", "" ], [ "Glover", "Fred", "" ] ]
1705.09879
Patrick Rodler
Patrick Rodler and Wolfgang Schmid and Konstantin Schekotihin
Inexpensive Cost-Optimized Measurement Proposal for Sequential Model-Based Diagnosis
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we present strategies for (optimal) measurement selection in model-based sequential diagnosis. In particular, assuming a set of leading diagnoses being given, we show how queries (sets of measurements) can be computed and optimized along two dimensions: expected number of queries and cost per query. By means of a suitable decoupling of two optimizations and a clever search space reduction the computations are done without any inference engine calls. For the full search space, we give a method requiring only a polynomial number of inferences and guaranteeing query properties existing methods cannot provide. Evaluation results using real-world problems indicate that the new method computes (virtually) optimal queries instantly independently of the size and complexity of the considered diagnosis problems.
[ { "version": "v1", "created": "Sun, 28 May 2017 00:47:29 GMT" } ]
1,496,102,400,000
[ [ "Rodler", "Patrick", "" ], [ "Schmid", "Wolfgang", "" ], [ "Schekotihin", "Konstantin", "" ] ]
1705.09970
Steven Holtzen
Steven Holtzen and Todd Millstein and Guy Van den Broeck
Probabilistic Program Abstractions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstraction is a fundamental tool for reasoning about complex systems. Program abstraction has been utilized to great effect for analyzing deterministic programs. At the heart of program abstraction is the relationship between a concrete program, which is difficult to analyze, and an abstract program, which is more tractable. Program abstractions, however, are typically not probabilistic. We generalize non-deterministic program abstractions to probabilistic program abstractions by explicitly quantifying the non-deterministic choices. Our framework upgrades key definitions and properties of abstractions to the probabilistic context. We also discuss preliminary ideas for performing inference on probabilistic abstractions and general probabilistic programs.
[ { "version": "v1", "created": "Sun, 28 May 2017 17:53:01 GMT" }, { "version": "v2", "created": "Fri, 14 Jul 2017 15:46:25 GMT" } ]
1,500,249,600,000
[ [ "Holtzen", "Steven", "" ], [ "Millstein", "Todd", "" ], [ "Broeck", "Guy Van den", "" ] ]
1705.09990
Smitha Milli
Smitha Milli, Dylan Hadfield-Menell, Anca Dragan, Stuart Russell
Should Robots be Obedient?
Accepted to IJCAI 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intuitively, obedience -- following the order that a human gives -- seems like a good property for a robot to have. But, we humans are not perfect and we may give orders that are not best aligned to our preferences. We show that when a human is not perfectly rational then a robot that tries to infer and act according to the human's underlying preferences can always perform better than a robot that simply follows the human's literal order. Thus, there is a tradeoff between the obedience of a robot and the value it can attain for its owner. We investigate how this tradeoff is impacted by the way the robot infers the human's preferences, showing that some methods err more on the side of obedience than others. We then analyze how performance degrades when the robot has a misspecified model of the features that the human cares about or the level of rationality of the human. Finally, we study how robots can start detecting such model misspecification. Overall, our work suggests that there might be a middle ground in which robots intelligently decide when to obey human orders, but err on the side of obedience.
[ { "version": "v1", "created": "Sun, 28 May 2017 20:51:19 GMT" } ]
1,496,102,400,000
[ [ "Milli", "Smitha", "" ], [ "Hadfield-Menell", "Dylan", "" ], [ "Dragan", "Anca", "" ], [ "Russell", "Stuart", "" ] ]
1705.10044
Ryuta Arisaka
Ryuta Arisaka, Ken Satoh
Abstract Argumentation / Persuasion / Dynamics
Arisaka R., Satoh K. (2018) Abstract Argumentation / Persuasion / Dynamics. In: Miller T., Oren N., Sakurai Y., Noda I., Savarimuthu B., Cao Son T. (eds) PRIMA 2018: Principles and Practice of Multi-Agent Systems. PRIMA 2018. Lecture Notes in Computer Science, vol 11224. Springer, Cham
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The act of persuasion, a key component in rhetoric argumentation, may be viewed as a dynamics modifier. We extend Dung's frameworks with acts of persuasion among agents, and consider interactions among attack, persuasion and defence that have been largely unheeded so far. We characterise basic notions of admissibilities in this framework, and show a way of enriching them through, effectively, CTL (computation tree logic) encoding, which also permits importation of the theoretical results known to the logic into our argumentation frameworks. Our aim is to complement the growing interest in coordination of static and dynamic argumentation.
[ { "version": "v1", "created": "Mon, 29 May 2017 06:14:56 GMT" }, { "version": "v2", "created": "Fri, 2 Jun 2017 05:37:28 GMT" }, { "version": "v3", "created": "Wed, 7 Nov 2018 08:28:40 GMT" } ]
1,541,635,200,000
[ [ "Arisaka", "Ryuta", "" ], [ "Satoh", "Ken", "" ] ]
1705.10201
Leigh Sheneman
Leigh Sheneman and Arend Hintze
Machine Learned Learning Machines
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. Though these are methods that typically operate separately, we combine evolutionary adaptation and machine learning into one approach. Our focus is on machines that can learn during their lifetime, but instead of equipping them with a machine learning algorithm we aim to let them evolve their ability to learn by themselves. We use evolvable networks of probabilistic and deterministic logic gates, known as Markov Brains, as our computational model organism. The ability of Markov Brains to learn is augmented by a novel adaptive component that can change its computational behavior based on feedback. We show that Markov Brains can indeed evolve to incorporate these feedback gates to improve their adaptability to variable environments. By combining these two methods, we now also implemented a computational model that can be used to study the evolution of learning.
[ { "version": "v1", "created": "Mon, 29 May 2017 14:07:33 GMT" }, { "version": "v2", "created": "Thu, 31 Aug 2017 15:53:28 GMT" } ]
1,504,224,000,000
[ [ "Sheneman", "Leigh", "" ], [ "Hintze", "Arend", "" ] ]
1705.10217
Javier \'Alvez
Javier \'Alvez and Paqui Lucio and German Rigau
Black-box Testing of First-Order Logic Ontologies Using WordNet
59 pages,14 figures, 6 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial Intelligence aims to provide computer programs with commonsense knowledge to reason about our world. This paper offers a new practical approach towards automated commonsense reasoning with first-order logic (FOL) ontologies. We propose a new black-box testing methodology of FOL SUMO-based ontologies by exploiting WordNet and its mapping into SUMO. Our proposal includes a method for the (semi-)automatic creation of a very large benchmark of competency questions and a procedure for its automated evaluation by using automated theorem provers (ATPs). Applying different quality criteria, our testing proposal enables a successful evaluation of a) the competency of several translations of SUMO into FOL and b) the performance of various automated ATPs. Finally, we also provide a fine-grained and complete analysis of the commonsense reasoning competency of current FOL SUMO-based ontologies.
[ { "version": "v1", "created": "Mon, 29 May 2017 14:41:20 GMT" }, { "version": "v2", "created": "Thu, 22 Mar 2018 13:28:14 GMT" }, { "version": "v3", "created": "Fri, 23 Mar 2018 14:43:13 GMT" } ]
1,522,022,400,000
[ [ "Álvez", "Javier", "" ], [ "Lucio", "Paqui", "" ], [ "Rigau", "German", "" ] ]
1705.10219
Javier \'Alvez
Javier \'Alvez and Montserrat Hermo and Paqui Lucio and German Rigau
Automatic White-Box Testing of First-Order Logic Ontologies
38 pages, 5 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formal ontologies are axiomatizations in a logic-based formalism. The development of formal ontologies, and their important role in the Semantic Web area, is generating considerable research on the use of automated reasoning techniques and tools that help in ontology engineering. One of the main aims is to refine and to improve axiomatizations for enabling automated reasoning tools to efficiently infer reliable information. Defects in the axiomatization can not only cause wrong inferences, but can also hinder the inference of expected information, either by increasing the computational cost of, or even preventing, the inference. In this paper, we introduce a novel, fully automatic white-box testing framework for first-order logic ontologies. Our methodology is based on the detection of inference-based redundancies in the given axiomatization. The application of the proposed testing method is fully automatic since a) the automated generation of tests is guided only by the syntax of axioms and b) the evaluation of tests is performed by automated theorem provers. Our proposal enables the detection of defects and serves to certify the grade of suitability --for reasoning purposes-- of every axiom. We formally define the set of tests that are generated from any axiom and prove that every test is logically related to redundancies in the axiom from which the test has been generated. We have implemented our method and used this implementation to automatically detect several non-trivial defects that were hidden in various first-order logic ontologies. Throughout the paper we provide illustrative examples of these defects, explain how they were found, and how each proof --given by an automated theorem-prover-- provides useful hints on the nature of each defect. Additionally, by correcting all the detected defects, we have obtained an improved version of one of the tested ontologies: Adimen-SUMO.
[ { "version": "v1", "created": "Mon, 29 May 2017 14:42:48 GMT" }, { "version": "v2", "created": "Tue, 26 Jun 2018 19:23:02 GMT" }, { "version": "v3", "created": "Wed, 30 Jan 2019 08:14:56 GMT" } ]
1,548,892,800,000
[ [ "Álvez", "Javier", "" ], [ "Hermo", "Montserrat", "" ], [ "Lucio", "Paqui", "" ], [ "Rigau", "German", "" ] ]
1705.10308
Mieczys{\l}aw K{\l}opotek
Mieczys{\l}aw K{\l}opotek
Learning Belief Network Structure From Data under Causal Insufficiency
A short version of this paper appeared in [Klopotek:94m] M.A. K{\l}opotek: Learning Belief Network Structure From Data under Causal Insufficiency. [in:] F. Bergadano, L.DeRaed Eds.: Machine Learning ECML-94 , Proc. 13th European Conference on Machine Learning, Catania, Italy, 6-8 April 1994, Lecture Notes in Artificial Intelligence 784, Springer-Verlag, 1994, pp. 379-382
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Though a belief network (a representation of the joint probability distribution, see [3]) and a causal network (a representation of causal relationships [14]) are intended to mean different things, they are closely related. Both assume an underlying dag (directed acyclic graph) structure of relations among variables and if Markov condition and faithfulness condition [15] are met, then a causal network is in fact a belief network. The difference becomes apparent when we recover belief network and causal network structure from data. A causal network structure may be impossible to recover completely from data as not all directions of causal links may be uniquely determined [15]. Fortunately, if we deal with causally sufficient sets of variables (that is, whenever significant influence variables are not omitted from observation), then there exists the possibility to identify the family of belief networks a causal network belongs to [16]. Regrettably, to our knowledge, a similar result is not directly known for causally insufficient sets of variables. Spirtes, Glymour and Scheines developed a CI algorithm to handle this situation, but it leaves some important questions open. The big open question is whether or not the bidirectional edges (that is, indications of a common cause) are the only ones necessary to develop a belief network out of the product of CI, or must there be some other hidden variables added (e.g. by guessing). This paper is devoted to settling this question.
[ { "version": "v1", "created": "Mon, 29 May 2017 17:58:13 GMT" } ]
1,496,102,400,000
[ [ "Kłopotek", "Mieczysław", "" ] ]
1705.10443
Victor Silva
Victor do Nascimento Silva and Luiz Chaimowicz
MOBA: a New Arena for Game AI
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Games have always been popular testbeds for Artificial Intelligence (AI). In the last decade, we have seen the rise of the Multiplayer Online Battle Arena (MOBA) games, which are the most played games nowadays. In spite of this, there are few works that explore MOBA as a testbed for AI Research. In this paper we present and discuss the main features and opportunities offered by MOBA games to Game AI Research. We describe the various challenges faced along the game and also propose a discrete model that can be used to better understand and explore the game. With this, we aim to encourage the use of MOBA as a novel research platform for Game AI.
[ { "version": "v1", "created": "Tue, 30 May 2017 03:12:03 GMT" } ]
1,496,188,800,000
[ [ "Silva", "Victor do Nascimento", "" ], [ "Chaimowicz", "Luiz", "" ] ]
1705.10557
John Aslanides
John Aslanides, Jan Leike, Marcus Hutter
Universal Reinforcement Learning Algorithms: Survey and Experiments
8 pages, 6 figures, Twenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Many state-of-the-art reinforcement learning (RL) algorithms assume that the environment is an ergodic Markov Decision Process (MDP). In contrast, the field of universal reinforcement learning (URL) is concerned with algorithms that make as few assumptions as possible about the environment. The universal Bayesian agent AIXI and a family of related URL algorithms have been developed in this setting. While numerous theoretical optimality results have been proven for these agents, there has been no empirical investigation of their behavior to date. We present a short and accessible survey of these URL algorithms under a unified notation and framework, along with results of some experiments that qualitatively illustrate some properties of the resulting policies, and their relative performance on partially-observable gridworld environments. We also present an open-source reference implementation of the algorithms which we hope will facilitate further understanding of, and experimentation with, these ideas.
[ { "version": "v1", "created": "Tue, 30 May 2017 11:41:00 GMT" } ]
1,496,188,800,000
[ [ "Aslanides", "John", "" ], [ "Leike", "Jan", "" ], [ "Hutter", "Marcus", "" ] ]
1705.10720
Stuart Armstrong
Stuart Armstrong and Benjamin Levinstein
Low Impact Artificial Intelligences
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are many goals for an AI that could become dangerous if the AI becomes superintelligent or otherwise powerful. Much work on the AI control problem has been focused on constructing AI goals that are safe even for such AIs. This paper looks at an alternative approach: defining a general concept of `low impact'. The aim is to ensure that a powerful AI which implements low impact will not modify the world extensively, even if it is given a simple or dangerous goal. The paper proposes various ways of defining and grounding low impact, and discusses methods for ensuring that the AI can still be allowed to have a (desired) impact despite the restriction. The end of the paper addresses known issues with this approach and avenues for future research.
[ { "version": "v1", "created": "Tue, 30 May 2017 16:15:16 GMT" } ]
1,496,188,800,000
[ [ "Armstrong", "Stuart", "" ], [ "Levinstein", "Benjamin", "" ] ]
1705.10726
Naveen Sundar Govindarajulu
Naveen Sundar Govindarajulu, Selmer Bringsjord
Strength Factors: An Uncertainty System for a Quantified Modal Logic
Presented on August 20, 2017 at the Logical Foundations for Uncertainty and Machine Learning Workshop @ IJCAI 2017 in Melbourne, Australia
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new system S for handling uncertainty in a quantified modal logic (first-order modal logic). The system is based on both probability theory and proof theory. The system is derived from Chisholm's epistemology. We concretize Chisholm's system by grounding his undefined and primitive (i.e. foundational) concept of reasonableness in probability and proof theory. S can be useful in systems that have to interact with humans and provide justifications for their uncertainty. As a demonstration of the system, we apply the system to provide a solution to the lottery paradox. Another advantage of the system is that it can be used to provide uncertainty values for counterfactual statements. Counterfactuals are statements that an agent knows for sure are false. Among other cases, counterfactuals are useful when systems have to explain their actions to users. Uncertainties for counterfactuals fall out naturally from our system. Efficient reasoning even in simple first-order logic is a hard problem. Resolution-based first-order reasoning systems have made significant progress over the last several decades in building systems that have solved non-trivial tasks (even unsolved conjectures in mathematics). We present a sketch of a novel algorithm for reasoning that extends first-order resolution. Finally, while there have been many systems of uncertainty for propositional logics, first-order logics and propositional modal logics, there has been very little work in building systems of uncertainty for first-order modal logics. The work described below is in progress and, once finished, will address this lack.
[ { "version": "v1", "created": "Tue, 30 May 2017 16:24:18 GMT" }, { "version": "v2", "created": "Mon, 28 May 2018 06:07:18 GMT" } ]
1,527,552,000,000
[ [ "Govindarajulu", "Naveen Sundar", "" ], [ "Bringsjord", "Selmer", "" ] ]
1705.10834
Thommen George Karimpanal
Thommen George Karimpanal, Roland Bouffanais
Experience Replay Using Transition Sequences
23 pages, 6 figures
Frontiers in Neurorobotics 12 (2018) 32
10.3389/fnbot.2018.00032
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables better learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.
[ { "version": "v1", "created": "Tue, 30 May 2017 19:24:09 GMT" }, { "version": "v2", "created": "Fri, 13 Sep 2019 01:13:39 GMT" } ]
1,664,409,600,000
[ [ "Karimpanal", "Thommen George", "" ], [ "Bouffanais", "Roland", "" ] ]