Dataset schema (column name, cell type, observed min/max as reported by the viewer):

  id               stringlengths    9 .. 10
  submitter        stringlengths    5 .. 47
  authors          stringlengths    5 .. 1.72k
  title            stringlengths    11 .. 234
  comments         stringlengths    1 .. 491
  journal-ref      stringlengths    4 .. 396
  doi              stringlengths    13 .. 97
  report-no        stringlengths    4 .. 138
  categories       stringclasses    1 value
  license          stringclasses    9 values
  abstract         stringlengths    29 .. 3.66k
  versions         listlengths      1 .. 21
  update_date      int64            1,180B .. 1,718B
  authors_parsed   sequencelengths  1 .. 98
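Assuming the rows below follow the schema above (the dict layout, the epoch-milliseconds reading of the int64 `update_date`, and the `[last, first, suffix]` shape of `authors_parsed` are inferred from the records themselves, not documented), a record can be parsed roughly like this:

```python
from datetime import datetime, timezone

# First record of the dump, keyed by the schema's column names.
# Null fields (comments, journal-ref, doi, report-no) are omitted for brevity.
record = {
    "id": "1610.08853",
    "submitter": "Ahmed Alaa",
    "categories": "cs.AI",
    "versions": [{"version": "v1", "created": "Thu, 27 Oct 2016 15:54:04 GMT"}],
    "update_date": 1_477_612_800_000,  # assumption: epoch milliseconds
    "authors_parsed": [
        ["Alaa", "Ahmed M.", ""],
        ["Yoon", "Jinsung", ""],
        ["Hu", "Scott", ""],
        ["van der Schaar", "Mihaela", ""],
    ],
}

# Convert update_date (apparently milliseconds since the Unix epoch) to a date.
update_date = datetime.fromtimestamp(record["update_date"] / 1000,
                                     tz=timezone.utc).date()

# Rebuild display names from the [last, first, suffix] triples.
authors = [" ".join(p for p in (first, last, suffix) if p)
           for last, first, suffix in record["authors_parsed"]]

# Parse the RFC-822-style timestamps inside the versions list.
v1_created = datetime.strptime(record["versions"][0]["created"],
                               "%a, %d %b %Y %H:%M:%S %Z")

print(update_date)      # 2016-10-28
print(authors[0])       # Ahmed M. Alaa
print(v1_created.year)  # 2016
```

Note that `update_date` lands one day after the v1 `created` timestamp here, which is consistent with the milliseconds-since-epoch assumption but not proof of it.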
1610.08853
Ahmed Alaa
Ahmed M. Alaa, Jinsung Yoon, Scott Hu, and Mihaela van der Schaar
Personalized Risk Scoring for Critical Care Prognosis using Mixtures of Gaussian Processes
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments of the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit (ICU) admissions for clinically deteriorating patients. Methods: The risk scoring system learns a set of latent patient subtypes from offline electronic health record data, and trains a mixture of Gaussian Process (GP) experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g. age, gender, transfer status, ICD-9 codes, etc.). Results: Experiments conducted on data from a heterogeneous cohort of 6,321 patients admitted to Ronald Reagan UCLA Medical Center show that our risk score significantly and consistently outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE and SOFA scores, in terms of timeliness, true positive rate (TPR), and positive predictive value (PPV). Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for patients' heterogeneity. Significance: The proposed risk scoring methodology can confer substantial clinical and social benefits on the more than 200,000 critically ill inpatients who suffer cardiac arrest in the US every year.
[ { "version": "v1", "created": "Thu, 27 Oct 2016 15:54:04 GMT" } ]
1,477,612,800,000
[ [ "Alaa", "Ahmed M.", "" ], [ "Yoon", "Jinsung", "" ], [ "Hu", "Scott", "" ], [ "van der Schaar", "Mihaela", "" ] ]
1610.09064
Himabindu Lakkaraju
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Eric Horvitz
Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration
To appear in AAAI 2017; Presented at NIPS Workshop on Reliability in ML, 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive models deployed in the real world may assign incorrect labels to instances with high confidence. Such errors or unknown unknowns are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases encountered at test time. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this paper, we formulate and address the problem of informed discovery of unknown unknowns of any given predictive model where unknown unknowns occur due to systematic biases in the training data. We propose a model-agnostic methodology which uses feedback from an oracle to both identify unknown unknowns and to intelligently guide the discovery. We employ a two-phase approach which first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model, and then utilizes an explore-exploit strategy for discovering unknown unknowns across these partitions. We demonstrate the efficacy of our framework by varying the underlying causes of unknown unknowns across various applications. To the best of our knowledge, this paper presents the first algorithmic approach to the problem of discovering unknown unknowns of predictive models.
[ { "version": "v1", "created": "Fri, 28 Oct 2016 02:55:14 GMT" }, { "version": "v2", "created": "Tue, 6 Dec 2016 03:01:21 GMT" }, { "version": "v3", "created": "Sat, 10 Dec 2016 06:02:38 GMT" } ]
1,481,587,200,000
[ [ "Lakkaraju", "Himabindu", "" ], [ "Kamar", "Ece", "" ], [ "Caruana", "Rich", "" ], [ "Horvitz", "Eric", "" ] ]
1611.00183
Bas Van Stein
Bas van Stein, Matthijs van Leeuwen and Thomas B\"ack
Local Subspace-Based Outlier Detection using Global Neighbourhoods
Short version accepted at IEEE BigData 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outlier detection in high-dimensional data is a challenging yet important task, as it has applications in, e.g., fraud detection and quality control. State-of-the-art density-based algorithms perform well because they 1) take the local neighbourhoods of data points into account and 2) consider feature subspaces. In highly complex and high-dimensional data, however, existing methods are likely to overlook important outliers because they do not explicitly take into account that the data is often a mixture distribution of multiple components. We therefore introduce GLOSS, an algorithm that performs local subspace outlier detection using global neighbourhoods. Experiments on synthetic data demonstrate that GLOSS more accurately detects local outliers in mixed data than its competitors. Moreover, experiments on real-world data show that our approach identifies relevant outliers overlooked by existing methods, confirming that one should keep an eye on the global perspective even when doing local outlier detection.
[ { "version": "v1", "created": "Tue, 1 Nov 2016 11:22:26 GMT" } ]
1,478,044,800,000
[ [ "van Stein", "Bas", "" ], [ "van Leeuwen", "Matthijs", "" ], [ "Bäck", "Thomas", "" ] ]
1611.00549
Oliver Cliff
Oliver M. Cliff and Mikhail Prokopenko and Robert Fitch
Inferring Coupling of Distributed Dynamical Systems via Transfer Entropy
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we are interested in structure learning for a set of spatially distributed dynamical systems, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on state space reconstruction. We approach the problem by using reconstruction theorems to analytically derive a tractable expression for the KL-divergence of a candidate DAG from the observed dataset. We show this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. We then present two mathematically robust scoring functions based on transfer entropy and statistical independence tests. These results support the previously held conjecture that transfer entropy can be used to infer effective connectivity in complex networks.
[ { "version": "v1", "created": "Wed, 2 Nov 2016 11:23:54 GMT" } ]
1,478,131,200,000
[ [ "Cliff", "Oliver M.", "" ], [ "Prokopenko", "Mikhail", "" ], [ "Fitch", "Robert", "" ] ]
1611.00576
Florentin Smarandache
W. B. Vasantha Kandasamy, Ilanthenral K, Florentin Smarandache
Strong Neutrosophic Graphs and Subgraph Topological Subspaces
226 pages, many graphs, Europa Belgique, 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this book the authors introduce, for the first time, the notion of strong neutrosophic graphs. They are very different from the usual graphs and neutrosophic graphs. Using these new structures, special subgraph topological spaces are defined. Further, special lattice graphs of subgraphs of these graphs are defined and described. Several interesting properties are obtained using subgraphs of a strong neutrosophic graph. Several open conjectures are proposed. This new class of strong neutrosophic graphs will certainly find applications in Neutrosophic Cognitive Maps (NCM), Neutrosophic Relational Maps (NRM) and Neutrosophic Relational Equations (NRE) with appropriate modifications.
[ { "version": "v1", "created": "Sun, 30 Oct 2016 15:10:55 GMT" } ]
1,478,131,200,000
[ [ "Kandasamy", "W. B. Vasantha", "" ], [ "K", "Ilanthenral", "" ], [ "Smarandache", "Florentin", "" ] ]
1611.00685
Jan Feyereisl
Marek Rosa, Jan Feyereisl and The GoodAI Collective
A Framework for Searching for General Artificial Intelligence
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a significant lack of unified approaches to building generally intelligent machines. The majority of current artificial intelligence research operates within a very narrow field of focus, frequently without considering the importance of the 'big picture'. In this document, we seek to describe and unify principles that guide the basis of our development of general artificial intelligence. These principles revolve around the idea that intelligence is a tool for searching for general solutions to problems. We define intelligence as the ability to acquire skills that narrow this search, diversify it and help steer it to more promising areas. We also provide suggestions for studying, measuring, and testing the various skills and abilities that a human-level intelligent machine needs to acquire. The document aims both to be implementation agnostic and to provide an analytic, systematic, and scalable way to generate hypotheses that we believe are needed to meet the necessary conditions in the search for general artificial intelligence. We believe that such a framework is an important stepping stone for bringing together definitions, highlighting open problems, connecting researchers willing to collaborate, and for unifying the arguably most significant search of this century.
[ { "version": "v1", "created": "Wed, 2 Nov 2016 17:02:14 GMT" } ]
1,478,131,200,000
[ [ "Rosa", "Marek", "" ], [ "Feyereisl", "Jan", "" ], [ "Collective", "The GoodAI", "" ] ]
1611.01080
Giuliano Armano
Giuliano Armano
Probabilistic Modeling of Progressive Filtering
The article entitled Modeling Progressive Filtering, published on Fundamenta Informaticae (Vol. 138, Issue 3, pp. 285-320, July 2015), has been derived from this extended report
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Progressive filtering is a simple way to perform hierarchical classification, inspired by the behavior that most humans put into practice while attempting to categorize an item according to an underlying taxonomy. Each node of the taxonomy being associated with a different category, one may visualize the categorization process by looking at the item going downwards through all the nodes that accept it as belonging to the corresponding category. This paper is aimed at modeling the progressive filtering technique from a probabilistic perspective, in a hierarchical text categorization setting. As a result, the designer of a system based on progressive filtering should be facilitated in the task of devising, training, and testing it.
[ { "version": "v1", "created": "Thu, 3 Nov 2016 16:31:32 GMT" } ]
1,478,217,600,000
[ [ "Armano", "Giuliano", "" ] ]
1611.02154
Meisam Hejazi Nia
Meisam Hejazi Nia, Brian Ratchford
Bayesian Non-parametric model to Target Gamification Notifications Using Big Data
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I suggest an approach that helps online marketers target their gamification elements to users by modifying the order of the list of tasks they send to users. It is more realistic and flexible, as it allows the model to learn more parameters as the online marketers collect more data. The targeting approach is scalable and quick, and it can be used over streaming data.
[ { "version": "v1", "created": "Fri, 4 Nov 2016 04:40:23 GMT" } ]
1,478,563,200,000
[ [ "Nia", "Meisam Hejazi", "" ], [ "Ratchford", "Brian", "" ] ]
1611.02439
Sarah Alice Gaggl
Sarah Alice Gaggl, Juan Carlos Nieves, Hannes Strass
Proceedings of the First International Workshop on Argumentation in Logic Programming and Non-Monotonic Reasoning (Arg-LPNMR 2016)
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume contains the papers presented at Arg-LPNMR 2016: First International Workshop on Argumentation in Logic Programming and Nonmonotonic Reasoning held on July 8-10, 2016 in New York City, NY.
[ { "version": "v1", "created": "Tue, 8 Nov 2016 09:17:08 GMT" } ]
1,478,649,600,000
[ [ "Gaggl", "Sarah Alice", "" ], [ "Nieves", "Juan Carlos", "" ], [ "Strass", "Hannes", "" ] ]
1611.02453
Thorsten Wissmann
Carsten Lutz and Frank Wolter
The Data Complexity of Description Logic Ontologies
null
Logical Methods in Computer Science, Volume 13, Issue 4 (November 13, 2017) lmcs:2203
10.23638/LMCS-13(4:7)2017
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the data complexity of ontology-mediated querying where the ontologies are formulated in a description logic (DL) of the ALC family and queries are conjunctive queries, positive existential queries, or acyclic conjunctive queries. Our approach is non-uniform in the sense that we aim to understand the complexity of each single ontology instead of for all ontologies formulated in a certain language. While doing so, we quantify over the queries and are interested, for example, in the question whether all queries can be evaluated in polynomial time w.r.t. a given ontology. Our results include a PTime/coNP-dichotomy for ontologies of depth one in the description logic ALCFI, the same dichotomy for ALC- and ALCI-ontologies of unrestricted depth, and the non-existence of such a dichotomy for ALCF-ontologies. For the latter DL, we additionally show that it is undecidable whether a given ontology admits PTime query evaluation. We also consider the connection between PTime query evaluation and rewritability into (monadic) Datalog.
[ { "version": "v1", "created": "Tue, 8 Nov 2016 09:52:54 GMT" }, { "version": "v2", "created": "Tue, 24 Oct 2017 09:19:25 GMT" }, { "version": "v3", "created": "Fri, 10 Nov 2017 09:38:00 GMT" } ]
1,687,392,000,000
[ [ "Lutz", "Carsten", "" ], [ "Wolter", "Frank", "" ] ]
1611.02646
Tatiana Makhalova
Sergei O. Kuznetsov, Tatiana Makhalova
On interestingness measures of formal concepts
20 pages, 5 figures, 3 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formal concepts and closed itemsets have proved to be of great importance for knowledge discovery, both as a tool for concise representation of association rules and as a tool for clustering and constructing domain taxonomies and ontologies. Exponential explosion makes it difficult to consider the whole concept lattice arising from data, so one needs to select the most useful and interesting concepts. In this paper, interestingness measures of concepts are considered and compared with respect to various aspects, such as efficiency of computation, applicability to noisy data, and ranking correlation.
[ { "version": "v1", "created": "Tue, 8 Nov 2016 18:26:24 GMT" }, { "version": "v2", "created": "Wed, 19 Apr 2017 18:19:22 GMT" } ]
1,492,732,800,000
[ [ "Kuznetsov", "Sergei O.", "" ], [ "Makhalova", "Tatiana", "" ] ]
1611.02885
Martin Diller
Martin Diller, Anthony Hunter
Encoding monotonic multi-set preferences using CI-nets: preliminary report
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CP-nets and their variants constitute one of the main AI approaches for specifying and reasoning about preferences. CI-nets, in particular, are a CP-inspired formalism for representing ordinal preferences over sets of goods, which are typically required to be monotonic. Considering also that goods often come in multi-sets rather than sets, a natural question is whether CI-nets can be used more or less directly to encode preferences over multi-sets. We here provide some initial ideas on how to achieve this, in the sense that at least a restricted form of reasoning on our framework, which we call "confined reasoning", can be efficiently reduced to reasoning on CI-nets. Our framework nevertheless allows for encoding preferences over multi-sets with unbounded multiplicities. We also show the extent to which it can be used to represent preferences where multiplicities of the goods are not stated explicitly ("purely qualitative preferences") as well as a potential use of our generalization of CI-nets as a component of a recent system for evidence aggregation.
[ { "version": "v1", "created": "Wed, 9 Nov 2016 10:56:42 GMT" } ]
1,478,736,000,000
[ [ "Diller", "Martin", "" ], [ "Hunter", "Anthony", "" ] ]
1611.03398
Christophe Lecoutre
Frederic Boussemart and Christophe Lecoutre and Gilles Audemard and C\'edric Piette
XCSP3: An Integrated Format for Benchmarking Combinatorial Constrained Problems
238 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
We propose a major revision of the format XCSP 2.1, called XCSP3, to build integrated representations of combinatorial constrained problems. This new format is able to deal with mono/multi optimization, many types of variables, cost functions, reification, views, annotations, variable quantification, distributed, probabilistic and qualitative reasoning. The new format is made compact, highly readable, and rather easy to parse. Interestingly, it captures the structure of the problem models, through the possibilities of declaring arrays of variables, and identifying syntactic and semantic groups of constraints. The number of constraints is kept under control by introducing a limited set of basic constraint forms, and producing almost automatically some of their variations through lifting, restriction, sliding, logical combination and relaxation mechanisms. As a result, XCSP3 encompasses practically all constraints that can be found in major constraint solvers developed by the CP community. A website, which is developed jointly with the format, contains many models and series of instances. The user can make sophisticated queries to select instances according to very precise criteria. The objective of XCSP3 is to ease the effort required to test and compare different algorithms by providing a common test-bed of combinatorial constrained instances.
[ { "version": "v1", "created": "Thu, 10 Nov 2016 17:00:56 GMT" }, { "version": "v2", "created": "Fri, 6 Apr 2018 09:06:18 GMT" }, { "version": "v3", "created": "Sat, 16 Jan 2021 12:18:55 GMT" }, { "version": "v4", "created": "Mon, 7 Nov 2022 10:26:49 GMT" } ]
1,667,865,600,000
[ [ "Boussemart", "Frederic", "" ], [ "Lecoutre", "Christophe", "" ], [ "Audemard", "Gilles", "" ], [ "Piette", "Cédric", "" ] ]
1611.03977
Kui Yu
Kui Yu, Jiuyong Li, Lin Liu
A Review on Algorithms for Constraint-based Causal Discovery
This paper has been withdrawn by the author due to further improvement
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Causal discovery studies the problem of mining causal relationships between variables from data, which is of primary interest in science. During the past decades, a significant amount of progress has been made toward this fundamental data mining paradigm. In recent years, with the availability of abundant large-sized and complex observational data, constraint-based approaches have gradually attracted substantial interest and have been widely applied to many diverse real-world problems, due to their fast running speed and the ease with which they generalize to the problem of causal insufficiency. In this paper, we aim to review the constraint-based causal discovery algorithms. Firstly, we discuss the learning paradigm of the constraint-based approaches. Secondly, and primarily, the state-of-the-art constraint-based causal inference algorithms are surveyed with detailed analysis. Thirdly, several related open-source software packages and benchmark data repositories are briefly summarized. In conclusion, some open problems in constraint-based causal discovery are outlined for future research.
[ { "version": "v1", "created": "Sat, 12 Nov 2016 09:25:38 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2016 22:33:25 GMT" } ]
1,480,291,200,000
[ [ "Yu", "Kui", "" ], [ "Li", "Jiuyong", "" ], [ "Liu", "Lin", "" ] ]
1611.04146
Quan Liu
Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
Commonsense Knowledge Enhanced Embeddings for Solving Pronoun Disambiguation Problems in Winograd Schema Challenge
Winograd Schema Challenge, Pronoun Disambiguation Problems, Neural Embedding Methods, Commonsense Knowledge
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose commonsense knowledge enhanced embeddings (KEE) for solving Pronoun Disambiguation Problems (PDP). The PDP task we investigate in this paper is a complex coreference resolution task which requires the utilization of commonsense knowledge. This task is a standard first-round test set in the 2016 Winograd Schema Challenge. In this task, traditional linguistic features that are useful for coreference resolution, e.g. context and gender information, are no longer effective. Therefore, the KEE models are proposed to provide a general framework for making use of commonsense knowledge to solve PDP problems. Since the PDP task does not have training data, the KEE models are used during the unsupervised feature extraction process. To evaluate the effectiveness of the KEE models, we propose to incorporate various commonsense knowledge bases, including ConceptNet, WordNet, and CauseCom, into the KEE training process. We achieved the best performance by applying the proposed methods to the 2016 Winograd Schema Challenge. In addition, experiments conducted on the standard PDP task indicate that the proposed KEE models solve the PDP problems with 66.7% accuracy, which is a new state-of-the-art performance.
[ { "version": "v1", "created": "Sun, 13 Nov 2016 15:38:32 GMT" }, { "version": "v2", "created": "Thu, 22 Dec 2016 02:27:16 GMT" } ]
1,482,451,200,000
[ [ "Liu", "Quan", "" ], [ "Jiang", "Hui", "" ], [ "Ling", "Zhen-Hua", "" ], [ "Zhu", "Xiaodan", "" ], [ "Wei", "Si", "" ], [ "Hu", "Yu", "" ] ]
1611.04363
Yujie Qian
Yujie Qian, Jie Tang, Kan Wu
Weakly Learning to Match Experts in Online Community
IJCAI 2018
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In online question-and-answer (QA) websites like Quora, one central issue is to find (invite) users who are able to provide answers to a given question and at the same time would be unlikely to say "no" to the invitation. The challenge is how to trade off the matching degree between users' expertise and the question topic, and the likelihood of positive response from the invited users. In this paper, we formally formulate the problem and develop a weakly supervised factor graph (WeakFG) model to address the problem. The model explicitly captures expertise matching degree between questions and users. To model the likelihood that an invited user is willing to answer a specific question, we incorporate a set of correlations based on social identity theory into the WeakFG model. We use two different genres of datasets: QA-Expert and Paper-Reviewer, to validate the proposed model. Our experimental results show that the proposed model can significantly outperform (+1.5-10.7% by MAP) the state-of-the-art algorithms for matching users (experts) with community questions. We have also developed an online system to further demonstrate the advantages of the proposed method.
[ { "version": "v1", "created": "Mon, 14 Nov 2016 12:46:24 GMT" }, { "version": "v2", "created": "Mon, 7 May 2018 21:35:10 GMT" } ]
1,525,824,000,000
[ [ "Qian", "Yujie", "" ], [ "Tang", "Jie", "" ], [ "Wu", "Kan", "" ] ]
1611.05190
Carmine Dodaro
Carmine Dodaro, Philip Gasteiger, Nicola Leone, Benjamin Musitsch, Francesco Ricca, and Konstantin Schekotihin
Driving CDCL Search
Paper presented at the 1st Workshop on Trends and Applications of Answer Set Programming (TAASP 2016), Klagenfurt, Austria, 26 September 2016, 15 pages, LaTeX, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The CDCL algorithm is the leading solution adopted by state-of-the-art solvers for SAT, SMT, ASP, and others. Experiments show that the performance of CDCL solvers can be significantly boosted by embedding domain-specific heuristics, especially on large real-world problems. However, a proper integration of such criteria in off-the-shelf CDCL implementations is not obvious. In this paper, we distill the key ingredients that drive the search of CDCL solvers, and propose a general framework for designing and implementing new heuristics. We implemented our strategy in an ASP solver, and we experimented on two industrial domains. On hard problem instances, state-of-the-art implementations fail to find any solution in acceptable time, whereas our implementation is very successful and finds all solutions.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 09:13:26 GMT" } ]
1,479,340,800,000
[ [ "Dodaro", "Carmine", "" ], [ "Gasteiger", "Philip", "" ], [ "Leone", "Nicola", "" ], [ "Musitsch", "Benjamin", "" ], [ "Ricca", "Francesco", "" ], [ "Schekotihin", "Konstantin", "" ] ]
1611.05735
Yaniv Altshuler
Yaniv Altshuler, Alex Pentland, Shlomo Bekhor, Yoram Shiftan, Alfred Bruckstein
Optimal Dynamic Coverage Infrastructure for Large-Scale Fleets of Reconnaissance UAVs
35 pages, 19 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current state of the art in the field of UAV activation relies solely on human operators for the design and adaptation of the drones' flying routes. Furthermore, this is done today on an individual level (one vehicle per operator), with the exception of a handful of new systems comprised of a small number of self-organizing swarms manually guided by a human operator. Drone-based monitoring is of great importance in a variety of civilian domains, such as road safety, homeland security, and even environmental control. In its military aspect, efficiently detecting evading targets by a fleet of unmanned drones has an ever-increasing impact on the ability of modern armies to engage in warfare. The latter is true both in traditional symmetric conflicts among armies and in asymmetric ones. Be it a speeding driver, a polluting trailer or a covert convoy, the basic challenge remains the same -- how can its detection probability be maximized using as few drones as possible. In this work we propose a novel approach for the optimization of large-scale swarms of reconnaissance drones -- capable of producing on-demand optimal coverage strategies for any given search scenario. Given an estimate of the threat's potential damages, as well as the types of monitoring drones available and their comparative performance, our proposed method generates an analytically provable strategy, stating the optimal number and types of drones to be deployed, in order to cost-efficiently monitor a pre-defined region for targets maneuvering over a given road network. We demonstrate our model using a unique dataset of the Israeli transportation network, on which different drone deployment schemes are evaluated.
[ { "version": "v1", "created": "Thu, 17 Nov 2016 15:28:14 GMT" } ]
1,479,427,200,000
[ [ "Altshuler", "Yaniv", "" ], [ "Pentland", "Alex", "" ], [ "Bekhor", "Shlomo", "" ], [ "Shiftan", "Yoram", "" ], [ "Bruckstein", "Alfred", "" ] ]
1611.05740
Wacha Bounliphone
Wacha Bounliphone, Eugene Belilovsky, Arthur Tenenhaus, Ioannis Antonoglou, Arthur Gretton, Matthew B. Blashcko
Fast Non-Parametric Tests of Relative Dependency and Similarity
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce two novel non-parametric statistical hypothesis tests. The first test, called the relative test of dependency, enables us to determine whether one source variable is significantly more dependent on a first target variable or on a second. Dependence is measured via the Hilbert-Schmidt Independence Criterion (HSIC). The second test, called the relative test of similarity, is used to determine which of two samples from arbitrary distributions is significantly closer to a reference sample of interest; the relative measure of similarity is based on the Maximum Mean Discrepancy (MMD). To construct these tests, we use as our test statistics the difference of HSIC statistics and of MMD statistics, respectively. The resulting tests are consistent and unbiased, and have favorable convergence properties. The effectiveness of the relative dependency test is demonstrated on several real-world problems: we identify language groups from a multilingual parallel corpus, and we show that tumor location is more dependent on gene expression than on chromosome imbalance. We also demonstrate the performance of the relative test of similarity over a broad selection of model comparison problems in deep generative models.
[ { "version": "v1", "created": "Thu, 17 Nov 2016 15:36:31 GMT" } ]
1,479,427,200,000
[ [ "Bounliphone", "Wacha", "" ], [ "Belilovsky", "Eugene", "" ], [ "Tenenhaus", "Arthur", "" ], [ "Antonoglou", "Ioannis", "" ], [ "Gretton", "Arthur", "" ], [ "Blashcko", "Matthew B.", "" ] ]
1611.06108
Daniil Galaktionov
Daniil Galaktionov, Miguel R. Luaces, \'Angeles S. Places
Navigational Rule Derivation: An algorithm to determine the effect of traffic signs on road networks
This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie Actions H2020-MSCA-RISE-2015 BIRDS GA No. 690941. in PACIS 2016 Online Proceedings
Proceeding of the 20th Pacific Asia Conference on Information Systems (PACIS 2016). Association for Information Systems. AIS Electronic Library (AISeL). Paper 94. ISBN: 9789860491029
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present an algorithm to build a road network map enriched with traffic rules such as one-way streets and forbidden turns, based on the interpretation of already detected and classified traffic signs. Such algorithm helps to automatize the elaboration of maps for commercial navigation systems. Our solution is based on simulating navigation along the road network, determining at each point of interest the visibility of the signs and their effect on the roads. We test our approach in a small urban network and discuss various ways to generalize it to support more complex environments.
[ { "version": "v1", "created": "Thu, 17 Nov 2016 18:39:44 GMT" } ]
1,479,772,800,000
[ [ "Galaktionov", "Daniil", "" ], [ "Luaces", "Miguel R.", "" ], [ "Places", "Ángeles S.", "" ] ]
1611.06174
Ondrej Kuzelka
Ondrej Kuzelka, Jesse Davis, Steven Schockaert
Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)
Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we advocate the use of stratified logical theories for representing probabilistic models. We argue that such encodings can be more interpretable than those obtained in existing frameworks such as Markov logic networks. Among others, this allows for the use of domain experts to improve learned models by directly removing, adding, or modifying logical formulas.
[ { "version": "v1", "created": "Fri, 18 Nov 2016 17:51:56 GMT" } ]
1,479,686,400,000
[ [ "Kuzelka", "Ondrej", "" ], [ "Davis", "Jesse", "" ], [ "Schockaert", "Steven", "" ] ]
1611.07478
Scott Lundberg
Scott Lundberg and Su-In Lee
An unexpected unity among methods for interpreting model predictions
Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding why a model made a certain prediction is crucial in many data science fields. Interpretable predictions engender appropriate trust and provide insight into how the model may be improved. However, with large modern datasets the best accuracy is often achieved by complex models that even experts struggle to interpret, which creates a tension between accuracy and interpretability. Recently, several methods have been proposed for interpreting predictions from complex models by estimating the importance of input features. Here, we present how a model-agnostic additive representation of the importance of input features unifies current methods. This representation is optimal, in the sense that it is the only set of additive values that satisfies important properties. We show how we can leverage these properties to create novel visual explanations of model predictions. The thread of unity that this representation weaves through the literature indicates that there are common principles to be learned about the interpretation of model predictions that apply in many scenarios.
[ { "version": "v1", "created": "Tue, 22 Nov 2016 19:30:28 GMT" }, { "version": "v2", "created": "Wed, 23 Nov 2016 06:44:36 GMT" }, { "version": "v3", "created": "Thu, 8 Dec 2016 08:24:15 GMT" } ]
1,481,241,600,000
[ [ "Lundberg", "Scott", "" ], [ "Lee", "Su-In", "" ] ]
1611.08037
Lantao Liu
Zhibei Ma, Kai Yin, Lantao Liu, Gaurav S. Sukhatme
A Spatio-Temporal Representation for the Orienteering Problem with Time-Varying Profits
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an orienteering problem (OP) where an agent needs to visit a series (possibly a subset) of depots, from which the maximal accumulated profits are desired within a given limited time budget. Different from most existing works where the profits are assumed to be static, in this work we investigate a variant that has arbitrary time-dependent profits. Specifically, the profits to be collected change over time and they follow different (e.g., independent) time-varying functions. The problem is inherently nonlinear and difficult to solve by existing methods. To tackle the challenge, we present a simple and effective framework that incorporates time-variations into the fundamental planning process. Specifically, we propose a deterministic spatio-temporal representation where both spatial description and temporal logic are unified into one routing topology. By employing existing basic sorting and searching algorithms, the routing solutions can be computed in an extremely efficient way. The proposed method is easy to implement and extensive numerical results show that our approach is time efficient and generates near-optimal solutions.
[ { "version": "v1", "created": "Thu, 24 Nov 2016 00:07:56 GMT" }, { "version": "v2", "created": "Sun, 2 Jul 2017 04:56:41 GMT" } ]
1,499,126,400,000
[ [ "Ma", "Zhibei", "" ], [ "Yin", "Kai", "" ], [ "Liu", "Lantao", "" ], [ "Sukhatme", "Gaurav S.", "" ] ]
1611.08103
Guangming Lang
Guangming Lang
Double-quantitative $\gamma^{\ast}-$fuzzy coverings approximation operators
It enriches the fuzzy covering rough set theory
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the digital information boom, the fuzzy covering rough set model is an important mathematical tool for artificial intelligence, and how to build the bridge between the fuzzy covering rough set theory and Pawlak's model is becoming a hot research topic. In this paper, we first present the $\gamma-$fuzzy covering based probabilistic and grade approximation operators and double-quantitative approximation operators. We also study the relationships among the three types of $\gamma-$fuzzy covering based approximation operators. Second, we propose the $\gamma^{\ast}-$fuzzy coverings based multi-granulation probabilistic and grade lower and upper approximation operators and multi-granulation double-quantitative lower and upper approximation operators. We also investigate the relationships among these types of $\gamma^{\ast}-$fuzzy coverings based approximation operators. Finally, we employ several examples to illustrate how to construct the lower and upper approximations of fuzzy sets with the absolute and relative quantitative information.
[ { "version": "v1", "created": "Thu, 24 Nov 2016 09:06:57 GMT" } ]
1,480,291,200,000
[ [ "Lang", "Guangming", "" ] ]
1611.08219
Dylan Hadfield-Menell
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell
The Off-Switch Game
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off. As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation. This is not caused by a built-in instinct, but because a rational agent will maximize expected utility and cannot achieve whatever objective it has been given if it is dead. Our goal is to study the incentives an agent has to allow itself to be switched off. We analyze a simple game between a human H and a robot R, where H can press R's off switch but R can disable the off switch. A traditional agent takes its reward function for granted: we show that such agents have an incentive to disable the off switch, except in the special case where H is perfectly rational. Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H's actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents.
[ { "version": "v1", "created": "Thu, 24 Nov 2016 15:23:48 GMT" }, { "version": "v2", "created": "Thu, 25 May 2017 17:05:16 GMT" }, { "version": "v3", "created": "Fri, 16 Jun 2017 01:41:59 GMT" } ]
1,497,830,400,000
[ [ "Hadfield-Menell", "Dylan", "" ], [ "Dragan", "Anca", "" ], [ "Abbeel", "Pieter", "" ], [ "Russell", "Stuart", "" ] ]
1611.08374
Bj{\o}rn Magnus Mathisen
Bj{\o}rn Magnus Mathisen, Peter Haro, B{\aa}rd Hanssen, Sara Bj\"ork, St{\aa}le Walderhaug
Decision Support Systems in Fisheries and Aquaculture: A systematic review
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Decision support systems help decision makers make better decisions in the face of complex decision problems (e.g. investment or policy decisions). Fisheries and Aquaculture is a domain where decision makers face such decisions since they involve factors from many different scientific fields. No systematic overview of literature describing decision support systems and their application in fisheries and aquaculture has been conducted. This paper summarizes scientific literature that describes decision support systems applied to the domain of Fisheries and Aquaculture. We use an established systematic mapping survey method to conduct our literature mapping. Our research questions are: What decision support systems for fisheries and aquaculture exist? What are the most investigated fishery and aquaculture decision support system topics and how have these changed over time? Do any current DSS for fisheries provide real-time analytics? Do DSSes in Fisheries and Aquaculture build their models using machine learning done on captured and grounded data? The paper then details how we employ the systematic mapping method in answering these questions. This results in 27 papers being identified as relevant and gives an exposition on the primary methods concluded in the study for designing a decision support system. We provide an analysis of the research done in the studies collected. We discovered that most literature does not consider multiple aspects for multiple stakeholders in their work. In addition we observed that little or no work has been done with real-time analysis in these decision support systems.
[ { "version": "v1", "created": "Fri, 25 Nov 2016 08:13:51 GMT" } ]
1,480,291,200,000
[ [ "Mathisen", "Bjørn Magnus", "" ], [ "Haro", "Peter", "" ], [ "Hanssen", "Bård", "" ], [ "Björk", "Sara", "" ], [ "Walderhaug", "Ståle", "" ] ]
1611.08499
Nhien Pham Hoang Bao
Nhien Pham Hoang Bao, Hiroyuki Iida
An Analysis of Tournament Structure
10 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper explores a novel way of analyzing tournament structures to find the one best suited for the tournament under consideration. It considers three aspects: tournament conducting cost, competitiveness development and ranking precision. It then proposes a new method using a progress tree to detect potential throwaway matches. The analysis performed using the proposed method reveals the strengths and weaknesses of tournament structures. As a conclusion, single elimination is best if we want to qualify one winner only; all matches conducted are exciting in terms of competitiveness. Double elimination with a proper seeding system is a better choice if we want to qualify more winners. A reasonable number of extra matches need to be conducted in exchange for being able to qualify the top four winners. Round-robin gives reliable ranking precision for all participants. However, its conducting cost is very high, and it fails to maintain competitiveness development.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 07:09:16 GMT" } ]
1,480,291,200,000
[ [ "Bao", "Nhien Pham Hoang", "" ], [ "Iida", "Hiroyuki", "" ] ]
1611.08555
Florentin Smarandache
Florentin Smarandache, Surapati Pramanik (Editors)
New Trends in Neutrosophic Theory and Applications
424 pages
Pons asbl, Brussels, 2016
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neutrosophic theory and applications have been expanding in all directions at an astonishing rate, especially after the introduction of the journal entitled Neutrosophic Sets and Systems. New theories, techniques and algorithms have been rapidly developed. One of the most striking trends in neutrosophic theory is the hybridization of the neutrosophic set with other potential sets such as the rough set, bipolar set, soft set, hesitant fuzzy set, etc. Different hybrid structures, such as the rough neutrosophic set, single valued neutrosophic rough set, bipolar neutrosophic set, single valued neutrosophic hesitant fuzzy set, etc., have been proposed in the literature in a short period of time. The neutrosophic set has been a very important tool in various areas of data mining, decision making, e-learning, engineering, medicine, social science, and more. The book New Trends in Neutrosophic Theories and Applications focuses on theories, methods and algorithms for decision making, and also on applications involving neutrosophic information. Some topics deal with data mining, decision making, e-learning, graph theory, medical diagnosis, probability theory, topology, and more.
[ { "version": "v1", "created": "Wed, 23 Nov 2016 19:16:49 GMT" } ]
1,480,291,200,000
[ [ "Smarandache", "Florentin", "", "Editors" ], [ "Pramanik", "Surapati", "", "Editors" ] ]
1611.08572
Till Mossakowski
Till Mossakowski and Fabian Neuhaus
Bipolar Weighted Argumentation Graphs
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses the semantics of weighted argumentation graphs that are bipolar, i.e. contain both attack and support relationships. The work builds on previous work by Amgoud, Ben-Naim et al., which presents and compares several semantics for argumentation graphs that contain only support or only attack relationships, respectively.
[ { "version": "v1", "created": "Fri, 25 Nov 2016 20:04:17 GMT" }, { "version": "v2", "created": "Fri, 23 Dec 2016 08:33:34 GMT" } ]
1,482,710,400,000
[ [ "Mossakowski", "Till", "" ], [ "Neuhaus", "Fabian", "" ] ]
1611.08908
Thierry Petit
Thierry Petit
"Model and Run" Constraint Networks with a MILP Engine
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constraint Programming (CP) users need significant expertise in order to model their problems appropriately, notably to select propagators and search strategies. This puts the brakes on a broader uptake of CP. In this paper, we introduce MICE, a complete Java CP modeler that can use any Mixed Integer Linear Programming (MILP) solver as a solution technique. Our aim is to provide an alternative tool for democratizing the "CP-style" modeling thanks to its simplicity of use, with reasonable solving capabilities. Our contributions include new decompositions of (reified) constraints and constraints on numerical variables.
[ { "version": "v1", "created": "Sun, 27 Nov 2016 20:43:27 GMT" } ]
1,480,377,600,000
[ [ "Petit", "Thierry", "" ] ]
1611.08944
Jan Leike
Jan Leike
Nonparametric General Reinforcement Learning
PhD thesis
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Reinforcement learning (RL) problems are often phrased in terms of Markov decision processes (MDPs). In this thesis we go beyond MDPs and consider RL in environments that are non-Markovian, non-ergodic and only partially observable. Our focus is not on practical algorithms, but rather on the fundamental underlying problems: How do we balance exploration and exploitation? How do we explore optimally? When is an agent optimal? We follow the nonparametric realizable paradigm. We establish negative results on Bayesian RL agents, in particular AIXI. We show that unlucky or adversarial choices of the prior cause the agent to misbehave drastically. Therefore Legg-Hutter intelligence and balanced Pareto optimality, which depend crucially on the choice of the prior, are entirely subjective. Moreover, in the class of all computable environments every policy is Pareto optimal. This undermines all existing optimality properties for AIXI. However, there are Bayesian approaches to general RL that satisfy objective optimality guarantees: We prove that Thompson sampling is asymptotically optimal in stochastic environments in the sense that its value converges to the value of the optimal policy. We connect asymptotic optimality to regret given a recoverability assumption on the environment that allows the agent to recover from mistakes. Hence Thompson sampling achieves sublinear regret in these environments. Our results culminate in a formal solution to the grain of truth problem: A Bayesian agent acting in a multi-agent environment learns to predict the other agents' policies if its prior assigns positive probability to them (the prior contains a grain of truth). We construct a large but limit computable class containing a grain of truth and show that agents based on Thompson sampling over this class converge to play Nash equilibria in arbitrary unknown computable multi-agent environments.
[ { "version": "v1", "created": "Mon, 28 Nov 2016 00:36:40 GMT" } ]
1,480,377,600,000
[ [ "Leike", "Jan", "" ] ]
1611.09351
Jan Bergstra
Jan A. Bergstra
Adams Conditioning and Likelihood Ratio Transfer Mediated Inference
Based on reviewer's comments many minor improvements have been made
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian inference as applied in a legal setting is about belief transfer and involves a plurality of agents and communication protocols. A forensic expert (FE) may communicate to a trier of fact (TOF) first its value of a certain likelihood ratio with respect to FE's belief state as represented by a probability function on FE's proposition space. Subsequently FE communicates its recently acquired confirmation that a certain evidence proposition is true. Then TOF performs likelihood ratio transfer mediated reasoning, thereby revising its own belief state. The logical principles involved in likelihood transfer mediated reasoning are discussed in a setting where probabilistic arithmetic is done within a meadow, and with Adams conditioning placed in a central role.
[ { "version": "v1", "created": "Sat, 26 Nov 2016 22:31:02 GMT" }, { "version": "v2", "created": "Sun, 4 Dec 2016 10:07:29 GMT" }, { "version": "v3", "created": "Sun, 11 Dec 2016 11:17:38 GMT" }, { "version": "v4", "created": "Tue, 18 Dec 2018 23:09:23 GMT" }, { "version": "v5", "created": "Fri, 16 Aug 2019 09:55:15 GMT" } ]
1,566,172,800,000
[ [ "Bergstra", "Jan A.", "" ] ]
1612.00092
Christian Walder Dr
Christian Walder and Dongwoo Kim
Computer Assisted Composition with Recurrent Neural Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequence modeling with neural networks has led to powerful models of symbolic music data. We address the problem of exploiting these models to reach creative musical goals, by combining them with human input. To this end we generalise previous work, which sampled Markovian sequence models under the constraint that the sequence belong to the language of a given finite state machine provided by the human. We consider more expressive non-Markov models, thereby requiring approximate sampling which we provide in the form of an efficient sequential Monte Carlo method. In addition we provide and compare with a beam search strategy for conditional probability maximisation. Our algorithms are capable of convincingly re-harmonising famous musical works. To demonstrate this we provide visualisations, quantitative experiments, a human listening test and audio examples. We find both the sampling and optimisation procedures to be effective, yet complementary in character. For the case of highly permissive constraint sets, we find that sampling is to be preferred due to the overly regular nature of the optimisation based results. The generality of our algorithms permits countless other creative applications.
[ { "version": "v1", "created": "Thu, 1 Dec 2016 00:49:19 GMT" }, { "version": "v2", "created": "Fri, 29 Sep 2017 23:38:35 GMT" } ]
1,506,988,800,000
[ [ "Walder", "Christian", "" ], [ "Kim", "Dongwoo", "" ] ]
1612.00094
Paul Weng
Hugo Gilbert and Paul Weng and Yan Xu
Optimizing Quantiles in Preference-based Markov Decision Processes
Long version of AAAI 2017 paper. arXiv admin note: text overlap with arXiv:1611.00862
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Markov decision process model, policies are usually evaluated by expected cumulative rewards. As this decision criterion is not always suitable, we propose in this paper an algorithm for computing a policy optimal for the quantile criterion. Both finite and infinite horizons are considered. Finally we experimentally evaluate our approach on random MDPs and on a data center control problem.
[ { "version": "v1", "created": "Thu, 1 Dec 2016 00:55:23 GMT" } ]
1,480,636,800,000
[ [ "Gilbert", "Hugo", "" ], [ "Weng", "Paul", "" ], [ "Xu", "Yan", "" ] ]
1612.00104
Xiaojian Wu
Xiaojian Wu, Akshat Kumar, Daniel Sheldon, Shlomo Zilberstein
Robust Optimization for Tree-Structured Stochastic Network Design
AAAI 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic network design is a general framework for optimizing network connectivity. It has several applications in computational sustainability including spatial conservation planning, pre-disaster network preparation, and river network optimization. A common assumption made in previous work is that network parameters (e.g., probability of species colonization) are precisely known, which is unrealistic in real-world settings. We therefore address the robust river network design problem where the goal is to optimize river connectivity for fish movement by removing barriers. We assume that fish passability probabilities are known only imprecisely, but are within some interval bounds. We then develop a planning approach that computes the policies with either high robust ratio or low regret. Empirically, our approach scales well to large river networks. We also provide insights into the solutions generated by our robust approach, which has a significantly higher robust ratio than the baseline solution with mean parameter estimates.
[ { "version": "v1", "created": "Thu, 1 Dec 2016 01:21:21 GMT" } ]
1,480,636,800,000
[ [ "Wu", "Xiaojian", "" ], [ "Kumar", "Akshat", "" ], [ "Sheldon", "Daniel", "" ], [ "Zilberstein", "Shlomo", "" ] ]
1612.00240
Kleanthi Georgala
Kleanthi Georgala, Micheal Hoffmann and Axel-Cyrille Ngonga Ngomo
An Evaluation of Models for Runtime Approximation in Link Discovery
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-efficient link discovery is of central importance to implement the vision of the Semantic Web. Some of the most rapid Link Discovery approaches rely internally on planning to execute link specifications. In newer works, linear models have been used to estimate the runtime of the fastest planners. However, no other category of models has been studied for this purpose so far. In this paper, we study non-linear models for runtime estimation. In particular, we study exponential and mixed models for the estimation of the runtimes of planners. To this end, we evaluate three different runtime models on six datasets using 400 link specifications. We show that exponential and mixed models achieve better fits when trained, but are only to be preferred in some cases. Our evaluation also shows that the use of better runtime approximation models has a positive impact on the overall execution of link specifications.
[ { "version": "v1", "created": "Thu, 1 Dec 2016 13:33:03 GMT" } ]
1,480,636,800,000
[ [ "Georgala", "Kleanthi", "" ], [ "Hoffmann", "Micheal", "" ], [ "Ngomo", "Axel-Cyrille Ngonga", "" ] ]
1612.00742
Michael Gr. Voskoglou Prof. Dr.
Michael Gr. Voskoglou
Comparison of the COG Defuzzification Technique and Its Variations to the GPA Index
11 pages, 5 figures, 2 tables
American Journal of Computational and Applied Mathematics, 6(5), 187-193, 2016
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Center of Gravity (COG) method is one of the most popular defuzzification techniques of fuzzy mathematics. In earlier works the COG technique was properly adapted to be used as an assessment model (RFAM) and several variations of it (GRFAM, TFAM and TpFAM) were also constructed for the same purpose. In this paper the outcomes of all these models are compared to the corresponding outcomes of a traditional assessment method of the bi-valued logic, the Grade Point Average (GPA) Index. Examples are also presented illustrating our results.
[ { "version": "v1", "created": "Wed, 30 Nov 2016 07:53:15 GMT" } ]
1,480,896,000,000
[ [ "Voskoglou", "Michael Gr.", "" ] ]
1612.00916
Pierre-Luc Bacon
Pierre-Luc Bacon, Doina Precup
A Matrix Splitting Perspective on Planning with Options
The results presented in the previous version of this paper were found be applicable only to "gating execution" and not "call-and-return". We made this distinction clear in the text and added an extension to the call-and-return model
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the Bellman operator underlying the options framework leads to a matrix splitting, an approach traditionally used to speed up convergence of iterative solvers for large linear systems of equations. Based on standard comparison theorems for matrix splittings, we then show how the asymptotic rate of convergence varies as a function of the inherent timescales of the options. This new perspective highlights a trade-off between asymptotic performance and the cost of computation associated with building a good set of options.
[ { "version": "v1", "created": "Sat, 3 Dec 2016 02:57:36 GMT" }, { "version": "v2", "created": "Mon, 10 Jul 2017 19:28:32 GMT" } ]
1,499,817,600,000
[ [ "Bacon", "Pierre-Luc", "" ], [ "Precup", "Doina", "" ] ]
1612.01120
Fabio Cozman
Fabio Gagliardi Cozman, Denis Deratani Mau\'a
The Complexity of Bayesian Networks Specified by Propositional and Relational Languages
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the complexity of inference in Bayesian networks specified by logical languages. We consider representations that range from fragments of propositional logic to function-free first-order logic with equality; in doing so we cover a variety of plate models and of probabilistic relational models. We study the complexity of inferences when network, query and domain are the input (the inferential and the combined complexity), when the network is fixed and query and domain are the input (the query/data complexity), and when the network and query are fixed and the domain is the input (the domain complexity). We draw connections with probabilistic databases and liftability results, and obtain complexity classes that range from polynomial to exponential levels.
[ { "version": "v1", "created": "Sun, 4 Dec 2016 13:51:55 GMT" }, { "version": "v2", "created": "Tue, 6 Dec 2016 02:00:14 GMT" }, { "version": "v3", "created": "Fri, 6 Jan 2017 13:07:30 GMT" } ]
1,483,920,000,000
[ [ "Cozman", "Fabio Gagliardi", "" ], [ "Mauá", "Denis Deratani", "" ] ]
1612.01608
Julian Togelius
Julian Togelius
AI Researchers, Video Games Are Your Friends!
in Studies in Computational Intelligence, Volume 669, 2017. Springer
null
10.1007/978-3-319-48506-5_1
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
If you are an artificial intelligence researcher, you should look to video games as ideal testbeds for the work you do. If you are a video game developer, you should look to AI for the technology that makes completely new types of games possible. This chapter lays out the case for both of these propositions. It asks the question "what can video games do for AI", and discusses how in particular general video game playing is the ideal testbed for artificial general intelligence research. It then asks the question "what can AI do for video games", and lays out a vision for what video games might look like if we had significantly more advanced AI at our disposal. The chapter is based on my keynote at IJCCI 2015, and is written in an attempt to be accessible to a broad audience.
[ { "version": "v1", "created": "Tue, 6 Dec 2016 00:46:57 GMT" } ]
1,481,068,800,000
[ [ "Togelius", "Julian", "" ] ]
1612.01691
Arthur Mah\'eo
Arthur Mah\'eo, Tommaso Urli, Philip Kilby
Fleet Size and Mix Split-Delivery Vehicle Routing
Rich Vehicle Routing, Split Delivery, Fleet Size and Mix, Mixed Integer Programming, Constraint Programming
null
null
EP166439
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the classic Vehicle Routing Problem (VRP) a fleet of vehicles has to visit a set of customers while minimising the operations' costs. We study a rich variant of the VRP featuring split deliveries, a heterogeneous fleet, and vehicle-commodity incompatibility constraints. Our goal is twofold: define the cheapest routing and the most adequate fleet. To do so, we split the problem into two interdependent components: a fleet design component and a routing component. First, we define two Mixed Integer Programming (MIP) formulations for each component. Then we discuss several improvements in the form of valid cuts and symmetry breaking constraints. The main contribution of this paper is a comparison of the four resulting models for this Rich VRP. We highlight their strengths and weaknesses with extensive experiments. Finally, we explore a lightweight integration with Constraint Programming (CP). We use a fast CP model which gives good solutions and use those solutions to warm-start our models.
[ { "version": "v1", "created": "Tue, 6 Dec 2016 07:46:41 GMT" } ]
1,481,068,800,000
[ [ "Mahéo", "Arthur", "" ], [ "Urli", "Tommaso", "" ], [ "Kilby", "Philip", "" ] ]
1612.01857
Alexa Gopaulsingh Mrs.
Alexa Gopaulsingh
On a Well-behaved Relational Generalisation of Rough Set Approximations
12 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine non-dual relational extensions of rough set approximations and find an extension which satisfies surprisingly many of the usual rough set properties. We then use this definition to give an explanation for an observation made by Samanta and Chakraborty in their recent paper [P. Samanta and M.K. Chakraborty. Interface of rough set systems and modal logics: A survey. Transactions on Rough Sets XIX, pages 114-137, 2015].
[ { "version": "v1", "created": "Mon, 5 Dec 2016 08:53:16 GMT" }, { "version": "v2", "created": "Wed, 7 Dec 2016 15:04:20 GMT" } ]
1,481,155,200,000
[ [ "Gopaulsingh", "Alexa", "" ] ]
1612.01941
Paolo Dragone
Stefano Teso and Paolo Dragone and Andrea Passerini
Coactive Critiquing: Elicitation of Preferences and Features
AAAI'17
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When faced with complex choices, users refine their own preference criteria as they explore the catalogue of options. In this paper we propose an approach to preference elicitation suited for this scenario. We extend Coactive Learning, which iteratively collects manipulative feedback, to optionally query example critiques. User critiques are integrated into the learning model by dynamically extending the feature space. Our formulation natively supports constructive learning tasks, where the option catalogue is generated on-the-fly. We present an upper bound on the average regret suffered by the learner. Our empirical analysis highlights the promise of our approach.
[ { "version": "v1", "created": "Tue, 6 Dec 2016 18:32:40 GMT" } ]
1,481,068,800,000
[ [ "Teso", "Stefano", "" ], [ "Dragone", "Paolo", "" ], [ "Passerini", "Andrea", "" ] ]
1612.02088
Shuai Ma
Shuai Ma and Jia Yuan Yu
Transition-based versus State-based Reward Functions for MDPs with Value-at-Risk
55th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In reinforcement learning, the reward function on current state and action is widely used. When the objective is about the expectation of the (discounted) total reward only, it works perfectly. However, if the objective involves the total reward distribution, the result will be wrong. This paper studies Value-at-Risk (VaR) problems in short- and long-horizon Markov decision processes (MDPs) with two reward functions, which share the same expectations. Firstly we show that with VaR objective, when the real reward function is transition-based (with respect to action and both current and next states), the simplified (state-based, with respect to action and current state only) reward function will change the VaR. Secondly, for long-horizon MDPs, we estimate the VaR function with the aid of spectral theory and the central limit theorem. Thirdly, since the estimation method is for a Markov reward process with the reward function on current state only, we present a transformation algorithm for the Markov reward process with the reward function on current and next states, in order to estimate the VaR function with an intact total reward distribution.
[ { "version": "v1", "created": "Wed, 7 Dec 2016 01:17:26 GMT" }, { "version": "v2", "created": "Sat, 10 Dec 2016 16:32:47 GMT" }, { "version": "v3", "created": "Mon, 27 Feb 2017 23:50:15 GMT" }, { "version": "v4", "created": "Thu, 29 Nov 2018 22:50:03 GMT" } ]
1,543,795,200,000
[ [ "Ma", "Shuai", "" ], [ "Yu", "Jia Yuan", "" ] ]
1612.02255
Armando Vieira
Armando Vieira
Knowledge Representation in Graphs using Convolutional Neural Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge Graphs (KG) constitute a flexible representation of complex relationships between entities, particularly useful for biomedical data. These KG, however, are very sparse with many missing edges (facts), and the visualisation of the mesh of interactions is nontrivial. Here we apply a compositional model to embed nodes and relationships into a vectorised semantic space to perform graph completion. A visualisation tool based on Convolutional Neural Networks and Self-Organised Maps (SOM) is proposed to extract high-level insights from the KG. We apply this technique to a subset of CTD, containing interactions of compounds with human genes / proteins, and show that the performance is comparable to the one obtained by structural models.
[ { "version": "v1", "created": "Wed, 7 Dec 2016 14:10:56 GMT" } ]
1,481,155,200,000
[ [ "Vieira", "Armando", "" ] ]
1612.02587
Juerg Kohlas
Juerg Kohlas
Inverses, Conditionals and Compositional Operators in Separative Valuation Algebra
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compositional models were introduced by Jirousek and Shenoy in the general framework of valuation-based systems. They based their theory on an axiomatic system of valuations involving not only the operations of combination and marginalisation, but also of removal. They claimed that this system covers, besides the classical case of discrete probability distributions, also the cases of Gaussian densities and belief functions, and many other systems. Whereas their results on the compositional operator are correct, the axiomatic basis is not sufficient to cover the examples claimed above. We propose here a different axiomatic system of valuation algebras, which permits a rigorous mathematical theory of compositional operators in valuation-based systems and covers all the examples mentioned above. It extends the classical theory of inverses in semigroup theory and thereby places the present theory into its proper mathematical frame. This theory also sheds light on the different structures of valuation-based systems, like regular algebras (represented by probability potentials), cancellative algebras (Gaussian potentials) and general separative algebras (density functions).
[ { "version": "v1", "created": "Thu, 8 Dec 2016 10:34:16 GMT" } ]
1,481,241,600,000
[ [ "Kohlas", "Juerg", "" ] ]
1612.02757
Adam Earle
Andrew M. Saxe, Adam Earle, Benjamin Rosman
Hierarchy through Composition with Linearly Solvable Markov Decision Processes
9 pages, 3 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical architectures are critical to the scalability of reinforcement learning methods. Current hierarchical frameworks execute actions serially, with macro-actions comprising sequences of primitive actions. We propose a novel alternative to these control hierarchies based on concurrent execution of many actions in parallel. Our scheme uses the concurrent compositionality provided by the linearly solvable Markov decision process (LMDP) framework, which naturally enables a learning agent to draw on several macro-actions simultaneously to solve new tasks. We introduce the Multitask LMDP module, which maintains a parallel distributed representation of tasks and may be stacked to form deep hierarchies abstracted in space and time.
[ { "version": "v1", "created": "Thu, 8 Dec 2016 18:25:31 GMT" } ]
1,481,241,600,000
[ [ "Saxe", "Andrew M.", "" ], [ "Earle", "Adam", "" ], [ "Rosman", "Benjamin", "" ] ]
1612.02904
Davoud Mougouei
Davoud Mougouei and David Powers
GOTM: a Goal-Oriented Framework for Capturing Uncertainty of Medical Treatments
Idea Paper
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been widely recognized that uncertainty is an inevitable aspect of diagnosis and treatment of medical disorders. Such uncertainties hence, need to be considered in computerized medical models. The existing medical modeling techniques however, have mainly focused on capturing uncertainty associated with diagnosis of medical disorders while ignoring uncertainty of treatments. To tackle this issue, we have proposed using a fuzzy-based modeling and description technique for capturing uncertainties in treatment plans. We have further contributed a formal framework which allows for goal-oriented modeling and analysis of medical treatments.
[ { "version": "v1", "created": "Fri, 9 Dec 2016 04:02:34 GMT" }, { "version": "v2", "created": "Thu, 22 Oct 2020 05:51:20 GMT" } ]
1,603,411,200,000
[ [ "Mougouei", "Davoud", "" ], [ "Powers", "David", "" ] ]
1612.03055
Jessa Bekker
Jessa Bekker, Arjen Hommersom, Martijn Lappenschaar, Jesse Davis
Measuring Adverse Drug Effects on Multimorbidity using Tractable Bayesian Networks
Machine Learning for Health @ NIPS 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Managing patients with multimorbidity often results in polypharmacy: the prescription of multiple drugs. However, the long-term effects of specific combinations of drugs and diseases are typically unknown. In particular, drugs prescribed for one condition may result in adverse effects for the other. To investigate which types of drugs may affect the further progression of multimorbidity, we query models of diseases and prescriptions that are learned from primary care data. State-of-the-art tractable Bayesian network representations, on which such complex queries can be computed efficiently, are employed for these large medical networks. Our results confirm that prescriptions may lead to unintended negative consequences in further development of multimorbidity in cardiovascular diseases. Moreover, a drug treatment for one disease group may affect diseases of another group.
[ { "version": "v1", "created": "Fri, 9 Dec 2016 15:25:03 GMT" } ]
1,481,500,800,000
[ [ "Bekker", "Jessa", "" ], [ "Hommersom", "Arjen", "" ], [ "Lappenschaar", "Martijn", "" ], [ "Davis", "Jesse", "" ] ]
1612.03353
Seiji Isotani
Judson Bandeira, Ig Ibert Bittencourt, Patricia Espinheira and Seiji Isotani
FOCA: A Methodology for Ontology Evaluation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling an ontology is a hard and time-consuming task. Although methodologies are useful for ontologists to create good ontologies, they do not help with the task of evaluating the quality of an ontology to be reused. For these reasons, it is imperative to evaluate the quality of an ontology after constructing it or before reusing it. The few existing studies usually present only a set of criteria and questions, but no guidelines to evaluate the ontology. The effort to evaluate an ontology is very high, as there is a huge dependence on the evaluator's expertise to understand the criteria and questions in depth. Moreover, the evaluation is still very subjective. This study presents a novel methodology for ontology evaluation, taking into account three fundamental principles: i) it is based on the Goal, Question, Metric approach for empirical evaluation; ii) the goals of the methodologies are based on the roles of knowledge representations combined with specific evaluation criteria; iii) each ontology is evaluated according to the type of ontology. The methodology was empirically evaluated using different ontologists and ontologies of the same domain. The main contributions of this study are: i) defining a step-by-step approach to evaluate the quality of an ontology; ii) proposing an evaluation based on the roles of knowledge representations; iii) the explicit differentiation of the evaluation according to the type of the ontology; iv) a questionnaire to evaluate the ontologies; v) a statistical model that automatically calculates the quality of the ontologies.
[ { "version": "v1", "created": "Sat, 10 Dec 2016 22:38:42 GMT" }, { "version": "v2", "created": "Sat, 2 Sep 2017 18:21:55 GMT" } ]
1,504,569,600,000
[ [ "Bandeira", "Judson", "" ], [ "Bittencourt", "Ig Ibert", "" ], [ "Espinheira", "Patricia", "" ], [ "Isotani", "Seiji", "" ] ]
1612.03801
Stig Petersen
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich K\"uttler, Andrew Lefrancq, Simon Green, V\'ictor Vald\'es, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg and Stig Petersen
DeepMind Lab
11 pages, 8 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab can be used to study how autonomous artificial agents may learn complex tasks in large, partially observed, and visually diverse worlds. DeepMind Lab has a simple and flexible API enabling creative task-designs and novel AI-designs to be explored and quickly iterated upon. It is powered by a fast and widely recognised game engine, and tailored for effective use by the research community.
[ { "version": "v1", "created": "Mon, 12 Dec 2016 17:32:49 GMT" }, { "version": "v2", "created": "Tue, 13 Dec 2016 12:19:48 GMT" } ]
1,481,673,600,000
[ [ "Beattie", "Charles", "" ], [ "Leibo", "Joel Z.", "" ], [ "Teplyashin", "Denis", "" ], [ "Ward", "Tom", "" ], [ "Wainwright", "Marcus", "" ], [ "Küttler", "Heinrich", "" ], [ "Lefrancq", "Andrew", "" ], [ "Green", "Simon", "" ], [ "Valdés", "Víctor", "" ], [ "Sadik", "Amir", "" ], [ "Schrittwieser", "Julian", "" ], [ "Anderson", "Keith", "" ], [ "York", "Sarah", "" ], [ "Cant", "Max", "" ], [ "Cain", "Adam", "" ], [ "Bolton", "Adrian", "" ], [ "Gaffney", "Stephen", "" ], [ "King", "Helen", "" ], [ "Hassabis", "Demis", "" ], [ "Legg", "Shane", "" ], [ "Petersen", "Stig", "" ] ]
1612.04469
Kenrick Kenrick
Kenrick
Web-based Argumentation
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Assumption-Based Argumentation (ABA) is an argumentation framework that was proposed in the late 20th century. Since then, no solver has been implemented in a programming language which is easy to set up, and no solver has been interfaced to the web, which impedes public interest. This project aims to implement an ABA solver in a modern programming language that performs reasonably well and to interface it to the web for easier access by the public. This project has demonstrated the novelty of developing an ABA solver, which computes conflict-free, stable, admissible, grounded, ideal, and complete semantics, in the Python programming language, usable via an easy-to-use web interface for visualization of the argument and dispute trees. Experiments were conducted to determine the project's best configurations and to compare this project with proxdd, a state-of-the-art ABA solver, which has no web interface and computes fewer semantics. From the results of the experiments, this project's best configuration is achieved by utilizing the "pickle" technique and a tree caching technique. Using this best configuration, this project achieved a lower average runtime compared to proxdd. On another aspect, this project encountered more cases with exceptions compared to proxdd, which might be caused by this project computing more semantics and hence requiring more resources to do so. Hence, it can be said that this project runs comparably well to the state-of-the-art ABA solver proxdd. Future work on this project includes computational complexity analysis and efficiency analysis of the algorithms implemented, implementation of more semantics in the argumentation framework, and usability testing of the web interface.
[ { "version": "v1", "created": "Wed, 14 Dec 2016 03:21:32 GMT" } ]
1,481,760,000,000
[ [ "Kenrick", "", "" ] ]
1612.04791
Patrick Rodler
Patrick Rodler and Wolfgang Schmid and Kostyantyn Shchekotykhin
Scalable Computation of Optimized Queries for Sequential Diagnosis
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many model-based diagnosis applications it is impossible to provide a set of observations and/or measurements that allows identifying the real cause of a fault. Therefore, diagnosis systems often return many possible candidates, leaving the burden of selecting the correct diagnosis to a user. Sequential diagnosis techniques solve this problem by automatically generating a sequence of queries to some oracle. The answers to these queries provide additional information necessary to gradually restrict the search space by removing diagnosis candidates inconsistent with the answers. During query computation, existing sequential diagnosis methods often require the generation of many unnecessary query candidates and strongly rely on expensive logical reasoners. We tackle this issue by devising efficient heuristic query search methods. The proposed methods enable for the first time a completely reasoner-free query generation while at the same time guaranteeing optimality conditions, e.g. minimal cardinality or best understandability, of the returned query that existing methods cannot realize. Hence, the performance of this approach is independent of the (complexity of the) diagnosed system. Experiments conducted using real-world problems show that the new approach is highly scalable and outperforms existing methods by orders of magnitude.
[ { "version": "v1", "created": "Wed, 14 Dec 2016 20:15:36 GMT" }, { "version": "v2", "created": "Thu, 15 Dec 2016 18:24:55 GMT" }, { "version": "v3", "created": "Fri, 16 Dec 2016 17:26:02 GMT" } ]
1,482,105,600,000
[ [ "Rodler", "Patrick", "" ], [ "Schmid", "Wolfgang", "" ], [ "Shchekotykhin", "Kostyantyn", "" ] ]
1612.04876
Memo Akten
Memo Akten and Mick Grierson
Collaborative creativity with Monte-Carlo Tree Search and Convolutional Neural Networks
Presented at the Constructive Machine Learning workshop at NIPS 2016 as a poster and spotlight talk. 8 pages including 2 page references, 2 page appendix, 3 figures. Blog post (including videos) at https://medium.com/@memoakten/collaborative-creativity-with-monte-carlo-tree-search-and-convolutional-neural-networks-and-other-69d7107385a0
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a human-machine collaborative drawing environment in which an autonomous agent sketches images while optionally allowing a user to directly influence the agent's trajectory. We combine Monte Carlo Tree Search with image classifiers and test both shallow models (e.g. multinomial logistic regression) and deep Convolutional Neural Networks (e.g. LeNet, Inception v3). We found that using the shallow model, the agent produces a limited variety of images, which are noticeably recognisable by humans. However, using the deeper models, the agent produces a more diverse range of images, and while the agent remains very confident (99.99%) in having achieved its objective, to humans they mostly resemble unrecognisable 'random' noise. We relate this to recent research which also discovered that 'deep neural networks are easily fooled' \cite{Nguyen2015} and we discuss possible solutions and future directions for the research.
[ { "version": "v1", "created": "Wed, 14 Dec 2016 23:13:26 GMT" } ]
1,481,846,400,000
[ [ "Akten", "Memo", "" ], [ "Grierson", "Mick", "" ] ]
1612.05028
Oliver Kutz
Mihai Codescu, Eugen Kuksa, Oliver Kutz, Till Mossakowski, Fabian Neuhaus
Ontohub: A semantic repository for heterogeneous ontologies
Preprint, journal special issue
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontohub is a repository engine for managing distributed heterogeneous ontologies. The distributed nature enables communities to share and exchange their contributions easily. The heterogeneous nature makes it possible to integrate ontologies written in various ontology languages. Ontohub supports a wide range of formal logical and ontology languages, as well as various structuring and modularity constructs and inter-theory (concept) mappings, building on the OMG-standardized DOL language. Ontohub repositories are organised as Git repositories, thus inheriting all features of this popular version control system. Moreover, Ontohub is the first repository engine meeting a substantial amount of the requirements formulated in the context of the Open Ontology Repository (OOR) initiative, including an API for federation as well as support for logical inference and axiom selection.
[ { "version": "v1", "created": "Thu, 15 Dec 2016 11:48:13 GMT" } ]
1,481,846,400,000
[ [ "Codescu", "Mihai", "" ], [ "Kuksa", "Eugen", "" ], [ "Kutz", "Oliver", "" ], [ "Mossakowski", "Till", "" ], [ "Neuhaus", "Fabian", "" ] ]
1612.05497
Wen Jiang
Wen Jiang
A correlation coefficient of belief functions
19 pages, 1 figure
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How to manage conflict is still an open issue in Dempster-Shafer evidence theory. The correlation coefficient can be used to measure the similarity of evidence in Dempster-Shafer evidence theory. However, existing correlation coefficients of belief functions have some shortcomings. In this paper, a new correlation coefficient is proposed with many desirable properties. One of its applications is to measure the conflict degree among belief functions. Some numerical examples and comparisons demonstrate the effectiveness of the correlation coefficient.
[ { "version": "v1", "created": "Fri, 16 Dec 2016 14:58:17 GMT" }, { "version": "v2", "created": "Thu, 2 Feb 2017 03:29:42 GMT" } ]
1,486,080,000,000
[ [ "Jiang", "Wen", "" ] ]
1612.06528
Sarmimala Saikia
Sarmimala Saikia, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Puneet Agarwal, Richa Rawat
Neuro-symbolic EDA-based Optimisation using ILP-enhanced DBNs
9 pages, 7 figures, Cognitive Computation: Integrating Neural and Symbolic Approaches (Workshop at 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.), http://daselab.cs.wright.edu/nesy/CoCo2016/coco_nips_2016_pre-proceedings.pdf (page 78-86). arXiv admin note: substantial text overlap with arXiv:1608.01093
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate solving discrete optimisation problems using the estimation of distribution (EDA) approach via a novel combination of deep belief networks (DBN) and inductive logic programming (ILP). While DBNs are used to learn the structure of successively better feasible solutions, ILP enables the incorporation of domain-based background knowledge related to the goodness of solutions. Recent work showed that ILP could be an effective way to use domain knowledge in an EDA scenario. However, in a purely ILP-based EDA, sampling successive populations is either inefficient or not straightforward. In our Neuro-symbolic EDA, an ILP engine is used to construct a model for good solutions using domain-based background knowledge. These rules are introduced as Boolean features in the last hidden layer of DBNs used for EDA-based optimization. This incorporation of logical ILP features requires some changes while training and sampling from DBNs: (a) our DBNs need to be trained with data for units at the input layer as well as some units in an otherwise hidden layer, and (b) we would like the samples generated to be drawn from instances entailed by the logical model. We demonstrate the viability of our approach on instances of two optimisation problems: predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling. Our results are promising: (i) on each iteration of distribution estimation, samples obtained with an ILP-assisted DBN have a substantially greater proportion of good solutions than samples generated using a DBN without ILP features, and (ii) on termination of distribution estimation, samples obtained using an ILP-assisted DBN contain more near-optimal samples than samples from a DBN without ILP features. These results suggest that the use of ILP-constructed theories could be useful for incorporating complex domain knowledge into deep models for estimation-of-distribution based procedures.
[ { "version": "v1", "created": "Tue, 20 Dec 2016 06:56:12 GMT" } ]
1,483,228,800,000
[ [ "Saikia", "Sarmimala", "" ], [ "Vig", "Lovekesh", "" ], [ "Srinivasan", "Ashwin", "" ], [ "Shroff", "Gautam", "" ], [ "Agarwal", "Puneet", "" ], [ "Rawat", "Richa", "" ] ]
1612.06915
Neil Burch
Neil Burch, Martin Schmid, Matej Morav\v{c}\'ik, Michael Bowling
AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games
To appear at AAAI-17 Workshop on Computer Poker and Imperfect Information Games
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating agent performance when outcomes are stochastic and agents use randomized strategies can be challenging when there is limited data available. The variance of sampled outcomes may make the simple approach of Monte Carlo sampling inadequate. This is the case for agents playing heads-up no-limit Texas hold'em poker, where man-machine competitions have involved multiple days of consistent play and still not resulted in statistically significant conclusions even when the winner's margin is substantial. In this paper, we introduce AIVAT, a low variance, provably unbiased value assessment tool that uses an arbitrary heuristic estimate of state value, as well as the explicit strategy of a subset of the agents. Unlike existing techniques which reduce the variance from chance events, or only consider game ending actions, AIVAT reduces the variance both from choices by nature and by players with a known strategy. The resulting estimator in no-limit poker can reduce the number of hands needed to draw statistical conclusions by more than a factor of 10.
[ { "version": "v1", "created": "Tue, 20 Dec 2016 23:09:40 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2017 21:22:12 GMT" } ]
1,485,129,600,000
[ [ "Burch", "Neil", "" ], [ "Schmid", "Martin", "" ], [ "Moravčík", "Matej", "" ], [ "Bowling", "Michael", "" ] ]
1612.07555
J. G. Wolff
J Gerard Wolff
The SP Theory of Intelligence as a Foundation for the Development of a General, Human-Level Thinking Machine
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper summarises how the "SP theory of intelligence" and its realisation in the "SP computer model" simplifies and integrates concepts across artificial intelligence and related areas, and thus provides a promising foundation for the development of a general, human-level thinking machine, in accordance with the main goal of research in artificial general intelligence. The key to this simplification and integration is the powerful concept of "multiple alignment", borrowed and adapted from bioinformatics. This concept has the potential to be the "double helix" of intelligence, with as much significance for human-level intelligence as has DNA for biological sciences. Strengths of the SP system include: versatility in the representation of diverse kinds of knowledge; versatility in aspects of intelligence (including: strengths in unsupervised learning; the processing of natural language; pattern recognition at multiple levels of abstraction that is robust in the face of errors in data; several kinds of reasoning (including: one-step `deductive' reasoning; chains of reasoning; abductive reasoning; reasoning with probabilistic networks and trees; reasoning with 'rules'; nonmonotonic reasoning and reasoning with default values; Bayesian reasoning with 'explaining away'; and more); planning; problem solving; and more); seamless integration of diverse kinds of knowledge and diverse aspects of intelligence in any combination; and potential for application in several areas (including: helping to solve nine problems with big data; helping to develop human-level intelligence in autonomous robots; serving as a database with intelligence and with versatility in the representation and integration of several forms of knowledge; serving as a vehicle for medical knowledge and as an aid to medical diagnosis; and several more).
[ { "version": "v1", "created": "Thu, 22 Dec 2016 11:50:47 GMT" } ]
1,482,451,200,000
[ [ "Wolff", "J Gerard", "" ] ]
1612.07589
Wolfgang Faber
Wolfgang Faber, Mauro Vallati, Federico Cerutti, Massimiliano Giacomin
Solving Set Optimization Problems by Cardinality Optimization via Weak Constraints with an Application to Argumentation
Informal proceedings of the 1st Workshop on Trends and Applications of Answer Set Programming (TAASP 2016), Klagenfurt, Austria, 26 September 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimization - minimization or maximization - in the lattice of subsets is a frequent operation in Artificial Intelligence tasks. Examples are subset-minimal model-based diagnosis, nonmonotonic reasoning by means of circumscription, or preferred extensions in abstract argumentation. Finding the optimum among many admissible solutions is often harder than finding admissible solutions with respect to both computational complexity and methodology. This paper addresses the former issue by means of an effective method for finding subset-optimal solutions. It is based on the relationship between cardinality-optimal and subset-optimal solutions, and the fact that many logic-based declarative programming systems provide constructs for finding cardinality-optimal solutions, for example maximum satisfiability (MaxSAT) or weak constraints in Answer Set Programming (ASP). Clearly each cardinality-optimal solution is also a subset-optimal one, and if the language also allows for the addition of particular restricting constructs (both MaxSAT and ASP do) then all subset-optimal solutions can be found by an iterative computation of cardinality-optimal solutions. As a showcase, the computation of preferred extensions of abstract argumentation frameworks using the proposed method is studied.
[ { "version": "v1", "created": "Thu, 22 Dec 2016 13:20:02 GMT" } ]
1,482,451,200,000
[ [ "Faber", "Wolfgang", "" ], [ "Vallati", "Mauro", "" ], [ "Cerutti", "Federico", "" ], [ "Giacomin", "Massimiliano", "" ] ]
1612.08657
Olivier Auber
Antoine Saillenfest, Jean-Louis Dessalles, Olivier Auber
Role of Simplicity in Creative Behaviour: The Case of the Poietic Generator
This study was supported by grants from the programme Futur&Ruptures and from the 'Chaire Modelisation des Imaginaires, Innovation et Creation', http://www.computationalcreativity.net/iccc2016/posters-and-demos/
Proceedings of the Seventh International Conference on Computational Creativity (ICCC-2016). Paris, France
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We propose to apply Simplicity Theory (ST) to model interest in creative situations. ST has been designed to describe and predict interest in communication. Here we use ST to derive a decision rule that we apply to a simplified version of a creative game, the Poietic Generator. The decision rule produces what can be regarded as an elementary form of creativity. This study is meant as a proof of principle. It suggests that some creative actions may be motivated by the search for unexpected simplicity.
[ { "version": "v1", "created": "Thu, 22 Dec 2016 12:56:07 GMT" } ]
1,482,883,200,000
[ [ "Saillenfest", "Antoine", "" ], [ "Dessalles", "Jean-Louis", "" ], [ "Auber", "Olivier", "" ] ]
1612.08777
Joshua Friedman
Joshua S. Friedman
Automated timetabling for small colleges and high schools using huge integer programs
Errors corrected from version 1
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We formulate an integer program to solve a highly constrained academic timetabling problem at the United States Merchant Marine Academy. The IP instance that results from our real case study has approximately 170,000 rows and columns and solves to optimality in 4--24 hours using a commercial solver on a portable computer (near-optimal feasible solutions were often found in 4--12 hours). Our model is applicable to both high schools and small colleges who wish to deviate from group scheduling. We also solve a necessary preprocessing student subgrouping problem, which breaks up big groups of students into small groups so they can optimally fit into small-capacity classes.
[ { "version": "v1", "created": "Wed, 28 Dec 2016 00:50:16 GMT" }, { "version": "v2", "created": "Tue, 3 Jan 2017 18:24:48 GMT" } ]
1,483,488,000,000
[ [ "Friedman", "Joshua S.", "" ] ]
1612.09212
Rouven Bauer
Rouven Bauer
A hybrid approach to supervised machine learning for algorithmic melody composition
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we present an algorithm for composing monophonic melodies similar in style to those of a given, phrase-annotated, sample of melodies. For implementation, a hybrid approach incorporating parametric Markov models of higher order and a contour concept of phrases is used. This work is based on the master thesis of Thayabaran Kathiresan (2015). An online listening test conducted shows that enhancing a pure Markov model with musically relevant context, like count and planned melody contour, improves the result significantly.
[ { "version": "v1", "created": "Thu, 29 Dec 2016 17:36:05 GMT" } ]
1,483,056,000,000
[ [ "Bauer", "Rouven", "" ] ]
1612.09591
Matthias Nickles
Matthias Nickles
PrASP Report
Technical Report
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report describes the usage, syntax, semantics and core algorithms of the probabilistic inductive logic programming framework PrASP. PrASP is a research software which integrates non-monotonic reasoning based on Answer Set Programming (ASP), probabilistic inference and parameter learning. In contrast to traditional approaches to Probabilistic (Inductive) Logic Programming, our framework imposes only little restrictions on probabilistic logic programs. In particular, PrASP allows for ASP as well as First-Order Logic syntax, and for the annotation of formulas with point probabilities as well as interval probabilities. A range of widely configurable inference algorithms can be combined in a pipeline-like fashion, in order to cover a variety of use cases.
[ { "version": "v1", "created": "Fri, 30 Dec 2016 20:45:28 GMT" } ]
1,483,315,200,000
[ [ "Nickles", "Matthias", "" ] ]
1612.09593
Hamid Reza Hassanzadeh
Hamid Reza Hassanzadeh, Hadi Sadoghi Yazdi, Abedin Vahedian
Fuzzy Constraints Linear Discriminant Analysis
null
3rd Iranian Joint Congress on Intelligent Systems and Fuzzy Systems, 2009
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce a fuzzy constraints linear discriminant analysis (FC-LDA). FC-LDA tries to minimize misclassification error based on a modified perceptron criterion that benefits from handling the uncertainty near the decision boundary by means of a fuzzy linear programming approach with fuzzy resources. The proposed method has low computational complexity because of its linear characteristics and the ability to deal with noisy data with different degrees of tolerance. The results obtained verify the success of the algorithm when dealing with different problems. Comparing FC-LDA and LDA shows the superiority of the former in classification tasks.
[ { "version": "v1", "created": "Fri, 30 Dec 2016 20:48:33 GMT" } ]
1,483,315,200,000
[ [ "Hassanzadeh", "Hamid Reza", "" ], [ "Yazdi", "Hadi Sadoghi", "" ], [ "Vahedian", "Abedin", "" ] ]
1701.00287
Caelan Garrett
Caelan Reed Garrett, Tom\'as Lozano-P\'erez, and Leslie Pack Kaelbling
STRIPS Planning in Infinite Domains
11 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many robotic planning applications involve continuous actions with highly non-linear constraints, which cannot be modeled using modern planners that construct a propositional representation. We introduce STRIPStream: an extension of the STRIPS language which can model these domains by supporting the specification of blackbox generators to handle complex constraints. The outputs of these generators interact with actions through possibly infinite streams of objects and static predicates. We provide two algorithms which both reduce STRIPStream problems to a sequence of finite-domain planning problems. The representation and algorithms are entirely domain independent. We demonstrate our framework on simple illustrative domains, and then on a high-dimensional, continuous robotic task and motion planning domain.
[ { "version": "v1", "created": "Sun, 1 Jan 2017 20:37:51 GMT" }, { "version": "v2", "created": "Sun, 28 May 2017 01:08:00 GMT" } ]
1,496,102,400,000
[ [ "Garrett", "Caelan Reed", "" ], [ "Lozano-Pérez", "Tomás", "" ], [ "Kaelbling", "Leslie Pack", "" ] ]
1701.00349
Rohitash Chandra
Rohitash Chandra
An affective computational model for machine consciousness
under review
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past, several models of consciousness have become popular and have led to the development of models for machine consciousness with varying degrees of success and challenges for simulation and implementations. Moreover, affective computing attributes that involve emotions, behavior and personality have not been the focus of models of consciousness as they lacked motivation for deployment in software applications and robots. The affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans. Personality and affect can hence give an additional flavor to the computational model of consciousness in humanoid robotics. Recent advances in areas of machine learning with a focus on deep learning can further help in developing aspects of machine consciousness in areas that can better replicate human sensory perceptions such as speech recognition and vision. With such advancements, one encounters further challenges in developing models that can synchronize different aspects of affective computing. In this paper, we review some existing models of consciousness and present an affective computational model that would enable the human touch and feel for robotic systems.
[ { "version": "v1", "created": "Mon, 2 Jan 2017 09:48:47 GMT" } ]
1,483,401,600,000
[ [ "Chandra", "Rohitash", "" ] ]
1701.00464
Antonio Lieto
Antonio Lieto, Antonio Chella, Marcello Frixione
Conceptual Spaces for Cognitive Architectures: A Lingua Franca for Different Levels of Representation
31 pages, 3 figures in Biologically Inspired Cognitive Architectures, 2017
null
10.1016/j.bica.2016.10.005
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [Laird (2012)]) adopt a classical symbolic approach, some (e.g. LEABRA [O'Reilly and Munakata (2000)]) are based on a purely connectionist model, while others (e.g. CLARION [Sun (2006)]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatical representations and reasoning are also available [Kurup and Chandrasekaran (2007)]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter G\"ardenfors [G\"ardenfors (2000)] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by G\"ardenfors [G\"ardenfors (1997)] for defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one.
[ { "version": "v1", "created": "Mon, 2 Jan 2017 17:35:34 GMT" } ]
1,483,401,600,000
[ [ "Lieto", "Antonio", "" ], [ "Chella", "Antonio", "" ], [ "Frixione", "Marcello", "" ] ]
1701.00642
Paul Weng
Dajian Li and Paul Weng and Orkun Karabasoglu
Finding Risk-Averse Shortest Path with Time-dependent Stochastic Costs
accepted at MIWAI 2017
null
10.1007/978-3-319-49397-8_9
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we tackle the problem of risk-averse route planning in a transportation network with time-dependent and stochastic costs. To solve this problem, we propose an adaptation of the A* algorithm that accommodates any risk measure or decision criterion that is monotonic with first-order stochastic dominance. We also present a case study of our algorithm on the Manhattan, NYC, transportation network.
[ { "version": "v1", "created": "Tue, 3 Jan 2017 10:47:35 GMT" } ]
1,483,488,000,000
[ [ "Li", "Dajian", "" ], [ "Weng", "Paul", "" ], [ "Karabasoglu", "Orkun", "" ] ]
1701.00646
Paul Weng
Paul Weng
From Preference-Based to Multiobjective Sequential Decision-Making
accepted at MIWAI 2017
null
10.1007/978-3-319-49397-8_20
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a link between preference-based and multiobjective sequential decision-making. While transforming a multiobjective problem to a preference-based one is quite natural, the other direction is a bit less obvious. We present how this transformation (from preference-based to multiobjective) can be done under the classic condition that preferences over histories can be represented by additively decomposable utilities and that the decision criterion to evaluate policies in a state is based on expectation. This link yields a new source of multiobjective sequential decision-making problems (i.e., when reward values are unknown) and justifies the use of solving methods developed in one setting in the other one.
[ { "version": "v1", "created": "Tue, 3 Jan 2017 10:57:06 GMT" } ]
1,483,488,000,000
[ [ "Weng", "Paul", "" ] ]
1701.00833
Tshilidzi Marwala
I. Boulkaibet, T. Marwala, M.I. Friswell, H. Haddad Khodaparast and S. Adhikari
Fuzzy finite element model updating using metaheuristic optimization algorithms
This article was accepted by the 2017 International Modal Analysis Conference
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a non-probabilistic method based on fuzzy logic is used to update finite element models (FEMs). Model updating techniques use the measured data to improve the accuracy of numerical models of structures. However, the measured data are contaminated with experimental noise and the models are inaccurate due to randomness in the parameters. This kind of aleatory uncertainty is irreducible, and may decrease the accuracy of the finite element model updating process. However, uncertainty quantification methods can be used to identify the uncertainty in the updating parameters. In this paper, the uncertainties associated with the modal parameters are defined as fuzzy membership functions, while the model updating procedure is defined as an optimization problem at each {\alpha}-cut level. To determine the membership functions of the updated parameters, an objective function is defined and minimized using two metaheuristic optimization algorithms: ant colony optimization (ACO) and particle swarm optimization (PSO). A structural example is used to investigate the accuracy of the fuzzy model updating strategy using the PSO and ACO algorithms. Furthermore, the results obtained by the fuzzy finite element model updating are compared with the Bayesian model updating results.
[ { "version": "v1", "created": "Tue, 3 Jan 2017 20:58:55 GMT" } ]
1,483,574,400,000
[ [ "Boulkaibet", "I.", "" ], [ "Marwala", "T.", "" ], [ "Friswell", "M. I.", "" ], [ "Khodaparast", "H. Haddad", "" ], [ "Adhikari", "S.", "" ] ]
1701.00867
Abhishek Mishra
Nithyanand Kota, Abhishek Mishra, Sunil Srinivasa, Xi (Peter) Chen, Pieter Abbeel
A K-fold Method for Baseline Estimation in Policy Gradient Algorithms
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The high variance issue in unbiased policy-gradient methods such as VPG and REINFORCE is typically mitigated by adding a baseline. However, the baseline fitting itself suffers from the underfitting or the overfitting problem. In this paper, we develop a K-fold method for baseline estimation in policy gradient algorithms. The parameter K is the baseline estimation hyperparameter that can adjust the bias-variance trade-off in the baseline estimates. We demonstrate the usefulness of our approach via two state-of-the-art policy gradient algorithms on three MuJoCo locomotive control tasks.
[ { "version": "v1", "created": "Tue, 3 Jan 2017 23:29:04 GMT" } ]
1,483,574,400,000
[ [ "Kota", "Nithyanand", "" ], [ "Mishra", "Abhishek", "" ], [ "Srinivasa", "Sunil", "" ], [ "Chen", "Xi (Peter)", "" ], [ "Abbeel", "Pieter", "" ] ]
1701.01048
Roni Khardon
Roni Khardon and Scott Sanner
Stochastic Planning and Lifted Inference
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lifted probabilistic inference (Poole, 2003) and symbolic dynamic programming for lifted stochastic planning (Boutilier et al, 2001) were introduced around the same time as algorithmic efforts to use abstraction in stochastic systems. Over the years, these ideas evolved into two distinct lines of research, each supported by a rich literature. Lifted probabilistic inference focused on efficient arithmetic operations on template-based graphical models under a finite domain assumption while symbolic dynamic programming focused on supporting sequential decision-making in rich quantified logical action models and on open domain reasoning. Given their common motivation but different focal points, both lines of research have yielded highly complementary innovations. In this chapter, we aim to help close the gap between these two research areas by providing an overview of lifted stochastic planning from the perspective of probabilistic inference, showing strong connections to other chapters in this book. This also allows us to define Generalized Lifted Inference as a paradigm that unifies these areas and elucidates open problems for future research that can benefit both lifted inference and stochastic planning.
[ { "version": "v1", "created": "Wed, 4 Jan 2017 15:37:29 GMT" } ]
1,483,574,400,000
[ [ "Khardon", "Roni", "" ], [ "Sanner", "Scott", "" ] ]
1701.01724
Michael Bowling
Matej Morav\v{c}\'ik, Martin Schmid, Neil Burch, Viliam Lis\'y, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, Michael Bowling
DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker
null
null
10.1126/science.aam6960
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence has seen several breakthroughs in recent years, with games often serving as milestones. A common feature of these games is that players have perfect information. Poker is the quintessential game of imperfect information, and a longstanding challenge problem in artificial intelligence. We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning. In a study involving 44,000 hands of poker, DeepStack defeated with statistical significance professional poker players in heads-up no-limit Texas hold'em. The approach is theoretically sound and is shown to produce strategies that are more difficult to exploit than those of prior approaches.
[ { "version": "v1", "created": "Fri, 6 Jan 2017 18:56:49 GMT" }, { "version": "v2", "created": "Tue, 10 Jan 2017 04:35:28 GMT" }, { "version": "v3", "created": "Fri, 3 Mar 2017 21:17:05 GMT" } ]
1,488,844,800,000
[ [ "Moravčík", "Matej", "" ], [ "Schmid", "Martin", "" ], [ "Burch", "Neil", "" ], [ "Lisý", "Viliam", "" ], [ "Morrill", "Dustin", "" ], [ "Bard", "Nolan", "" ], [ "Davis", "Trevor", "" ], [ "Waugh", "Kevin", "" ], [ "Johanson", "Michael", "" ], [ "Bowling", "Michael", "" ] ]
1701.02388
Gabriel Murray
Gabriel Murray
Stoic Ethics for Artificial Agents
Final accepted version submitted to Canadian A.I. 2017 conference
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a position paper advocating the notion that Stoic philosophy and ethics can inform the development of ethical A.I. systems. This is in sharp contrast to most work on building ethical A.I., which has focused on Utilitarian or Deontological ethical theories. We relate ethical A.I. to several core Stoic notions, including the dichotomy of control, the four cardinal virtues, the ideal Sage, Stoic practices, and Stoic perspectives on emotion or affect. More generally, we put forward an ethical view of A.I. that focuses more on internal states of the artificial agent rather than on external actions of the agent. We provide examples relating to near-term A.I. systems as well as hypothetical superintelligent agents.
[ { "version": "v1", "created": "Mon, 9 Jan 2017 23:25:43 GMT" }, { "version": "v2", "created": "Tue, 28 Mar 2017 23:59:25 GMT" } ]
1,490,832,000,000
[ [ "Murray", "Gabriel", "" ] ]
1701.02543
Junbo Zhang
Junbo Zhang, Yu Zheng, Dekang Qi, Ruiyuan Li, Xiuwen Yi, Tianrui Li
Predicting Citywide Crowd Flows Using Deep Spatio-Temporal Residual Networks
21 pages, 16 figures. arXiv admin note: substantial text overlap with arXiv:1610.00081
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forecasting the flow of crowds is of great importance to traffic management and public safety, and very challenging as it is affected by many complex factors, including spatial dependencies (nearby and distant), temporal dependencies (closeness, period, trend), and external conditions (e.g., weather and events). We propose a deep-learning-based approach, called ST-ResNet, to collectively forecast two types of crowd flows (i.e. inflow and outflow) in each and every region of a city. We design an end-to-end structure of ST-ResNet based on unique properties of spatio-temporal data. More specifically, we employ the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic. For each property, we design a branch of residual convolutional units, each of which models the spatial properties of crowd traffic. ST-ResNet learns to dynamically aggregate the output of the three residual neural networks based on data, assigning different weights to different branches and regions. The aggregation is further combined with external factors, such as weather and day of the week, to predict the final traffic of crowds in each and every region. We have developed a real-time system based on Microsoft Azure Cloud, called UrbanFlow, providing the crowd flow monitoring and forecasting in Guiyang City of China. In addition, we present an extensive experimental evaluation using two types of crowd flows in Beijing and New York City (NYC), where ST-ResNet outperforms nine well-known baselines.
[ { "version": "v1", "created": "Tue, 10 Jan 2017 12:12:39 GMT" } ]
1,484,092,800,000
[ [ "Zhang", "Junbo", "" ], [ "Zheng", "Yu", "" ], [ "Qi", "Dekang", "" ], [ "Li", "Ruiyuan", "" ], [ "Yi", "Xiuwen", "" ], [ "Li", "Tianrui", "" ] ]
1701.02545
Daniel Meana-Llori\'an
Daniel Meana-Llori\'an, Cristian Gonz\'alez Garc\'ia, B. Cristina Pelayo G-Bustelo, Juan Manuel Cueva Lovelle, Nestor Garcia-Fernandez
IoFClime: The fuzzy logic and the Internet of Things to control indoor temperature regarding the outdoor ambient conditions
null
null
10.1016/j.future.2016.11.020
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Internet of Things is arriving in our homes and cities through fields already known as Smart Homes, Smart Cities, or Smart Towns. Monitoring the environmental conditions of cities can help make the indoor locations of those cities more comfortable for the people who stay there. One way to improve indoor conditions is efficient temperature control; however, it depends on many factors, such as the different combinations of outdoor temperature and humidity. Therefore, adjusting the indoor temperature is not simply a matter of setting one value according to another. There are many more factors to take into consideration, hence the traditional logic based on binary states cannot be used. Many problems cannot be solved with a set of binary solutions, and a new way of development is needed. Fuzzy logic is able to interpret many states, more than two, giving computers the capacity to react in a way similar to people. In this paper we propose a new approach to control the temperature using the Internet of Things together with its platforms and fuzzy logic, regarding not only the indoor temperature but also the outdoor temperature and humidity, in order to save energy and to set a more comfortable environment for users. Finally, we conclude that the fuzzy approach allows us to achieve an energy saving of around 40% and thus save money.
[ { "version": "v1", "created": "Tue, 10 Jan 2017 12:15:59 GMT" } ]
1,484,092,800,000
[ [ "Meana-Llorián", "Daniel", "" ], [ "García", "Cristian González", "" ], [ "G-Bustelo", "B. Cristina Pelayo", "" ], [ "Lovelle", "Juan Manuel Cueva", "" ], [ "Garcia-Fernandez", "Nestor", "" ] ]
1701.03000
Athanasios Karapantelakis
Aneta Vulgarakis Feljan, Athanasios Karapantelakis, Leonid Mokrushin, Hongxin Liang, Rafia Inam, Elena Fersman, Carlos R.B. Azevedo, Klaus Raizer, Ricardo S. Souza
A Framework for Knowledge Management and Automated Reasoning Applied on Intelligent Transport Systems
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyber-Physical Systems in general, and Intelligent Transport Systems (ITS) in particular, use heterogeneous data sources combined with problem solving expertise in order to make critical decisions that may lead to some form of action, e.g., driver notifications, changes of traffic light signals, or braking to prevent an accident. Currently, a major part of the decision process is done by human domain experts, which is time-consuming, tedious and error-prone. Additionally, due to the intrinsic nature of knowledge possession, this decision process cannot be easily replicated or reused. Therefore, there is a need for automating the reasoning processes by providing computational systems with a formal representation of the domain knowledge and a set of methods to process that knowledge. In this paper, we propose a knowledge model that can be used to express both declarative knowledge about the systems' components, their relations and their current state, as well as procedural knowledge representing possible system behavior. In addition, we introduce a framework for knowledge management and automated reasoning (KMARF). The idea behind KMARF is to automatically select an appropriate problem solver based on formalized reasoning expertise in the knowledge base, and to convert a problem definition to the corresponding format. This approach automates reasoning, thus reducing operational costs, and enables reusability of knowledge and methods across different domains. We illustrate the approach on a transportation planning use case.
[ { "version": "v1", "created": "Wed, 11 Jan 2017 15:03:18 GMT" } ]
1,484,179,200,000
[ [ "Feljan", "Aneta Vulgarakis", "" ], [ "Karapantelakis", "Athanasios", "" ], [ "Mokrushin", "Leonid", "" ], [ "Liang", "Hongxin", "" ], [ "Inam", "Rafia", "" ], [ "Fersman", "Elena", "" ], [ "Azevedo", "Carlos R. B.", "" ], [ "Raizer", "Klaus", "" ], [ "Souza", "Ricardo S.", "" ] ]
1701.03037
Yutaka Nagashima
Yutaka Nagashima
Towards Smart Proof Search for Isabelle
Accepted at AITP2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the recent progress in automatic theorem provers, proof engineers are still suffering from the lack of powerful proof automation. In this position paper we first report our proof strategy language based on a meta-tool approach. Then, we propose an AI-based approach to drastically improve proof automation for Isabelle, while identifying three major challenges we plan to address for this objective.
[ { "version": "v1", "created": "Tue, 10 Jan 2017 08:52:31 GMT" } ]
1,484,179,200,000
[ [ "Nagashima", "Yutaka", "" ] ]
1701.03322
Yi Zhou Dr.
Yi Zhou
From First-Order Logic to Assertional Logic
arXiv admin note: text overlap with arXiv:1603.03511
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
First-Order Logic (FOL) is widely regarded as one of the most important foundations for knowledge representation. Nevertheless, in this paper, we argue that FOL has several critical issues for this purpose. Instead, we propose an alternative called assertional logic, in which all syntactic objects are categorized as set theoretic constructs including individuals, concepts and operators, and all kinds of knowledge are formalized by equality assertions. We first present a primitive form of assertional logic that uses minimal assumed knowledge and constructs. Then, we show how to extend it by definitions, which are special kinds of knowledge, i.e., assertions. We argue that assertional logic, although simpler, is more expressive and extensible than FOL. As a case study, we show how assertional logic can be used to unify logic and probability, and more building blocks in AI.
[ { "version": "v1", "created": "Thu, 12 Jan 2017 12:25:42 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2017 06:09:21 GMT" } ]
1,493,596,800,000
[ [ "Zhou", "Yi", "" ] ]
1701.03500
Grant Molnar
Grant Molnar
A Savage-Like Axiomatization for Nonstandard Expected Utility
The alleged result of this paper is incorrect, the transfer principle applies only to first-order statements over standard structures, but I attempted to apply it over second-order statements as well. I believe a proof in the same vein as the one in this paper could be developed, but much greater care would need to be taken to respect the difference between internal and external sets
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since Leonard Savage's epoch-making "Foundations of Statistics", Subjective Expected Utility Theory has been the presumptive model for decision-making. Savage provided an act-based axiomatization of standard expected utility theory. In this article, we provide a Savage-like axiomatization of nonstandard expected utility theory. It corresponds to a weakening of Savage's 6th axiom.
[ { "version": "v1", "created": "Thu, 12 Jan 2017 20:39:03 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2017 00:27:55 GMT" }, { "version": "v3", "created": "Mon, 30 Jan 2017 01:18:29 GMT" }, { "version": "v4", "created": "Fri, 3 Mar 2017 16:48:00 GMT" }, { "version": "v5", "created": "Sun, 22 Oct 2017 22:55:38 GMT" }, { "version": "v6", "created": "Mon, 13 Nov 2017 16:32:00 GMT" }, { "version": "v7", "created": "Thu, 8 Feb 2018 20:54:04 GMT" } ]
1,518,393,600,000
[ [ "Molnar", "Grant", "" ] ]
1701.03571
Oleksii Tyshchenko Dr
Zhengbing Hu, Yevgeniy V. Bodyanskiy, Oleksii K. Tyshchenko, Viktoriia O. Samitova
Fuzzy Clustering Data Given in the Ordinal Scale
null
I.J. Intelligent Systems and Applications, 2017, Vol. 9, No. 1, pp. 67-74
10.5815/ijisa.2017.01.07
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fuzzy clustering algorithm for multidimensional data is proposed in this article. The data is described by vectors whose components are linguistic variables defined in an ordinal scale. The obtained results confirm the efficiency of the proposed approach.
[ { "version": "v1", "created": "Fri, 13 Jan 2017 06:32:14 GMT" } ]
1,484,524,800,000
[ [ "Hu", "Zhengbing", "" ], [ "Bodyanskiy", "Yevgeniy V.", "" ], [ "Tyshchenko", "Oleksii K.", "" ], [ "Samitova", "Viktoriia O.", "" ] ]
1701.03714
Nir Oren
Zimi Li and Nir Oren and Simon Parsons
On the links between argumentation-based reasoning and nonmonotonic reasoning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we investigate the links between instantiated argumentation systems and the axioms for non-monotonic reasoning described in [9] with the aim of characterising the nature of argument based reasoning. In doing so, we consider two possible interpretations of the consequence relation, and describe which axioms are met by ASPIC+ under each of these interpretations. We then consider the links between these axioms and the rationality postulates. Our results indicate that argument based reasoning as characterised by ASPIC+ is - according to the axioms of [9] - non-cumulative and non-monotonic, and therefore weaker than the weakest non-monotonic reasoning systems they considered possible. This weakness underpins ASPIC+'s success in modelling other reasoning systems, and we conclude by considering the relationship between ASPIC+ and other weak logical systems.
[ { "version": "v1", "created": "Fri, 13 Jan 2017 16:33:52 GMT" } ]
1,484,524,800,000
[ [ "Li", "Zimi", "" ], [ "Oren", "Nir", "" ], [ "Parsons", "Simon", "" ] ]
1701.03868
Steven Hansen
Steven Stenberg Hansen
Minimally Naturalistic Artificial Intelligence
Accepted into the NIPS 2016 Workshop on Machine Intelligence (M.A.I.N.)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid advancement of machine learning techniques has re-energized research into general artificial intelligence. While the idea of domain-agnostic meta-learning is appealing, this emerging field must come to terms with its relationship to human cognition and the statistics and structure of the tasks humans perform. The position of this article is that only by aligning our agents' abilities and environments with those of humans do we stand a chance at developing general artificial intelligence (GAI). A broad reading of the famous 'No Free Lunch' theorem is that there is no universally optimal inductive bias or, equivalently, that bias-free learning is impossible. This follows from the fact that there are an infinite number of ways to extrapolate data, any of which might be the one used by the data generating environment; an inductive bias prefers some of these extrapolations to others, which lowers performance in environments using these adversarial extrapolations. We may posit that the optimal GAI is the one that maximally exploits the statistics of its environment to create its inductive bias; accepting the fact that this agent is guaranteed to be extremely sub-optimal for some alternative environments. This trade-off appears benign when thinking about the environment as being the physical universe, as performance on any fictive universe is obviously irrelevant. But, we should expect a sharper inductive bias if we further constrain our environment. Indeed, we implicitly do so by defining GAI in terms of accomplishing what humans consider useful. One common version of this is the need for 'common-sense reasoning', which implicitly appeals to the statistics of the physical universe as perceived by humans.
[ { "version": "v1", "created": "Sat, 14 Jan 2017 01:57:31 GMT" } ]
1,484,611,200,000
[ [ "Hansen", "Steven Stenberg", "" ] ]
1701.04569
Timothy Ganesan PhD
T.Ganesan, P.Vasant, I.Elamvazuthi
Multiobjective Optimization of Solar Powered Irrigation System with Fuzzy Type-2 Noise Modelling
27 pages, 12 Figures
2016, Emerging Research on Applied Fuzzy Sets and Intuitionistic Fuzzy Matrices, IGI Global, 189 pages
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimization is becoming a crucial element in industrial applications involving sustainable alternative energy systems. During the design of such systems, the engineer/decision maker often encounters noise factors (e.g. solar insolation and ambient temperature fluctuations) when the system interacts with the environment. In this chapter, the sizing and design optimization of a solar powered irrigation system is considered. This problem is multivariate, noisy, nonlinear and multiobjective. The design problem was tackled by first using the Fuzzy Type-2 approach to model the noise factors. Then, the Bacterial Foraging Algorithm (BFA) (in the context of a weighted sum framework) was employed to solve this multiobjective fuzzy design problem. This method was used to construct the approximate Pareto frontier as well as to identify the best solution option in a fuzzy setting. Comprehensive analyses and discussions are performed on the generated numerical results with respect to the implemented solution methods.
[ { "version": "v1", "created": "Tue, 17 Jan 2017 08:52:48 GMT" } ]
1,484,697,600,000
[ [ "Ganesan", "T.", "" ], [ "Vasant", "P.", "" ], [ "Elamvazuthi", "I.", "" ] ]
1701.04663
Varun Raj Kompella
Varun Raj Kompella and Laurenz Wiskott
Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments
8 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A compact information-rich representation of the environment, also called a feature abstraction, can simplify a robot's task of mapping its raw sensory inputs to useful action sequences. However, in environments that are non-stationary and only partially observable, a single abstraction is probably not sufficient to encode most variations. Therefore, learning multiple sets of spatially or temporally local, modular abstractions of the inputs would be beneficial. How can a robot learn these local abstractions without a teacher? More specifically, how can it decide from where and when to start learning a new abstraction? A recently proposed algorithm called Curious Dr. MISFA addresses this problem. The algorithm is based on two underlying learning principles called artificial curiosity and slowness. The former is used to make the robot self-motivated to explore by rewarding itself whenever it makes progress learning an abstraction; the latter is used to update the abstraction by extracting slowly varying components from raw sensory inputs. Curious Dr. MISFA's application is, however, limited to discrete domains constrained by a pre-defined state space, and it has design limitations that make it unstable in certain situations. This paper presents a significant improvement that is applicable to continuous environments, is computationally less expensive, is simpler to use with fewer hyperparameters, and is stable in certain non-stationary environments. We demonstrate the efficacy and stability of our method in a vision-based robot simulator.
[ { "version": "v1", "created": "Tue, 17 Jan 2017 13:24:37 GMT" } ]
1,484,697,600,000
[ [ "Kompella", "Varun Raj", "" ], [ "Wiskott", "Laurenz", "" ] ]
1701.05059
Abir M'Baya
Abir M'Baya (DISP), Jannik Laval (DISP), Nejib Moalla (DISP), Yacine Ouzrout (DISP), Abdelaziz Bouras
Ontology based system to guide internship assignment process
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Internship assignment is a complicated process for universities, since it is necessary to take into account a multiplicity of variables to establish a compromise between companies' requirements and the student competencies acquired during university training. These variables build up a complex relations map that requires the formulation of an exhaustive and rigorous conceptual scheme. In this research, a domain ontological model is presented to support student decision making about internship opportunities within the education system of the University Lumiere Lyon 2 (ULL). The ontology is designed and created using a methodological approach that offers the possibility of improving progressive knowledge creation, capture and articulation. In this paper, we strike a balance between the demands of the companies and the capabilities of the students. This is done through the establishment of an ontological model of an educational learners' profile and of the internship postings, which are written in free text using uncontrolled vocabulary. Furthermore, we outline the semantic matching process, which improves the quality of query results.
[ { "version": "v1", "created": "Wed, 18 Jan 2017 13:38:36 GMT" } ]
1,484,784,000,000
[ [ "'Baya", "Abir M", "", "DISP" ], [ "Laval", "Jannik", "", "DISP" ], [ "Moalla", "Nejib", "", "DISP" ], [ "Ouzrout", "Yacine", "", "DISP" ], [ "Bouras", "Abdelaziz", "" ] ]
1701.05226
Tarek Richard Besold
Tarek R. Besold, Artur d'Avila Garcez, Keith Stenning, Leendert van der Torre, Michiel van Lambalgen
Reasoning in Non-Probabilistic Uncertainty: Logic Programming and Neural-Symbolic Computing as Examples
Forthcoming with DOI 10.1007/s11023-017-9428-3 in the Special Issue "Reasoning with Imperfect Information and Knowledge" of Minds and Machines (2017). The final publication will be available at http://link.springer.com. --- Changes to previous version: Fixed some typos and a broken reference
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article aims to achieve two goals: to show that probability is not the only way of dealing with uncertainty (and even more, that there are kinds of uncertainty which are for principled reasons not addressable with probabilistic means); and to provide evidence that logic-based methods can well support reasoning with uncertainty. For the latter claim, two paradigmatic examples are presented: Logic Programming with Kleene semantics for modelling reasoning from information in a discourse, to an interpretation of the state of affairs of the intended model, and a neural-symbolic implementation of Input/Output logic for dealing with uncertainty in dynamic normative contexts.
[ { "version": "v1", "created": "Wed, 18 Jan 2017 20:38:55 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2017 15:36:37 GMT" } ]
1,488,412,800,000
[ [ "Besold", "Tarek R.", "" ], [ "Garcez", "Artur d'Avila", "" ], [ "Stenning", "Keith", "" ], [ "van der Torre", "Leendert", "" ], [ "van Lambalgen", "Michiel", "" ] ]
1701.05291
Zhipeng Huang
Zhipeng Huang and Nikos Mamoulis
Heterogeneous Information Network Embedding for Meta Path based Proximity
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A network embedding is a representation of a large graph in a low-dimensional space, where vertices are modeled as vectors. The objective of a good embedding is to preserve the proximity between vertices in the original graph. This way, typical search and mining methods can be applied in the embedded space with the help of off-the-shelf multidimensional indexing approaches. Existing network embedding techniques focus on homogeneous networks, where all vertices are considered to belong to a single class.
[ { "version": "v1", "created": "Thu, 19 Jan 2017 04:00:46 GMT" } ]
1,484,870,400,000
[ [ "Huang", "Zhipeng", "" ], [ "Mamoulis", "Nikos", "" ] ]
1701.06049
James MacGlashan
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David Roberts, Matthew E. Taylor, Michael L. Littman
Interactive Learning from Policy-Dependent Human Feedback
8 pages + references, 5 figures
International Conference on Machine Learning. PMLR, 2017
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has made the assumption that the feedback people provide for decisions depends on the behavior they are teaching and is independent of the learner's current policy. We present empirical results that show this assumption to be false -- whether human trainers give positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce {\em Convergent Actor-Critic by Humans} (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
[ { "version": "v1", "created": "Sat, 21 Jan 2017 16:37:41 GMT" }, { "version": "v2", "created": "Sat, 28 Jan 2023 17:02:34 GMT" } ]
1,675,123,200,000
[ [ "MacGlashan", "James", "" ], [ "Ho", "Mark K", "" ], [ "Loftin", "Robert", "" ], [ "Peng", "Bei", "" ], [ "Wang", "Guan", "" ], [ "Roberts", "David", "" ], [ "Taylor", "Matthew E.", "" ], [ "Littman", "Michael L.", "" ] ]
1701.06167
\c{C}a\u{g}r{\i} Latifo\u{g}lu
\c{C}a\u{g}r{\i} Latifo\u{g}lu
Binary Matrix Guessing Problem
9 pages, 4 tables reason for withdrawal: Paper will be rewritten with experiments replicated on verified and validated hardware and software
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the Binary Matrix Guessing Problem and provide two algorithms to solve it. The first algorithm, the Elementwise Probing Algorithm (EPA), is very fast under a score that utilizes the Frobenius distance. The second algorithm, the Additive Reinforcement Learning Algorithm, combines ideas from the perceptron algorithm and reinforcement learning. This algorithm is significantly slower than the first one, but it is less restrictive and generalizes better. We compare the computational performance of both algorithms and provide numerical results.
[ { "version": "v1", "created": "Sun, 22 Jan 2017 14:19:25 GMT" }, { "version": "v2", "created": "Tue, 16 Oct 2018 10:33:17 GMT" } ]
1,539,734,400,000
[ [ "Latifoğlu", "Çağrı", "" ] ]
1701.06388
Emmanuel Hebrard
Emmanuel H\'ebrard (LAAS-ROC), Marie-Jos\'e Huguet (LAAS-ROC), Daniel Veysseire (LAAS-ROC), Ludivine Sauvan (LAAS-ROC), Bertrand Cabon
Constraint programming for planning test campaigns of communications satellites
null
Constraints, Springer Verlag, 2017, 22, pp.73 - 89
10.1007/s10601-016-9254-x
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The payload of communications satellites must go through a series of tests to assess its ability to survive in space. Each test requires some equipment of the payload to be active, which has an impact on the temperature of the payload. Sequencing these tests in a way that ensures the thermal stability of the payload and minimizes the overall duration of the test campaign is a very important objective for satellite manufacturers. The problem can be decomposed into two sub-problems corresponding to two objectives: First, the number of distinct configurations necessary to run the tests must be minimized. This can be modeled as packing the tests into configurations, and we introduce a set of implied constraints to improve the lower bound of the model. Second, the tests must be sequenced so that the number of times an equipment unit has to be switched on or off is minimized. We model this aspect using the constraint Switch, where a buffer with limited capacity represents the currently active equipment units, and we introduce an improvement of the propagation algorithm for this constraint. We then introduce a search strategy in which we sequentially solve the sub-problems (packing and sequencing). Experiments conducted on real and random instances show the respective interest of our contributions.
[ { "version": "v1", "created": "Mon, 23 Jan 2017 13:48:35 GMT" } ]
1,485,216,000,000
[ [ "Hébrard", "Emmanuel", "", "LAAS-ROC" ], [ "Huguet", "Marie-José", "", "LAAS-ROC" ], [ "Veysseire", "Daniel", "", "LAAS-ROC" ], [ "Sauvan", "Ludivine", "", "LAAS-ROC" ], [ "Cabon", "Bertrand", "" ] ]
1701.06635
Abhinav Jauhri
Abhinav Jauhri, Brian Foo, Jerome Berclaz, Chih Chi Hu, Radek Grzeszczuk, Vasu Parameswaran, John Paul Shen
Space-Time Graph Modeling of Ride Requests Based on Real-World Data
Accepted at AAAI-17 Workshop on AI and OR for Social Good (AIORSocGood-17)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on modeling ride requests and their variation over location and time, based on an analysis of extensive real-world data from a ride-sharing service. We introduce a graph model that captures the spatial and temporal variability of ride requests and the potential for ride pooling. We discover that these ride request graphs exhibit a well-known property called the densification power law, often found in real graphs modeling human behavior. We show that the pattern of ride requests and the potential of ride pooling for a city can be characterized by the densification factor of its ride request graphs. Previous work has shown that it is possible to automatically generate synthetic versions of such graphs that exhibit a given densification factor. We present an algorithm for the automatic generation of synthetic ride request graphs that closely match the densification factor of graphs built from actual ride request data.
[ { "version": "v1", "created": "Mon, 23 Jan 2017 21:18:33 GMT" } ]
1,485,302,400,000
[ [ "Jauhri", "Abhinav", "" ], [ "Foo", "Brian", "" ], [ "Berclaz", "Jerome", "" ], [ "Hu", "Chih Chi", "" ], [ "Grzeszczuk", "Radek", "" ], [ "Parameswaran", "Vasu", "" ], [ "Shen", "John Paul", "" ] ]
1701.06699
Jeremy Morton
Alex Kuefler, Jeremy Morton, Tim Wheeler, Mykel Kochenderfer
Imitating Driver Behavior with Generative Adversarial Networks
8 pages, 6 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to accurately predict and simulate human driving behavior is critical for the development of intelligent transportation systems. Traditional modeling methods have employed simple parametric models and behavioral cloning. This paper adopts a method for overcoming the problem of cascading errors inherent in prior approaches, resulting in realistic behavior that is robust to trajectory perturbations. We extend Generative Adversarial Imitation Learning to the training of recurrent policies, and we demonstrate that our model outperforms rule-based controllers and maximum likelihood models in realistic highway simulations. Our model reproduces emergent behaviors of human drivers, such as lane change rate, while maintaining realistic control over long time horizons.
[ { "version": "v1", "created": "Tue, 24 Jan 2017 00:59:42 GMT" } ]
1,485,302,400,000
[ [ "Kuefler", "Alex", "" ], [ "Morton", "Jeremy", "" ], [ "Wheeler", "Tim", "" ], [ "Kochenderfer", "Mykel", "" ] ]
1701.07657
Giovanni Sileno
Giovanni Sileno
Operationalizing Declarative and Procedural Knowledge: a Benchmark on Logic Programming Petri Nets (LPPNs)
draft version -- updated
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modelling, specifying, and reasoning about complex systems requires processing declarative and procedural aspects of the target domain in an integrated fashion. The paper reports on an experiment conducted with a propositional version of Logic Programming Petri Nets (LPPNs), a notation extending Petri nets with logic programming constructs. Two semantics are presented: a denotational semantics that fully maps the notation to ASP via the Event Calculus, and a hybrid operational semantics that processes the causal mechanisms separately via Petri nets, and the constraints associated with objects and events via Answer Set Programming (ASP). These two alternative specifications enable an empirical evaluation in terms of computational efficiency. Experimental results show that the hybrid semantics is more efficient w.r.t. sequences, whereas the two semantics follow the same behaviour w.r.t. branchings (although the denotational one performs better in absolute terms).
[ { "version": "v1", "created": "Thu, 26 Jan 2017 11:21:50 GMT" }, { "version": "v2", "created": "Fri, 31 Jul 2020 23:08:48 GMT" } ]
1,596,499,200,000
[ [ "Sileno", "Giovanni", "" ] ]
1701.08306
Zohreh Shams
Zohreh Shams, Marina De Vos, Julian Padget and Wamberto W. Vasconcelos
Practical Reasoning with Norms for Autonomous Software Agents (Full Edition)
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous software agents operating in dynamic environments need to constantly reason about actions in pursuit of their goals, while taking into consideration norms which might be imposed on those actions. Normative practical reasoning supports agents making decisions about what is best for them to (not) do in a given situation. What makes practical reasoning challenging is the interplay between goals that agents are pursuing and the norms that the agents are trying to uphold. We offer a formalisation to allow agents to plan for multiple goals and norms in the presence of durative actions that can be executed concurrently. We compare plans based on decision-theoretic notions (i.e. utility) such that the utility gain of goals and utility loss of norm violations are the basis for this comparison. The set of optimal plans consists of plans that maximise the overall utility, each of which can be chosen by the agent to execute. We provide an implementation of our proposal in Answer Set Programming, thus allowing us to state the original problem in terms of a logic program that can be queried for solutions with specific properties. The implementation is proven to be sound and complete.
[ { "version": "v1", "created": "Sat, 28 Jan 2017 17:55:04 GMT" } ]
1,485,820,800,000
[ [ "Shams", "Zohreh", "" ], [ "De Vos", "Marina", "" ], [ "Padget", "Julian", "" ], [ "Vasconcelos", "Wamberto W.", "" ] ]
1701.08317
Sarath Sreedharan
Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang and Subbarao Kambhampati
Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior. Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios where the humans have domain and task models that differ significantly from that used by the AI system. We posit that the explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a "model reconciliation problem" (MRP), where the AI system in effect suggests changes to the human's model, so as to make its plan be optimal with respect to that changed human model. We will study the properties of such explanations, present algorithms for automatically computing them, and evaluate the performance of the algorithms.
[ { "version": "v1", "created": "Sat, 28 Jan 2017 19:22:52 GMT" }, { "version": "v2", "created": "Sun, 26 Feb 2017 22:39:38 GMT" }, { "version": "v3", "created": "Mon, 24 Apr 2017 15:54:37 GMT" }, { "version": "v4", "created": "Sun, 28 May 2017 03:24:37 GMT" }, { "version": "v5", "created": "Tue, 30 May 2017 21:31:24 GMT" } ]
1,496,275,200,000
[ [ "Chakraborti", "Tathagata", "" ], [ "Sreedharan", "Sarath", "" ], [ "Zhang", "Yu", "" ], [ "Kambhampati", "Subbarao", "" ] ]
1701.08546
Marc Sol\'e Sim\'o
Marc Sol\'e, Victor Munt\'es-Mulero, Annie Ibrahim Rana, Giovani Estrada
Survey on Models and Techniques for Root-Cause Analysis
18 pages, 222 references
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automation and computer intelligence to support complex human decisions become essential for managing large and distributed systems in the Cloud and IoT era. Understanding the root cause of an observed symptom in a complex system has been a major problem for decades. As industry dives into the IoT world and the amount of data generated per year grows at an amazing speed, an important question is how to find appropriate mechanisms to determine root causes that can handle huge amounts of data or provide valuable feedback in real time. While many survey papers aim at summarizing the landscape of techniques for modelling system behavior and inferring the root cause of a problem based on the resulting models, none of them focuses on analyzing how the different techniques in the literature fit growing requirements in terms of performance and scalability. In this survey, we provide a review of root-cause analysis focusing on these particular aspects. We also provide guidance on choosing the best root-cause analysis strategy depending on the requirements of a particular system and application.
[ { "version": "v1", "created": "Mon, 30 Jan 2017 11:17:14 GMT" }, { "version": "v2", "created": "Mon, 3 Jul 2017 13:01:07 GMT" } ]
1,499,126,400,000
[ [ "Solé", "Marc", "" ], [ "Muntés-Mulero", "Victor", "" ], [ "Rana", "Annie Ibrahim", "" ], [ "Estrada", "Giovani", "" ] ]
1701.08665
Xiaodong Pan
Xiaodong Pan, Yang Xu
Redefinition of the concept of fuzzy set based on vague partition from the perspective of axiomatization
25 pages. arXiv admin note: substantial text overlap with arXiv:1506.07821
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Based on an in-depth analysis of the essence and features of vague phenomena, this paper focuses on establishing the axiomatic foundation of membership degree theory for vague phenomena, and presents an axiomatic system to govern membership degrees and their interconnections. On this basis, the concept of vague partition is introduced; further, the concept of fuzzy set introduced by Zadeh in 1965 is redefined based on vague partition from the perspective of axiomatization. The thesis defended in this paper is that the relationship among vague attribute values should be the starting point for recognizing and modeling vague phenomena from a quantitative view.
[ { "version": "v1", "created": "Fri, 27 Jan 2017 11:27:45 GMT" } ]
1,486,339,200,000
[ [ "Pan", "Xiaodong", "" ], [ "Xu", "Yang", "" ] ]
1701.08709
Fred Glover
Fred Glover
Diversification Methods for Zero-One Optimization
28 pages, 7 illustrations, 4 pseudocodes
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce new diversification methods for zero-one optimization that significantly extend strategies previously introduced in the setting of metaheuristic search. Our methods incorporate easily implemented strategies for partitioning assignments of values to variables, accompanied by processes called augmentation and shifting which create greater flexibility and generality. We then show how the resulting collection of diversified solutions can be further diversified by means of permutation mappings, which equally can be used to generate diversified collections of permutations for applications such as scheduling and routing. These methods can be applied to non-binary vectors by the use of binarization procedures and by Diversification-Based Learning (DBL) procedures which also provide connections to applications in clustering and machine learning. Detailed pseudocode and numerical illustrations are provided to show the operation of our methods and the collections of solutions they create.
[ { "version": "v1", "created": "Mon, 30 Jan 2017 17:01:31 GMT" }, { "version": "v2", "created": "Thu, 23 Mar 2017 04:19:25 GMT" } ]
1,490,313,600,000
[ [ "Glover", "Fred", "" ] ]