Dataset schema (column: type, observed range across rows):
  id: string, length 9 to 10
  submitter: string, length 5 to 47
  authors: string, length 5 to 1.72k
  title: string, length 11 to 234
  comments: string, length 1 to 491
  journal-ref: string, length 4 to 396
  doi: string, length 13 to 97
  report-no: string, length 4 to 138
  categories: string class, 1 distinct value
  license: string class, 9 distinct values
  abstract: string, length 29 to 3.66k
  versions: list, length 1 to 21
  update_date: int64, approx. 1,180B to 1,718B
  authors_parsed: sequence, length 1 to 98
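The update_date values in the records below are large int64 numbers (e.g. 1,606,694,400,000). They appear to be Unix timestamps in milliseconds; the following minimal sketch, assuming that encoding, converts the first record's value to a calendar date.

```python
# Hedged sketch: reading update_date as Unix epoch milliseconds (an assumption,
# consistent with the version dates stored alongside each record).
from datetime import datetime, timezone

update_date_ms = 1_606_694_400_000  # value stored for record 2011.13721 below
print(datetime.fromtimestamp(update_date_ms / 1000, tz=timezone.utc).date())  # 2020-11-30
```

The result matches that record's v1 creation date (27 Nov 2020) to within a few days, which supports the epoch-millisecond reading.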
2011.13721
Alexis De Colnet
Alexis de Colnet and Stefan Mengel
Lower Bounds for Approximate Knowledge Compilation
11 pages, including appendices
null
10.24963/ijcai.2020/254
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge compilation studies the trade-off between succinctness and efficiency of different representation languages. For many languages, there are known strong lower bounds on the representation size, but recent work shows that, for some languages, one can bypass these bounds using approximate compilation. The idea is to compile an approximation of the knowledge for which the number of errors can be controlled. We focus on circuits in deterministic decomposable negation normal form (d-DNNF), a compilation language suitable in contexts such as probabilistic reasoning, as it supports efficient model counting and probabilistic inference. Moreover, there are known size lower bounds for d-DNNF which one might be able to avoid by relaxing to approximation. In this paper we formalize two notions of approximation: weak approximation, which has been studied before in the decision diagram literature, and strong approximation, which has been used in recent algorithmic results. We then show lower bounds for approximation by d-DNNF, complementing the positive results from the literature.
[ { "version": "v1", "created": "Fri, 27 Nov 2020 13:11:32 GMT" } ]
1,606,694,400,000
[ [ "de Colnet", "Alexis", "" ], [ "Mengel", "Stefan", "" ] ]
2011.13782
Michael Luo Zhiyu
Rachit Dubey, Erin Grant, Michael Luo, Karthik Narasimhan, Thomas Griffiths
Connecting Context-specific Adaptation in Humans to Meta-learning
9 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Cognitive control, the ability of a system to adapt to the demands of a task, is an integral part of cognition. A widely accepted fact about cognitive control is that it is context-sensitive: Adults and children alike infer information about a task's demands from contextual cues and use these inferences to learn from ambiguous cues. However, the precise way in which people use contextual cues to guide adaptation to a new task remains poorly understood. This work connects the context-sensitive nature of cognitive control to a method for meta-learning with context-conditioned adaptation. We begin by identifying an essential difference between human learning and current approaches to meta-learning: In contrast to humans, existing meta-learning algorithms do not make use of task-specific contextual cues but instead rely exclusively on online feedback in the form of task-specific labels or rewards. To remedy this, we introduce a framework for using contextual information about a task to guide the initialization of task-specific models before adaptation to online feedback. We show how context-conditioned meta-learning can capture human behavior in a cognitive task and how it can be scaled to improve the speed of learning in various settings, including few-shot classification and low-sample reinforcement learning. Our work demonstrates that guiding meta-learning with task information can capture complex, human-like behavior, thereby deepening our understanding of cognitive control.
[ { "version": "v1", "created": "Fri, 27 Nov 2020 15:31:39 GMT" }, { "version": "v2", "created": "Tue, 1 Dec 2020 01:33:18 GMT" } ]
1,606,867,200,000
[ [ "Dubey", "Rachit", "" ], [ "Grant", "Erin", "" ], [ "Luo", "Michael", "" ], [ "Narasimhan", "Karthik", "" ], [ "Griffiths", "Thomas", "" ] ]
2011.14016
Alan Lindsay
Alan Lindsay, Bart Craenen, Sara Dalzel-Job, Robin L. Hill, Ronald P. A. Petrick
Investigating Human Response, Behaviour, and Preference in Joint-Task Interaction
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Human interaction relies on a wide range of signals, including non-verbal cues. In order to develop effective Explainable Planning (XAIP) agents it is important that we understand the range and utility of these communication channels. Our starting point is existing results from joint task interaction and their study in cognitive science. Our intention is that these lessons can inform the design of interaction agents -- including those using planning techniques -- whose behaviour is conditioned on the user's response, including affective measures of the user (i.e., explicitly incorporating the user's affective state within the planning model). We have identified several concepts at the intersection of plan-based agent behaviour and joint task interaction and have used these to design two agents: one reactive and the other partially predictive. We have designed an experiment in order to examine human behaviour and response as they interact with these agents. In this paper we present the designed study and the key questions that are being investigated. We also present the results from an empirical analysis where we examined the behaviour of the two agents for simulated users.
[ { "version": "v1", "created": "Fri, 27 Nov 2020 22:16:59 GMT" } ]
1,606,780,800,000
[ [ "Lindsay", "Alan", "" ], [ "Craenen", "Bart", "" ], [ "Dalzel-Job", "Sara", "" ], [ "Hill", "Robin L.", "" ], [ "Petrick", "Ronald P. A.", "" ] ]
2011.14124
Edward Lockhart
Edward Lockhart, Neil Burch, Nolan Bard, Sebastian Borgeaud, Tom Eccles, Lucas Smaira, Ray Smith
Human-Agent Cooperation in Bridge Bidding
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a human-compatible reinforcement-learning approach to a cooperative game, making use of a third-party hand-coded human-compatible bot to generate initial training data and to perform initial evaluation. Our learning approach consists of imitation learning, search, and policy iteration. Our trained agents achieve a new state-of-the-art for bridge bidding in three settings: an agent playing in partnership with a copy of itself; an agent partnering a pre-existing bot; and an agent partnering a human player.
[ { "version": "v1", "created": "Sat, 28 Nov 2020 12:37:02 GMT" } ]
1,606,780,800,000
[ [ "Lockhart", "Edward", "" ], [ "Burch", "Neil", "" ], [ "Bard", "Nolan", "" ], [ "Borgeaud", "Sebastian", "" ], [ "Eccles", "Tom", "" ], [ "Smaira", "Lucas", "" ], [ "Smith", "Ray", "" ] ]
2011.14475
Eduardo C\'esar Garrido-Merch\'an
Eduardo C. Garrido-Merch\'an and Martin Molina and Francisco M. Mendoza
An Artificial Consciousness Model and its relations with Philosophy of Mind
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to that of conscious beings. Throughout this document, a conscious model of an autonomous agent based on a global workspace architecture is presented. We describe how this agent is viewed from different perspectives in the philosophy of mind, drawing inspiration from their ideas. The goal of this model is to create autonomous agents able to navigate within an environment composed of multiple independent magnitudes, adapting to their surroundings in order to find the best possible position based on their inner preferences. The purpose of the model is to test the effectiveness of the many cognitive mechanisms it incorporates, such as an attention mechanism for magnitude selection, possession of inner feelings and preferences, use of a memory system to store beliefs and past experiences, and a global workspace which controls and integrates the information processed by all the subsystems of the model. We show in a large set of experiments how an autonomous agent can benefit from having a cognitive architecture such as the one described.
[ { "version": "v1", "created": "Mon, 30 Nov 2020 00:24:17 GMT" }, { "version": "v2", "created": "Tue, 1 Dec 2020 17:27:10 GMT" } ]
1,606,867,200,000
[ [ "Garrido-Merchán", "Eduardo C.", "" ], [ "Molina", "Martin", "" ], [ "Mendoza", "Francisco M.", "" ] ]
2011.15067
Marlene Berke
Marlene Berke, Mario Belledonne, and Julian Jara-Ettinger
Learning a metacognition for object perception
SVRHM workshop at NeurIPS
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Beyond representing the external world, humans also represent their own cognitive processes. In the context of perception, this metacognition helps us identify unreliable percepts, such as when we recognize that we are seeing an illusion. Here we propose MetaGen, a model for the unsupervised learning of metacognition. In MetaGen, metacognition is expressed as a generative model of how a perceptual system produces noisy percepts. Using basic principles of how the world works (such as object permanence, part of infants' core knowledge), MetaGen jointly infers the objects in the world causing the percepts and a representation of its own perceptual system. MetaGen can then use this metacognition to infer which objects are actually present in the world. On simulated data, we find that MetaGen quickly learns a metacognition and improves overall accuracy, outperforming models that lack a metacognition.
[ { "version": "v1", "created": "Mon, 30 Nov 2020 18:05:00 GMT" } ]
1,606,780,800,000
[ [ "Berke", "Marlene", "" ], [ "Belledonne", "Mario", "" ], [ "Jara-Ettinger", "Julian", "" ] ]
2012.00583
Xiaohan Cheng
Xiaohan Cheng
Obtain Employee Turnover Rate and Optimal Reduction Strategy Based On Neural Network and Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, human resources are an important part of the various resources of enterprises. For enterprises, high-loyalty, high-quality talent is often the core of their competitiveness. Therefore, it is of great practical significance to predict whether employees will leave and to reduce the employee turnover rate. First, this paper establishes a multi-layer perceptron model for predicting the employee turnover rate. A model based on Sarsa, a reinforcement learning algorithm, is then proposed to automatically generate a set of strategies to reduce the employee turnover rate. These are the strategies that reduce the employee turnover rate the most at the lowest cost from the perspective of the enterprise, and they can be used as a reference plan for the enterprise to optimize its employee system. The experimental results show that the algorithm can indeed improve the efficiency and accuracy of the specific strategies.
[ { "version": "v1", "created": "Tue, 1 Dec 2020 15:48:23 GMT" } ]
1,606,867,200,000
[ [ "Cheng", "Xiaohan", "" ] ]
2012.01410
Daniele Francesco Santamaria
Domenico Cantone, Carmelo Fabio Longo, Marianna Nicolosi-Asmundo, Daniele Francesco Santamaria, Corrado Santoro
Ontological Smart Contracts in OASIS: Ontology for Agents, Systems, and Integration of Services (Extended Version)
Please cite https://www.scopus.com/record/display.uri?eid=2-s2.0-85130258663&origin=resultslist
Intelligent Distributed Computing XIV, Studies in Computational Intelligence 1026, 2021
10.1007/978-3-030-96627-0_22
Chapter 22, pp. 237--247
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this contribution we extend an ontology for modelling agents and their interactions, called Ontology for Agents, Systems, and Integration of Services (in short, OASIS), with conditionals and ontological smart contracts (in short, OSCs). OSCs are ontological representations of smart contracts that allow one to establish responsibilities and authorizations among agents and to set agreements, whereas conditionals allow one to restrict and limit agent interactions, define activation mechanisms that trigger agent actions, and define constraints and contract terms on OSCs. Conditionals and OSCs, as defined in OASIS, are applied to extend digital public ledgers such as the blockchain, and the smart contracts implemented on them, with ontological capabilities. We also sketch the architecture of a framework based on the OASIS definition of OSCs that exploits the Ethereum platform and the InterPlanetary File System.
[ { "version": "v1", "created": "Wed, 2 Dec 2020 18:58:26 GMT" }, { "version": "v2", "created": "Fri, 10 Sep 2021 14:39:54 GMT" }, { "version": "v3", "created": "Tue, 14 Sep 2021 19:56:58 GMT" }, { "version": "v4", "created": "Tue, 20 Feb 2024 21:37:17 GMT" } ]
1,708,560,000,000
[ [ "Cantone", "Domenico", "" ], [ "Longo", "Carmelo Fabio", "" ], [ "Nicolosi-Asmundo", "Marianna", "" ], [ "Santamaria", "Daniele Francesco", "" ], [ "Santoro", "Corrado", "" ] ]
2012.01569
Uwe Aickelin
Hadi A. Khorshidi and Uwe Aickelin
Multicriteria Group Decision-Making Under Uncertainty Using Interval Data and Cloud Models
Journal of the Operational Research Society, 2020
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this study, we propose a multicriteria group decision making (MCGDM) algorithm under uncertainty where data is collected as intervals. The proposed MCGDM algorithm aggregates the data, determines the optimal weights for criteria and ranks alternatives with no further input. The intervals give flexibility to experts in assessing alternatives against criteria and provide an opportunity to gain maximum information. We also propose a novel method to aggregate expert judgements using cloud models. We introduce an experimental approach to check the validity of the aggregation method. After that, we use the aggregation method for an MCGDM problem. Here, we find the optimal weights for each criterion by proposing a bilevel optimisation model. Then, we extend the technique for order of preference by similarity to ideal solution (TOPSIS) for data based on cloud models to prioritise alternatives. As a result, the algorithm can gain information from decision makers with different levels of uncertainty and examine alternatives with no more information from decision-makers. The proposed MCGDM algorithm is implemented on a case study of a cybersecurity problem to illustrate its feasibility and effectiveness. The results verify the robustness and validity of the proposed MCGDM using sensitivity analysis and comparison with other existing algorithms.
[ { "version": "v1", "created": "Tue, 1 Dec 2020 06:34:48 GMT" } ]
1,607,040,000,000
[ [ "Khorshidi", "Hadi A.", "" ], [ "Aickelin", "Uwe", "" ] ]
2012.02194
Uwe Aickelin
Justin Kane Gunn, Hadi Akbarzadeh Khorshidi, Uwe Aickelin
Methods of ranking for aggregated fuzzy numbers from interval-valued data
2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper primarily presents two methods of ranking aggregated fuzzy numbers from intervals using the Interval Agreement Approach (IAA). The two proposed ranking methods within this study contain the combination and application of previously proposed similarity measures, along with attributes novel to that of aggregated fuzzy numbers from interval-valued data. The shortcomings of previous measures, along with the improvements of the proposed methods, are illustrated using both a synthetic and real-world application. The real-world application regards the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) algorithm, modified to include both the previous and newly proposed methods.
[ { "version": "v1", "created": "Thu, 3 Dec 2020 02:56:15 GMT" } ]
1,607,299,200,000
[ [ "Gunn", "Justin Kane", "" ], [ "Khorshidi", "Hadi Akbarzadeh", "" ], [ "Aickelin", "Uwe", "" ] ]
2012.02903
Christine Allen-Blanchette
Christine Allen-Blanchette and Kostas Daniilidis
Joint Estimation of Image Representations and their Lie Invariants
Resolves typographical errors
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Images encode both the state of the world and its content. The former is useful for tasks such as planning and control, and the latter for classification. The automatic extraction of this information is challenging because of the high dimensionality and entangled encoding inherent to the image representation. This article introduces two theoretical approaches aimed at the resolution of these challenges. The approaches allow for the interpolation and extrapolation of images from an image sequence by joint estimation of the image representation and the generators of the sequence dynamics. In the first approach, the image representations are learned using probabilistic PCA \cite{tipping1999probabilistic}. The linear-Gaussian conditional distributions allow for a closed-form analytical description of the latent distributions, but this assumes the underlying image manifold is a linear subspace. In the second approach, the image representations are learned using probabilistic nonlinear PCA, which relieves the linear manifold assumption at the cost of requiring a variational approximation of the latent distributions. In both approaches, the underlying dynamics of the image sequence are modelled explicitly to disentangle them from the image representations. The dynamics themselves are modelled with Lie group structure which enforces the desirable properties of smoothness and composability of inter-image transformations.
[ { "version": "v1", "created": "Sat, 5 Dec 2020 00:07:41 GMT" }, { "version": "v2", "created": "Tue, 8 Dec 2020 13:28:42 GMT" } ]
1,607,472,000,000
[ [ "Allen-Blanchette", "Christine", "" ], [ "Daniilidis", "Kostas", "" ] ]
2012.02947
Nikhil Krishnaswamy
Nikhil Krishnaswamy and James Pustejovsky
Neurosymbolic AI for Situated Language Understanding
18 pages + refs, 16 figures, presented at the 8th Annual Conference on Advances in Cognitive Systems (ACS), 2020
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, data-intensive AI, particularly the domain of natural language processing and understanding, has seen significant progress driven by the advent of large datasets and deep neural networks that have sidelined more classic AI approaches to the field. These systems can apparently demonstrate sophisticated linguistic understanding or generation capabilities, but often fail to transfer their skills to situations they have not encountered before. We argue that computational situated grounding provides a solution to some of these learning challenges by creating situational representations that both serve as a formal model of the salient phenomena, and contain rich amounts of exploitable, task-appropriate data for training new, flexible computational models. Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence, using multimodal contextual modeling of interactive situations, events, and object properties. We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges, including learning how to interact with object affordances, learning semantics for novel structures and configurations, and transferring such learned knowledge to new objects and situations.
[ { "version": "v1", "created": "Sat, 5 Dec 2020 05:03:28 GMT" } ]
1,607,385,600,000
[ [ "Krishnaswamy", "Nikhil", "" ], [ "Pustejovsky", "James", "" ] ]
2012.03058
Xingyu Zhao
Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn
BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
Preprint accepted by UAI2021. The final version to appear in the UAI2021 volume of Proceedings of Machine Learning Research
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop a novel Bayesian extension to the LIME framework, one of the most widely used approaches in XAI -- which we call BayLIME. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency in repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state-of-the-art (LIME, SHAP and GradCAM) by its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
[ { "version": "v1", "created": "Sat, 5 Dec 2020 15:41:52 GMT" }, { "version": "v2", "created": "Thu, 13 May 2021 20:17:32 GMT" }, { "version": "v3", "created": "Wed, 19 May 2021 12:28:47 GMT" }, { "version": "v4", "created": "Thu, 20 May 2021 07:46:30 GMT" }, { "version": "v5", "created": "Sat, 29 May 2021 07:49:00 GMT" } ]
1,622,505,600,000
[ [ "Zhao", "Xingyu", "" ], [ "Huang", "Wei", "" ], [ "Huang", "Xiaowei", "" ], [ "Robu", "Valentin", "" ], [ "Flynn", "David", "" ] ]
2012.03119
Nicolas Prevot
Nicolas Prevot
GpuShareSat: a SAT solver using the GPU for clause sharing
13 pages, 4 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We describe a SAT solver using both the GPU (CUDA) and the CPU with a new clause exchange strategy. The CPU runs a classic multithreaded CDCL SAT solver. Each CPU thread exports all the clauses it learns to the GPU. The GPU makes heavy use of bitwise operations. It notices when a clause would have been used by a CPU thread and notifies that thread, which then imports that clause. This relies on the GPU repeatedly testing millions of clauses against hundreds of assignments. All the clauses are tested independently from each other (which allows the GPU's massively parallel approach), but against all the assignments at once, using bitwise operations. This allows CPU threads to import only those clauses which would have been useful for them. Our solver is based upon glucose-syrup. Experiments show that this leads to a strong performance improvement, with 22 more instances solved on the SAT 2020 competition benchmarks than glucose-syrup.
[ { "version": "v1", "created": "Sat, 5 Dec 2020 20:57:23 GMT" } ]
1,607,385,600,000
[ [ "Prevot", "Nicolas", "" ] ]
2012.03190
Xuejiao Tang
Xuejiao Tang, Jiong Qiu, Ruijun Chen, Wenbin Zhang, Vasileios Iosifidis, Zhen Liu, Wei Meng, Mingli Zhang and Ji Zhang
A Data-driven Human Responsibility Management System
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
An ideal safe workplace is described as a place where staff fulfill their responsibilities in a well-organized order, potentially hazardous events are monitored in real time, and the number of accidents and the resulting damage are minimized. However, occupation-related deaths and injuries are still increasing and have received much attention in recent decades due to the lack of comprehensive safety management. A smart safety management system is therefore urgently needed, in which staff are instructed to fulfill their responsibilities, risk evaluations are automated, and staff and departments are alerted when needed. In this paper, a smart system for safety management in the workplace based on responsibility big data analysis and the Internet of Things (IoT) is proposed. The real-world implementation and assessment demonstrate that the proposed system has superior accountability performance and improves responsibility fulfillment through real-time supervision and self-reminders.
[ { "version": "v1", "created": "Sun, 6 Dec 2020 06:16:51 GMT" } ]
1,607,385,600,000
[ [ "Tang", "Xuejiao", "" ], [ "Qiu", "Jiong", "" ], [ "Chen", "Ruijun", "" ], [ "Zhang", "Wenbin", "" ], [ "Iosifidis", "Vasileios", "" ], [ "Liu", "Zhen", "" ], [ "Meng", "Wei", "" ], [ "Zhang", "Mingli", "" ], [ "Zhang", "Ji", "" ] ]
2012.03204
Hangtian Jia
Hangtian Jia, Yujing Hu, Yingfeng Chen, Chunxu Ren, Tangjie Lv, Changjie Fan, Chongjie Zhang
Fever Basketball: A Complex, Flexible, and Asynchronized Sports Game Environment for Multi-agent Reinforcement Learning
7 pages,12 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of deep reinforcement learning (DRL) has benefited from the emergence of a variety of game environments where new challenging problems are proposed and new algorithms can be tested safely and quickly, such as board games, RTS, FPS, and MOBA games. However, many existing environments lack complexity and flexibility, and assume that actions are executed synchronously in multi-agent settings, which makes them less valuable. We introduce the Fever Basketball game, a novel reinforcement learning environment in which agents are trained to play a basketball game. It is a complex and challenging environment that supports multiple characters, multiple positions, and both single-agent and multi-agent player control modes. In addition, to better simulate real-world basketball games, the execution time of actions differs among players, which makes Fever Basketball a novel asynchronized environment. We evaluate commonly used multi-agent algorithms of both independent learners and joint-action learners in three game scenarios with varying difficulties, and heuristically propose two baseline methods to diminish the extra non-stationarity brought by asynchronism in the Fever Basketball benchmarks. Besides, we propose an integrated curricula training (ICT) framework to better handle Fever Basketball problems, which includes several game-rule-based cascading curricula learners and a coordination curricula switcher focusing on enhancing coordination within the team. The results show that the game remains challenging and can be used as a benchmark environment for studies of long time horizons, sparse rewards, credit assignment, non-stationarity, etc., in multi-agent settings.
[ { "version": "v1", "created": "Sun, 6 Dec 2020 07:51:59 GMT" } ]
1,607,385,600,000
[ [ "Jia", "Hangtian", "" ], [ "Hu", "Yujing", "" ], [ "Chen", "Yingfeng", "" ], [ "Ren", "Chunxu", "" ], [ "Lv", "Tangjie", "" ], [ "Fan", "Changjie", "" ], [ "Zhang", "Chongjie", "" ] ]
2012.03527
Sandi Baressi \v{S}egota
Nikola An{\dj}eli\'c, Sandi Baressi \v{S}egota, Ivan Lorencin and Zlatan Car
Estimation of Gas Turbine Shaft Torque and Fuel Flow of a CODLAG Propulsion System Using Genetic Programming Algorithm
25 pages, 5 figures, 7 tables
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, the publicly available dataset on condition-based maintenance of a combined diesel-electric and gas (CODLAG) propulsion system for ships has been utilized to obtain symbolic expressions which could estimate gas turbine shaft torque and fuel flow using a genetic programming (GP) algorithm. The entire dataset consists of 11934 samples that were divided into training and testing portions in an 80:20 ratio. The training dataset, used to train the GP algorithm to obtain symbolic expressions for gas turbine shaft torque and fuel flow estimation, consisted of 9548 samples. The best symbolic expressions for gas turbine shaft torque and fuel flow estimation were selected based on their $R^2$ scores, obtained by applying the testing portion of the dataset to the aforementioned symbolic expressions. The testing portion of the dataset consisted of 2386 samples. The three best symbolic expressions obtained for gas turbine shaft torque estimation generated $R^2$ scores of 0.999201, 0.999296, and 0.999374, respectively. The three best symbolic expressions obtained for fuel flow estimation generated $R^2$ scores of 0.995495, 0.996465, and 0.996487, respectively.
[ { "version": "v1", "created": "Mon, 7 Dec 2020 08:39:58 GMT" } ]
1,607,385,600,000
[ [ "Anđelić", "Nikola", "" ], [ "Šegota", "Sandi Baressi", "" ], [ "Lorencin", "Ivan", "" ], [ "Car", "Zlatan", "" ] ]
2012.03624
Geoffrey Harris
Geoff Harris
Improving Constraint Satisfaction Algorithm Efficiency for the AllDifferent Constraint
*sigh* - it has been gently and kindly pointed out to me that I have simply re-discovered the channelling of constraints across alternate problem specifications. Gosh this is oddly amusing albeit embarrassing!
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Combinatorial problems stated as Constraint Satisfaction Problems (CSP) are examined. It is shown by example that any algorithm designed for the original CSP, and involving the AllDifferent constraint, has at least the same level of efficacy when simultaneously applied to both the original and its complementary problem. The 1-to-1 mapping employed to transform a CSP into its complementary problem, which is also a CSP, is introduced. This "Dual CSP" method and its application are outlined. The analysis of several random problem instances demonstrates the benefits of this method for variable domain reduction compared to the standard approach to CSP. Extensions to additional constraints other than AllDifferent, as well as the use of hybrid algorithms, are proposed as candidates for this Dual CSP method.
[ { "version": "v1", "created": "Mon, 7 Dec 2020 12:14:55 GMT" }, { "version": "v2", "created": "Sun, 13 Dec 2020 09:59:33 GMT" } ]
1,607,990,400,000
[ [ "Harris", "Geoff", "" ] ]
2012.03721
Uwe Aickelin
Justin Kane Gunn, Hadi Akbarzadeh Khorshidi, Uwe Aickelin
Similarity measure for aggregated fuzzy numbers from interval-valued data
Soft Computing Letters, 100002
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents a method to compute the degree of similarity between two aggregated fuzzy numbers from intervals using the Interval Agreement Approach (IAA). The similarity measure proposed within this study contains several features and attributes, some of which are novel to aggregated fuzzy numbers. The attributes completely redefined or modified within this study include area, perimeter, centroids, quartiles and the agreement ratio. The recommended weighting for each feature has been learned using Principal Component Analysis (PCA). Furthermore, an illustrative example is provided to detail the application and potential future use of the similarity measure.
[ { "version": "v1", "created": "Fri, 4 Dec 2020 03:44:40 GMT" } ]
1,607,385,600,000
[ [ "Gunn", "Justin Kane", "" ], [ "Khorshidi", "Hadi Akbarzadeh", "" ], [ "Aickelin", "Uwe", "" ] ]
2012.04216
Ben Hutchinson
Angie Peng and Jeff Naecker and Ben Hutchinson and Andrew Smart and Nyalleng Moorosi
Fairness Preferences, Actual and Hypothetical: A Study of Crowdworker Incentives
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How should we decide which fairness criteria or definitions to adopt in machine learning systems? To answer this question, we must study the fairness preferences of actual users of machine learning systems. Stringent parity constraints on treatment or impact can come with trade-offs, and may not even be preferred by the social groups in question (Zafar et al., 2017). Thus it might be beneficial to elicit what the group's preferences are, rather than rely on a priori defined mathematical fairness constraints. Simply asking users for self-reported rankings is challenging because research has shown that there are often gaps between people's stated and actual preferences (Bernheim et al., 2013). This paper outlines a research program and experimental designs for investigating these questions. Participants in the experiments are invited to perform a set of tasks in exchange for a base payment--they are told upfront that they may receive a bonus later on, and the bonus could depend on some combination of output quantity and quality. The same group of workers then votes on a bonus payment structure, to elicit preferences. The voting is hypothetical (not tied to an outcome) for half the group and actual (tied to the actual payment outcome) for the other half, so that we can understand the relation between a group's actual preferences and hypothetical (stated) preferences. Connections and lessons from fairness in machine learning are explored.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 05:00:57 GMT" } ]
1,607,472,000,000
[ [ "Peng", "Angie", "" ], [ "Naecker", "Jeff", "" ], [ "Hutchinson", "Ben", "" ], [ "Smart", "Andrew", "" ], [ "Moorosi", "Nyalleng", "" ] ]
2012.04424
Stefan Mengel
Daniel Le Berre, Pierre Marquis, Stefan Mengel, Romain Wallon
On Irrelevant Literals in Pseudo-Boolean Constraint Learning
published at IJCAI 2020
null
10.24963/ijcai.2020/160
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning pseudo-Boolean (PB) constraints in PB solvers exploiting cutting-planes-based inference is not as well understood as clause learning in conflict-driven clause learning solvers. In this paper, we show that PB constraints derived using cutting planes may contain \emph{irrelevant literals}, i.e., literals whose assigned values (whatever they are) never change the truth value of the constraint. Such literals may lead to infer constraints that are weaker than they should be, impacting the size of the proof built by the solver, and thus also affecting its performance. This suggests that current implementations of PB solvers based on cutting planes should be reconsidered to prevent the generation of irrelevant literals. Indeed, detecting and removing irrelevant literals is too expensive in practice to be considered as an option (the associated problem is NP-hard).
[ { "version": "v1", "created": "Tue, 8 Dec 2020 13:52:09 GMT" } ]
1,607,472,000,000
[ [ "Berre", "Danel Le", "" ], [ "Marquis", "Pierre", "" ], [ "Mengel", "Stefan", "" ], [ "Wallon", "Romain", "" ] ]
2012.04442
Michael Neumann
Michael Neumann, Sebastian Koralewski and Michael Beetz
URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anticipating what might happen as a result of an action is an essential ability humans have in order to perform tasks effectively. On the other hand, robots' capabilities in this regard are quite lacking. While machine learning is used to increase the ability of prospection, it is still limited in novel situations. One possibility for improving the prospection ability of robots is the simulation of imagined motions and the physical results of these actions. Therefore, we present URoboSim, a robot simulator that allows robots to perform tasks as mental simulations before performing them in reality. We show the capabilities of URoboSim in the form of mental simulations, generating data for machine learning, and usage as the belief state of a real robot.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 14:23:24 GMT" } ]
1,607,472,000,000
[ [ "Neumann", "Michael", "" ], [ "Koralewski", "Sebastian", "" ], [ "Beetz", "Michael", "" ] ]
2012.04626
Marc Rigter
Marc Rigter, Bruno Lacerda, Nick Hawes
Minimax Regret Optimisation for Robust Planning in Uncertain Markov Decision Processes
Full version of AAAI 2021 paper, with corrigendum attached that describes error in original paper
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The parameters for a Markov Decision Process (MDP) often cannot be specified exactly. Uncertain MDPs (UMDPs) capture this model ambiguity by defining sets which the parameters belong to. Minimax regret has been proposed as an objective for planning in UMDPs to find robust policies which are not overly conservative. In this work, we focus on planning for Stochastic Shortest Path (SSP) UMDPs with uncertain cost and transition functions. We introduce a Bellman equation to compute the regret for a policy. We propose a dynamic programming algorithm that utilises the regret Bellman equation, and show that it optimises minimax regret exactly for UMDPs with independent uncertainties. For coupled uncertainties, we extend our approach to use options to enable a trade off between computation and solution quality. We evaluate our approach on both synthetic and real-world domains, showing that it significantly outperforms existing baselines.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 18:48:14 GMT" }, { "version": "v2", "created": "Sun, 12 Feb 2023 15:43:28 GMT" } ]
1,676,332,800,000
[ [ "Rigter", "Marc", "" ], [ "Lacerda", "Bruno", "" ], [ "Hawes", "Nick", "" ] ]
2012.04751
Sebastian Risi
Djordje Grbic, Rasmus Berg Palm, Elias Najarro, Claire Glanois, Sebastian Risi
EvoCraft: A New Challenge for Open-Endedness
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces EvoCraft, a framework for Minecraft designed to study open-ended algorithms. We introduce an API that provides an open-source Python interface for communicating with Minecraft to place and track blocks. In contrast to previous work in Minecraft that focused on learning to play the game, the grand challenge we pose here is to automatically search for increasingly complex artifacts in an open-ended fashion. Compared to other environments used to study open-endedness, Minecraft allows the construction of almost any kind of structure, including actuated machines with circuits and mechanical components. We present initial baseline results in evolving simple Minecraft creations through both interactive and automated evolution. While evolution succeeds when tasked to grow a structure towards a specific target, it is unable to find a solution when rewarded for creating a simple machine that moves. Thus, EvoCraft offers a challenging new environment for automated search methods (such as evolution) to find complex artifacts that we hope will spur the development of more open-ended algorithms. A Python implementation of the EvoCraft framework is available at: https://github.com/real-itu/Evocraft-py.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 21:36:18 GMT" } ]
1,607,558,400,000
[ [ "Grbic", "Djordje", "" ], [ "Palm", "Rasmus Berg", "" ], [ "Najarro", "Elias", "" ], [ "Glanois", "Claire", "" ], [ "Risi", "Sebastian", "" ] ]
2012.04759
Yiming Xu
Yiming Xu, Diego Klabjan
Concept Drift and Covariate Shift Detection Ensemble with Lagged Labels
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In model serving, having one fixed model during the entire, often life-long, inference process is usually detrimental to model performance, as the data distribution evolves over time, resulting in a lack of reliability of a model trained on historical data. It is important to detect changes and retrain the model in time. Existing methods generally have three weaknesses: 1) using only the classification error rate as a signal, 2) assuming ground-truth labels are immediately available after features from samples are received, and 3) being unable to decide what data to use to retrain the model when a change occurs. We address the first problem by utilizing six different signals to capture a wide range of characteristics of the data, and we address the second problem by allowing a lag of labels, where the labels of corresponding features are received after a lag in time. For the third problem, our proposed method automatically decides what data to use for retraining based on the signals. Extensive experiments on structured and unstructured data for different types of data changes establish that our method consistently outperforms the state-of-the-art methods by a large margin.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 21:57:05 GMT" }, { "version": "v2", "created": "Sat, 12 Dec 2020 20:48:31 GMT" }, { "version": "v3", "created": "Tue, 15 Dec 2020 03:49:59 GMT" } ]
1,608,076,800,000
[ [ "Xu", "Yiming", "" ], [ "Klabjan", "Diego", "" ] ]
2012.05123
Sander Beckers
Sander Beckers
The Counterfactual NESS Definition of Causation
Preprint of accepted AAAI2021 paper
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In previous work with Joost Vennekens I proposed a definition of actual causation that is based on certain plausible principles, thereby allowing the debate on causation to shift away from its heavy focus on examples towards a more systematic analysis. This paper contributes to that analysis in two ways. First, I show that our definition is in fact a formalization of Wright's famous NESS definition of causation combined with a counterfactual difference-making condition. This means that our definition integrates two highly influential approaches to causation that are claimed to stand in opposition to each other. Second, I modify our definition to offer a substantial improvement: I weaken the difference-making condition in such a way that it avoids the problematic analysis of cases of preemption. The resulting Counterfactual NESS definition of causation forms a natural compromise between counterfactual approaches and the NESS approach.
[ { "version": "v1", "created": "Wed, 9 Dec 2020 15:57:56 GMT" }, { "version": "v2", "created": "Tue, 15 Dec 2020 21:46:12 GMT" } ]
1,608,163,200,000
[ [ "Beckers", "Sander", "" ] ]
2012.05603
Sander Beckers
Sander Beckers
Equivalent Causal Models
Preprint of accepted AAAI2021 paper
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this paper is to offer the first systematic exploration and definition of equivalent causal models in the context where both models are not made up of the same variables. The idea is that two models are equivalent when they agree on all "essential" causal information that can be expressed using their common variables. I do so by focussing on the two main features of causal models, namely their structural relations and their functional relations. In particular, I define several relations of causal ancestry and several relations of causal sufficiency, and require that the most general of these relations are preserved across equivalent models.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 11:43:35 GMT" } ]
1,607,644,800,000
[ [ "Beckers", "Sander", "" ] ]
2012.05766
Antonio Rago
Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago and Francesca Toni
Deep Argumentative Explanations
16 pages, 10 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs). We propose a novel framework for obtaining (local) explanations from NNs while providing transparency about their inner workings, and show how to deploy it for various neural architectures and tasks. We refer to our novel explanations collectively as Deep Argumentative eXplanations (DAXs in short), given that they reflect the deep structure of the underlying NNs and that they are defined in terms of notions from computational argumentation, a form of symbolic AI offering useful reasoning abstractions for explanation. We evaluate DAXs empirically showing that they exhibit deep fidelity and low computational cost. We also conduct human experiments indicating that DAXs are comprehensible to humans and align with their judgement, while also being competitive, in terms of user acceptance, with some existing approaches to XAI that also have an argumentative spirit.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 15:55:09 GMT" }, { "version": "v2", "created": "Mon, 1 Mar 2021 16:46:05 GMT" }, { "version": "v3", "created": "Wed, 10 Mar 2021 17:12:30 GMT" }, { "version": "v4", "created": "Mon, 14 Jun 2021 12:29:14 GMT" } ]
1,623,715,200,000
[ [ "Albini", "Emanuele", "" ], [ "Lertvittayakumjorn", "Piyawat", "" ], [ "Rago", "Antonio", "" ], [ "Toni", "Francesca", "" ] ]
2012.05773
Antonio Rago
Antonio Rago, Emanuele Albini, Pietro Baroni and Francesca Toni
Influence-Driven Explanations for Bayesian Network Classifiers
11 pages, 2 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models. We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than just the input and output variables as is standard practice. The proposed influence-driven explanations (IDXs) for BCs are systematically generated using the causal relationships between variables within the BC, called influences, which are then categorised by logical requirements, called relation properties, according to their behaviour. These relation properties both provide guarantees beyond heuristic explanation methods and allow the information underpinning an explanation to be tailored to a particular context's and user's requirements, e.g., IDXs may be dialectical or counterfactual. We demonstrate IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature. We evaluate IDXs with theoretical and empirical analyses, demonstrating their considerable advantages when compared with existing explanation methods.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 16:00:51 GMT" }, { "version": "v2", "created": "Mon, 1 Mar 2021 16:54:24 GMT" }, { "version": "v3", "created": "Wed, 10 Mar 2021 17:04:12 GMT" } ]
1,615,420,800,000
[ [ "Rago", "Antonio", "" ], [ "Albini", "Emanuele", "" ], [ "Baroni", "Pietro", "" ], [ "Toni", "Francesca", "" ] ]
2012.05860
Daoming Zong
Daoming Zong and Shiliang Sun
GNN-XML: Graph Neural Networks for Extreme Multi-label Text Classification
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Extreme multi-label text classification (XMTC) aims to tag a text instance with the most relevant subset of labels from an extremely large label set. XMTC has attracted much recent attention due to massive label sets yielded by modern applications, such as news annotation and product recommendation. The main challenges of XMTC are the data scalability and sparsity, thereby leading to two issues: i) the intractability to scale to the extreme label setting, ii) the presence of long-tailed label distribution, implying that a large fraction of labels have few positive training instances. To overcome these problems, we propose GNN-XML, a scalable graph neural network framework tailored for XMTC problems. Specifically, we exploit label correlations via mining their co-occurrence patterns and build a label graph based on the correlation matrix. We then conduct the attributed graph clustering by performing graph convolution with a low-pass graph filter to jointly model label dependencies and label features, which induces semantic label clusters. We further propose a bilateral-branch graph isomorphism network to decouple representation learning and classifier learning for better modeling tail labels. Experimental results on multiple benchmark datasets show that GNN-XML significantly outperforms state-of-the-art methods while maintaining comparable prediction efficiency and model size.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 18:18:34 GMT" } ]
1,607,644,800,000
[ [ "Zong", "Daoming", "" ], [ "Sun", "Shiliang", "" ] ]
2012.05893
Sharada Mohanty
Sharada Mohanty, Erik Nygren, Florian Laurent, Manuel Schneider, Christian Scheller, Nilabha Bhattacharya, Jeremy Watson, Adrian Egli, Christian Eichenberger, Christian Baumberger, Gereon Vienken, Irene Sturm, Guillaume Sartoretti, Giacomo Spigler
Flatland-RL : Multi-Agent Reinforcement Learning on Trains
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Efficient automated scheduling of trains remains a major challenge for modern railway systems. The underlying vehicle rescheduling problem (VRSP) has been a major focus of Operations Research (OR) for decades. Traditional approaches use complex simulators to study the VRSP, where experimenting with a broad range of novel ideas is time consuming and has a huge computational overhead. In this paper, we introduce a two-dimensional simplified grid environment called "Flatland" that allows for faster experimentation. Flatland not only reduces the complexity of the full physical simulation, but also provides an easy-to-use interface to test novel approaches for the VRSP, such as Reinforcement Learning (RL) and Imitation Learning (IL). In order to probe the potential of Machine Learning (ML) research on Flatland, we (1) ran a first series of RL and IL experiments and (2) designed and executed a public benchmark at NeurIPS 2020 to engage a large community of researchers to work on this problem. Our own experimental results, on the one hand, demonstrate that ML has potential in solving the VRSP on Flatland. On the other hand, we identify key topics that need further research. Overall, the Flatland environment has proven to be a robust and valuable framework to investigate the VRSP for railway networks. Our experiments provide a good starting point for further research and for the participants of the NeurIPS 2020 Flatland Benchmark. All of these efforts together have the potential to have a substantial impact on shaping the mobility of the future.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 18:54:27 GMT" }, { "version": "v2", "created": "Fri, 11 Dec 2020 14:51:22 GMT" } ]
1,607,904,000,000
[ [ "Mohanty", "Sharada", "" ], [ "Nygren", "Erik", "" ], [ "Laurent", "Florian", "" ], [ "Schneider", "Manuel", "" ], [ "Scheller", "Christian", "" ], [ "Bhattacharya", "Nilabha", "" ], [ "Watson", "Jeremy", "" ], [ "Egli", "Adrian", "" ], [ "Eichenberger", "Christian", "" ], [ "Baumberger", "Christian", "" ], [ "Vienken", "Gereon", "" ], [ "Sturm", "Irene", "" ], [ "Sartoretti", "Guillaume", "" ], [ "Spigler", "Giacomo", "" ] ]
2012.05997
Atefeh Keshavarzi Zafarghandi
Atefeh Keshavarzi Zafarghandi, Rineke Verbrugge and Bart Verheij
Strong Admissibility for Abstract Dialectical Frameworks
9 pages, 3 Figures, SAC '21 conference: The 36th ACM/SIGAPP Symposium on Applied Computing
null
10.1145/3412841.3441962
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation allowing general logical satisfaction conditions. Different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. However, the notion of strongly admissible semantics studied for abstract argumentation frameworks has not yet been introduced for ADFs. In the current work we present the concept of strong admissibility of interpretations for ADFs. Further, we show that strongly admissible interpretations of ADFs form a lattice with the grounded interpretation as top element.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 21:50:35 GMT" } ]
1,607,904,000,000
[ [ "Zafarghandi", "Atefeh Keshavarzi", "" ], [ "Verbrugge", "Rineke", "" ], [ "Verheij", "Bart", "" ] ]
2012.06000
Thomas P Quinn
Thomas P. Quinn, Stephan Jacobs, Manisha Senadeera, Vuong Le, Simon Coghlan
The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Our title alludes to the three Christmas ghosts encountered by Ebenezer Scrooge in \textit{A Christmas Carol}, who guide Ebenezer through the past, present, and future of Christmas holiday events. Similarly, our article will take readers through a journey of the past, present, and future of medical AI. In doing so, we focus on the crux of modern machine learning: the reliance on powerful but intrinsically opaque models. When applied to the healthcare domain, these models fail to meet the needs for transparency that their clinician and patient end-users require. We review the implications of this failure, and argue that opaque models (1) lack quality assurance, (2) fail to elicit trust, and (3) restrict physician-patient dialogue. We then discuss how upholding transparency in all aspects of model design and model validation can help ensure the reliability of medical AI.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 22:22:30 GMT" } ]
1,607,904,000,000
[ [ "Quinn", "Thomas P.", "" ], [ "Jacobs", "Stephan", "" ], [ "Senadeera", "Manisha", "" ], [ "Le", "Vuong", "" ], [ "Coghlan", "Simon", "" ] ]
2012.06005
Taoan Huang
Taoan Huang, Bistra Dilkina, Sven Koenig
Learning to Resolve Conflicts for Multi-Agent Path Finding with Conflict-Based Search
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conflict-Based Search (CBS) is a state-of-the-art algorithm for multi-agent path finding. At the high level, CBS repeatedly detects conflicts and resolves one of them by splitting the current problem into two subproblems. Previous work chooses the conflict to resolve by categorizing the conflict into three classes and always picking a conflict from the highest-priority class. In this work, we propose an oracle for conflict selection that results in smaller search tree sizes than the one used in previous work. However, the computation of the oracle is slow. Thus, we propose a machine-learning framework for conflict selection that observes the decisions made by the oracle and learns a conflict-selection strategy represented by a linear ranking function that imitates the oracle's decisions accurately and quickly. Experiments on benchmark maps indicate that our method significantly improves the success rates, the search tree sizes and runtimes over the current state-of-the-art CBS solver.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 22:44:35 GMT" } ]
1,607,904,000,000
[ [ "Huang", "Taoan", "" ], [ "Dilkina", "Bistra", "" ], [ "Koenig", "Sven", "" ] ]
2012.06008
Liang Han
Liang Han, Zhaozheng Yin, Zhurong Xia, Mingqian Tang, Rong Jin
Price Suggestion for Online Second-hand Items with Texts and Images
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an intelligent price suggestion system for online second-hand listings based on their uploaded images and text descriptions. The goal of price prediction is to help sellers set effective and reasonable prices for their second-hand items with the images and text descriptions uploaded to the online platforms. Specifically, we design a multi-modal price suggestion system which takes as input the extracted visual and textual features along with some statistical item features collected from the second-hand item shopping platform. A binary classification model determines whether the image and text of an uploaded second-hand item are qualified for a reasonable price suggestion, and a regression model provides price suggestions for second-hand items with qualified images and text descriptions. To satisfy different demands, two different constraints are added into the joint training of the classification model and the regression model. Moreover, a customized loss function is designed for optimizing the regression model to provide price suggestions for second-hand items, which can not only maximize the gain of the sellers but also facilitate the online transaction. We also derive a set of metrics to better evaluate the proposed price suggestion system. Extensive experiments on a large real-world dataset demonstrate the effectiveness of the proposed multi-modal price suggestion system.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 22:50:42 GMT" } ]
1,607,904,000,000
[ [ "Han", "Liang", "" ], [ "Yin", "Zhaozheng", "" ], [ "Xia", "Zhurong", "" ], [ "Tang", "Mingqian", "" ], [ "Jin", "Rong", "" ] ]
2012.06157
Rupam Acharyya
Ankani Chattoraj, Rupam Acharyya, Shouman Das, Md. Iftekhar Tanveer, Ehsan Hoque
Fairness in Rating Prediction by Awareness of Verbal and Gesture Quality of Public Speeches
null
null
null
null
cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
The role of verbal and non-verbal cues towards great public speaking has been a topic of exploration for many decades. We identify a commonality across present theories, the element of "variety or heterogeneity" in channels or modes of communication (e.g. resorting to stories, scientific facts, emotional connections, facial expressions etc.) which is essential for effectively communicating information. We use this observation to formalize a novel HEterogeneity Metric, HEM, that quantifies the quality of a talk both in the verbal and non-verbal domain (transcript and facial gestures). We use TED talks as an input repository of public speeches because it consists of speakers from a diverse community besides having a wide outreach. We show that there is an interesting relationship between HEM and the ratings of TED talks given to speakers by viewers. It emphasizes that HEM inherently and successfully represents the quality of a talk based on "variety or heterogeneity". Further, we also discover that HEM successfully captures the prevalent bias in ratings with respect to race and gender, that we call sensitive attributes (because prediction based on these might result in unfair outcome). We incorporate the HEM metric into the loss function of a neural network with the goal to reduce unfairness in rating predictions with respect to race and gender. Our results show that the modified loss function improves fairness in prediction without considerably affecting prediction accuracy of the neural network. Our work ties together a novel metric for public speeches in both verbal and non-verbal domain with the computational power of a neural network to design a fair prediction system for speakers.
[ { "version": "v1", "created": "Fri, 11 Dec 2020 06:36:55 GMT" }, { "version": "v2", "created": "Wed, 16 Dec 2020 20:48:35 GMT" }, { "version": "v3", "created": "Tue, 16 Nov 2021 04:59:04 GMT" } ]
1,637,107,200,000
[ [ "Chattoraj", "Ankani", "" ], [ "Acharyya", "Rupam", "" ], [ "Das", "Shouman", "" ], [ "Tanveer", "Md. Iftekhar", "" ], [ "Hoque", "Ehsan", "" ] ]
2012.06306
Simon Gottschalk
Simon Gottschalk and Elena Demidova
EventKG+BT: Generation of Interactive Biography Timelines from a Knowledge Graph
ESWC 2020 Satellite Events pp 91-97
null
10.1007/978-3-030-62327-2_16
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Research on notable accomplishments and important events in the life of people of public interest usually requires close reading of long encyclopedic or biographical sources, which is a tedious and time-consuming task. Whereas semantic reference sources, such as the EventKG knowledge graph, provide structured representations of relevant facts, they often include hundreds of events and temporal relations for particular entities. In this paper, we present EventKG+BT - a timeline generation system that creates concise and interactive spatio-temporal representations of biographies from a knowledge graph using distant supervision.
[ { "version": "v1", "created": "Fri, 4 Dec 2020 13:06:27 GMT" } ]
1,607,904,000,000
[ [ "Gottschalk", "Simon", "" ], [ "Demidova", "Elena", "" ] ]
2012.06344
Raffaele Marino
Raffaele Marino
Learning from Survey Propagation: a Neural Network for MAX-E-$3$-SAT
null
Mach. Learn.: Sci. Technol. 2 (2021) 035032
10.1088/2632-2153/ac0496
null
cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Many natural optimization problems are NP-hard, which implies that they are probably hard to solve exactly in the worst case. However, it suffices to get reasonably good solutions for all (or even most) instances in practice. This paper presents a new algorithm for computing approximate solutions in $\Theta(N)$ for the Maximum Exact 3-Satisfiability (MAX-E-$3$-SAT) problem by using deep learning methodology. This methodology allows us to create a learning algorithm able to fix Boolean variables by using local information obtained by the Survey Propagation algorithm. By performing an accurate analysis on random CNF instances of MAX-E-$3$-SAT with several Boolean variables, we show that this new algorithm, avoiding any decimation strategy, can build assignments better than a random one, even if the convergence of the messages is not found. Although this algorithm is not competitive with state-of-the-art Maximum Satisfiability (MAX-SAT) solvers, it can solve substantially larger and more complicated problems than it ever saw during training.
[ { "version": "v1", "created": "Thu, 10 Dec 2020 07:59:54 GMT" }, { "version": "v2", "created": "Sun, 14 Feb 2021 09:22:57 GMT" } ]
1,663,027,200,000
[ [ "Marino", "Raffaele", "" ] ]
2012.06474
Alessandro Zonta
A. Zonta, S.K. Smit and A.E. Eiben
Generating Human-Like Movement: A Comparison Between Two Approaches Based on Environmental Features
31 pages, 16 figures, submitted to Expert Systems with Applications
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Modelling realistic human behaviours in simulation is an ongoing challenge that resides between several fields like social sciences, philosophy, and artificial intelligence. Human movement is a special type of behaviour driven by intent (e.g. to get groceries) and the surrounding environment (e.g. curiosity to see new interesting places). Services available online and offline do not normally consider the environment when planning a path, which is decisive especially on a leisure trip. Two novel algorithms are presented to generate human-like trajectories based on environmental features. The Attraction-Based A* algorithm includes information from the environmental features in its computation, while the Feature-Based A* algorithm additionally injects information from the real trajectories into its computation. The human-likeness aspect has been tested by a human expert who judged the generated trajectories as realistic. This paper presents a comparison between the two approaches on some key metrics like efficiency, efficacy, and hyper-parameter sensitivity. We show how, despite generating trajectories that are closer to the real ones according to our predefined metrics, the Feature-Based A* algorithm falls short in time efficiency compared to the Attraction-Based A* algorithm, hindering the usability of the model in the real world.
[ { "version": "v1", "created": "Fri, 11 Dec 2020 16:45:32 GMT" } ]
1,607,904,000,000
[ [ "Zonta", "A.", "" ], [ "Smit", "S. K.", "" ], [ "Eiben", "A. E.", "" ] ]
2012.06686
Raymond Anneborg
Raymond Anneborg
Computing Machinery and Knowledge
7 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this paper is to discuss the possibilities for computing machinery, or AI agents, to know and to possess knowledge. This is done mainly from a virtue epistemology perspective and definition of knowledge. However, this inquiry also sheds light on the human condition, what it means for a human to know, and to possess knowledge. The paper argues that it is possible for an AI agent to know and examines this both from the current state of the art in artificial intelligence and from the perspective of what future AI development might bring in terms of superintelligent AI agents.
[ { "version": "v1", "created": "Sat, 31 Oct 2020 09:27:53 GMT" } ]
1,607,990,400,000
[ [ "Anneborg", "Raymond", "" ] ]
2012.07195
Qi Zhang
Qi Zhang, Edmund H. Durfee, Satinder Singh
Efficient Querying for Cooperative Probabilistic Commitments
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Multiagent systems can use commitments as the core of a general coordination infrastructure, supporting both cooperative and non-cooperative interactions. Agents whose objectives are aligned, and where one agent can help another achieve greater reward by sacrificing some of its own reward, should choose a cooperative commitment to maximize their joint reward. We present a solution to the problem of how cooperative agents can efficiently find an (approximately) optimal commitment by querying about carefully-selected commitment choices. We prove structural properties of the agents' values as functions of the parameters of the commitment specification, and develop a greedy method for composing a query with provable approximation bounds, which we empirically show can find nearly optimal commitments in a fraction of the time required by methods that lack our insights.
[ { "version": "v1", "created": "Mon, 14 Dec 2020 00:47:09 GMT" } ]
1,607,990,400,000
[ [ "Zhang", "Qi", "" ], [ "Durfee", "Edmund H.", "" ], [ "Singh", "Satinder", "" ] ]
2012.07228
Lei Li
Lei Li, Minghe Xue, Huanhuan Chen, Xindong Wu
Trustworthy Preference Completion in Social Choice
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Since it is often impractical to ask agents to provide linear orders over all alternatives, preference completion is necessary for such partial rankings. Specifically, the personalized preference of each agent over all the alternatives can be estimated from partial rankings provided by neighboring agents over subsets of alternatives. However, since the agents' rankings are nondeterministic and may contain noise, it is necessary and important to conduct trustworthy preference completion. Hence, in this paper, firstly, a trust-based anchor-kNN algorithm is proposed to find the $k$ nearest trustworthy neighbors of an agent using trust-oriented Kendall-Tau distances, which handles cases where an agent exhibits irrational behavior or provides only noisy rankings. Then, for alternative pairs, a bijection can be built from the ranking space to the preference space, and its certainty and conflict can be evaluated based on a statistical measure, the Probability-Certainty Density Function. Therefore, a common voting rule over the first $k$ trustworthy neighboring agents, based on certainty and conflict, can be used to conduct trustworthy preference completion. The properties of the proposed certainty and conflict measures have been studied empirically, and the proposed approach has been experimentally validated against state-of-the-art approaches on several data sets.
[ { "version": "v1", "created": "Mon, 14 Dec 2020 03:03:13 GMT" } ]
1,607,990,400,000
[ [ "Li", "Lei", "" ], [ "Xue", "Minghe", "" ], [ "Chen", "Huanhuan", "" ], [ "Wu", "Xindong", "" ] ]
2012.07464
Alejandro Su\'arez Hern\'andez
Alejandro Su\'arez-Hern\'andez and Javier Segovia-Aguas and Carme Torras and Guillem Aleny\`a
Online Action Recognition
Accepted version in AAAI 21: https://ojs.aaai.org/index.php/AAAI/article/view/17423
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognition in planning seeks to find agent intentions, goals or activities given a set of observations and a knowledge library (e.g. goal states, plans or domain theories). In this work we introduce the problem of Online Action Recognition. It consists of recognizing, in an open world, the planning action that best explains a partially observable state transition from a knowledge library of first-order STRIPS actions, which is initially empty. We frame this as an optimization problem, and propose two algorithms to address it: Action Unification (AU) and Online Action Recognition through Unification (OARU). The former builds on logic unification and generalizes two input actions using weighted partial MaxSAT. The latter looks for an action within the library that explains an observed transition. If there is such an action, OARU generalizes it using AU, building an AU hierarchy in this way. Otherwise, OARU inserts a Trivial Grounded Action (TGA) into the library that explains just that transition. We report results on benchmarks from the International Planning Competition and PDDLGym, where OARU recognizes actions accurately with respect to expert knowledge and shows real-time performance.
[ { "version": "v1", "created": "Mon, 14 Dec 2020 12:37:20 GMT" }, { "version": "v2", "created": "Tue, 3 Aug 2021 14:38:17 GMT" } ]
1,628,035,200,000
[ [ "Suárez-Hernández", "Alejandro", "" ], [ "Segovia-Aguas", "Javier", "" ], [ "Torras", "Carme", "" ], [ "Alenyà", "Guillem", "" ] ]
2012.08033
Blai Bonet
Blai Bonet and Hector Geffner
General Policies, Serializations, and Planning Width
Longer version of AAAI-2021 paper that includes proofs and more explanations
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
It has been observed that in many of the benchmark planning domains, atomic goals can be reached with a simple polynomial exploration procedure, called IW, that runs in time exponential in the problem width. Such problems indeed have bounded width: a width that does not grow with the number of problem variables and is often no greater than two. Yet, while the notion of width has become part of state-of-the-art planning algorithms like BFWS, there is still no good explanation for why so many benchmark domains have bounded width. In this work, we address this question by relating bounded width and serialized width to ideas of generalized planning, where general policies aim to solve multiple instances of a planning problem all at once. We show that bounded width is a property of planning domains that admit optimal general policies in terms of features that are explicitly or implicitly represented in the domain encoding. The results are extended to a much larger class of domains with bounded serialized width, where the general policies do not have to be optimal. The study also leads to a new simple, meaningful, and expressive language for specifying domain serializations in the form of policy sketches, which can be used for encoding domain control knowledge by hand or for learning it from traces. The use of sketches and the meaning of the theoretical results are all illustrated through a number of examples.
[ { "version": "v1", "created": "Tue, 15 Dec 2020 01:33:59 GMT" }, { "version": "v2", "created": "Wed, 23 Dec 2020 16:14:01 GMT" } ]
1,608,768,000,000
[ [ "Bonet", "Blai", "" ], [ "Geffner", "Hector", "" ] ]
2012.08479
Hiroyuki Kido
Hiroyuki Kido, Keishi Okamoto
Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy
This paper was submitted to AAAI 2021 and rejected
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent success of Bayesian methods in neuroscience and artificial intelligence gives rise to the hypothesis that the brain is a Bayesian machine. Since logic and learning are both practices of the human brain, this leads to another hypothesis: that there is a Bayesian interpretation underlying both logical reasoning and machine learning. In this paper, we introduce a generative model of logical consequence relations. It formalises the process by which the truth value of a sentence is probabilistically generated from the probability distribution over states of the world. We show that the generative model characterises a classical consequence relation, a paraconsistent consequence relation and a nonmonotonic consequence relation. In particular, the generative model gives a new consequence relation that outperforms them in reasoning with inconsistent knowledge. We also show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
[ { "version": "v1", "created": "Tue, 15 Dec 2020 18:22:27 GMT" }, { "version": "v2", "created": "Wed, 16 Dec 2020 02:18:21 GMT" }, { "version": "v3", "created": "Wed, 27 Jan 2021 18:13:00 GMT" } ]
1,611,792,000,000
[ [ "Kido", "Hiroyuki", "" ], [ "Okamoto", "Keishi", "" ] ]
2012.08564
Eleni Nisioti
Eleni Nisioti and Cl\'ement Moulin-Frier
Grounding Artificial Intelligence in the Origins of Human Behavior
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in Artificial Intelligence (AI) have revived the quest for agents able to acquire an open-ended repertoire of skills. However, although this ability is fundamentally related to the characteristics of human intelligence, research in this field rarely considers the processes that may have guided the emergence of complex cognitive capacities during the evolution of the species. Research in Human Behavioral Ecology (HBE) seeks to understand how the behaviors characterizing human nature can be conceived as adaptive responses to major changes in the structure of our ecological niche. In this paper, we propose a framework highlighting the role of environmental complexity in open-ended skill acquisition, grounded in major hypotheses from HBE and recent contributions in Reinforcement learning (RL). We use this framework to highlight fundamental links between the two disciplines, as well as to identify feedback loops that bootstrap ecological complexity and create promising research directions for AI researchers.
[ { "version": "v1", "created": "Tue, 15 Dec 2020 19:28:45 GMT" }, { "version": "v2", "created": "Thu, 17 Dec 2020 14:07:50 GMT" } ]
1,608,249,600,000
[ [ "Nisioti", "Eleni", "" ], [ "Moulin-Frier", "Clément", "" ] ]
2012.08622
Bilal Farooq
Ali Yazdizadeh and Bilal Farooq
Smart Mobility Ontology: Current Trends and Future Directions
Published as a book chapter in: Handbook of Smart Cities, Springer, 2021
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontology is the explicit and formal representation of the concepts in a domain and the relations among them. Transportation science is a wide domain dealing with mobility over various complex and interconnected transportation systems, such as land, aviation, and maritime transport, and can benefit considerably from ontology development. While several studies can be found in the recent literature, there exists a large potential to improve and develop a comprehensive smart mobility ontology. The current chapter aims to present different aspects of ontology development in general, such as ontology development methods, languages, tools, and software. Subsequently, it presents the currently available mobility-related ontologies developed across different domains, such as transportation, smart cities, goods mobility, and sensors. Current gaps in the available ontologies are identified, and future directions regarding ontology development are proposed that can incorporate the forthcoming autonomous and connected vehicles, mobility as a service (MaaS), and other disruptive transportation technologies and services.
[ { "version": "v1", "created": "Tue, 15 Dec 2020 21:28:43 GMT" } ]
1,608,163,200,000
[ [ "Yazdizadeh", "Ali", "" ], [ "Farooq", "Bilal", "" ] ]
2012.08888
Peipei Kang
Lei Yang, Zitong Zhang, Xiaotian Jia, Peipei Kang, Wensheng Zhang, Dongya Wang
Solving the Travelling Thief Problem based on Item Selection Weight and Reverse Order Allocation
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The Travelling Thief Problem (TTP) is a challenging combinatorial optimization problem that has attracted many scholars. The TTP interconnects two well-known NP-hard problems: the Travelling Salesman Problem (TSP) and the 0-1 Knapsack Problem (KP). An increasing number of algorithms have been proposed for solving this novel problem that combines two interdependent sub-problems. In this paper, TTP is investigated theoretically and empirically. We propose an algorithm that selects items based on a score value calculated by our proposed formulation and allocates items in reverse order of that score. Different approaches for solving the TTP are compared and analyzed; the experimental investigations suggest that our proposed approach is very efficient in meeting or beating current state-of-the-art heuristic solutions on a comprehensive set of benchmark TTP instances.
[ { "version": "v1", "created": "Wed, 16 Dec 2020 12:06:05 GMT" } ]
1,608,163,200,000
[ [ "Yang", "Lei", "" ], [ "Zhang", "Zitong", "" ], [ "Jia", "Xiaotian", "" ], [ "Kang", "Peipei", "" ], [ "Zhang", "Wensheng", "" ], [ "Wang", "Dongya", "" ] ]
2012.08911
Sijie Mai
Sijie Mai, Shuangjia Zheng, Yuedong Yang, Haifeng Hu
Communicative Message Passing for Inductive Relation Reasoning
Accepted by AAAI-2021
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Relation prediction for knowledge graphs aims at predicting missing relationships between entities. Despite the importance of inductive relation prediction, most previous works are limited to a transductive setting and cannot process previously unseen entities. The recently proposed subgraph-based relation reasoning models provide alternatives to predict links inductively from the subgraph structure surrounding a candidate triplet. However, we observe that these methods often neglect the directed nature of the extracted subgraph and weaken the role of relation information in the subgraph modeling. As a result, they fail to effectively handle the asymmetric/anti-symmetric triplets and produce insufficient embeddings for the target triplets. To this end, we introduce a \textbf{C}\textbf{o}mmunicative \textbf{M}essage \textbf{P}assing neural network for \textbf{I}nductive re\textbf{L}ation r\textbf{E}asoning, \textbf{CoMPILE}, that reasons over local directed subgraph structures and has a vigorous inductive bias to process entity-independent semantic relations. In contrast to existing models, CoMPILE strengthens the message interactions between edges and entities through a communicative kernel and enables a sufficient flow of relation information. Moreover, we demonstrate that CoMPILE can naturally handle asymmetric/anti-symmetric relations without the need for explosively increasing the number of model parameters by extracting the directed enclosing subgraphs. Extensive experiments show substantial performance gains in comparison to state-of-the-art methods on commonly used benchmark datasets with variant inductive settings.
[ { "version": "v1", "created": "Wed, 16 Dec 2020 12:42:06 GMT" }, { "version": "v2", "created": "Mon, 26 Jul 2021 12:18:21 GMT" } ]
1,627,344,000,000
[ [ "Mai", "Sijie", "" ], [ "Zheng", "Shuangjia", "" ], [ "Yang", "Yuedong", "" ], [ "Hu", "Haifeng", "" ] ]
2012.09049
Jorge Martinez Gil Ph.D.
Georg Buchgeher, David Gabauer, Jorge Martinez-Gil, Lisa Ehrlinger
Knowledge Graphs in Manufacturing and Production: A Systematic Literature Review
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output. This makes knowledge graphs attractive for companies to reach Industry 4.0 goals. However, existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied in the field of manufacturing and production is needed. Therefore, we have conducted a systematic literature review as an attempt to characterize the state-of-the-art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. To do that, we have focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. In addition, an evaluation of the primary studies has been carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we can offer a complete picture of the domain, which includes such interesting aspects as the fact that knowledge fusion is currently the main use case for knowledge graphs, that empirical research and industrial application are still missing to a large extent, that graph embeddings are not fully exploited, and that technical literature is fast-growing but seems to be still far from its peak.
[ { "version": "v1", "created": "Wed, 16 Dec 2020 16:15:28 GMT" } ]
1,608,163,200,000
[ [ "Buchgeher", "Georg", "" ], [ "Gabauer", "David", "" ], [ "Martinez-Gil", "Jorge", "" ], [ "Ehrlinger", "Lisa", "" ] ]
2012.09424
Zelong Yang
Zelong Yang, Yan Wang, Piji Li, Shaobin Lin, Shuming Shi, Shao-Lun Huang, Wei Bi
Predicting Events in MOBA Games: Prediction, Attribution, and Evaluation
null
null
10.1109/TG.2022.3159704
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Multiplayer online battle arena (MOBA) games have become increasingly popular in recent years. Consequently, many efforts have been devoted to providing pre-game or in-game predictions for them. However, these works are limited in the following two aspects: 1) the lack of sufficient in-game features; 2) the absence of interpretability in the prediction results. These two limitations greatly restrict the practical performance and industrial application of the current works. In this work, we collect and release a large-scale dataset containing rich in-game features for the popular MOBA game Honor of Kings. We then propose to predict four types of important events in an interpretable way by attributing the predictions to the input features using two gradient-based attribution methods: Integrated Gradients and SmoothGrad. To evaluate the explanatory power of different models and attribution methods, a fidelity-based evaluation metric is further proposed. Finally, we evaluate the accuracy and fidelity of several competitive methods on the collected dataset to assess how well machines predict events in MOBA games.
[ { "version": "v1", "created": "Thu, 17 Dec 2020 07:28:35 GMT" }, { "version": "v2", "created": "Wed, 23 Dec 2020 07:42:51 GMT" }, { "version": "v3", "created": "Thu, 24 Dec 2020 07:47:19 GMT" }, { "version": "v4", "created": "Tue, 22 Mar 2022 06:54:14 GMT" }, { "version": "v5", "created": "Mon, 28 Mar 2022 14:12:55 GMT" } ]
1,648,512,000,000
[ [ "Yang", "Zelong", "" ], [ "Wang", "Yan", "" ], [ "Li", "Piji", "" ], [ "Lin", "Shaobin", "" ], [ "Shi", "Shuming", "" ], [ "Huang", "Shao-Lun", "" ], [ "Bi", "Wei", "" ] ]
2012.10147
Manfred Eppe
Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D.H. Nguyen, Martin V. Butz and Stefan Wermter
Hierarchical principles of embodied reinforcement learning: A review
null
Nature Machine Intelligence, 4(1) (2022)
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive Psychology and related disciplines have identified several critical mechanisms that enable intelligent biological agents to learn to solve complex problems. There is compelling evidence that the cognitive mechanisms that enable problem-solving skills in these species build on hierarchical mental representations. Among the most promising computational approaches for providing comparable learning-based problem-solving abilities to artificial agents and robots is hierarchical reinforcement learning. However, so far the existing computational approaches have not been able to equip artificial agents with problem-solving abilities comparable to those of intelligent animals, including human and non-human primates, crows, or octopuses. Here, we first survey the literature in Cognitive Psychology and related disciplines, and find that many important mental mechanisms involve compositional abstraction, curiosity, and forward models. We then relate these insights to contemporary hierarchical reinforcement learning methods, and identify the key machine intelligence approaches that realise these mechanisms. As our main result, we show that all important cognitive mechanisms have been implemented independently in isolated computational architectures, and there is simply a lack of approaches that integrate them appropriately. We expect our results to guide the development of more sophisticated cognitively inspired hierarchical methods, so that future artificial agents achieve a problem-solving performance on the level of intelligent animals.
[ { "version": "v1", "created": "Fri, 18 Dec 2020 10:19:38 GMT" }, { "version": "v2", "created": "Thu, 18 Aug 2022 09:45:25 GMT" } ]
1,660,867,200,000
[ [ "Eppe", "Manfred", "" ], [ "Gumbsch", "Christian", "" ], [ "Kerzel", "Matthias", "" ], [ "Nguyen", "Phuong D. H.", "" ], [ "Butz", "Martin V.", "" ], [ "Wermter", "Stefan", "" ] ]
2012.10171
Menghui Zhu
Sheng Chen, Menghui Zhu, Deheng Ye, Weinan Zhang, Qiang Fu, Wei Yang
Which Heroes to Pick? Learning to Draft in MOBA Games with Neural Networks and Tree Search
IEEE Transactions on Games
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hero drafting is essential in MOBA game playing as it builds the team of each side and directly affects the match outcome. State-of-the-art drafting methods fail to consider: 1) drafting efficiency when the hero pool is expanded; 2) the multi-round nature of a MOBA 5v5 match series, i.e., two teams play best-of-N and the same hero is only allowed to be drafted once throughout the series. In this paper, we formulate the drafting process as a multi-round combinatorial game and propose a novel drafting algorithm based on neural networks and Monte-Carlo tree search, named JueWuDraft. Specifically, we design a long-term value estimation mechanism to handle the best-of-N drafting case. Taking Honor of Kings, one of the most popular MOBA games at present, as a running case, we demonstrate the practicality and effectiveness of JueWuDraft when compared to state-of-the-art drafting methods.
[ { "version": "v1", "created": "Fri, 18 Dec 2020 11:19:00 GMT" }, { "version": "v2", "created": "Thu, 1 Jul 2021 03:34:40 GMT" }, { "version": "v3", "created": "Fri, 2 Jul 2021 03:48:24 GMT" }, { "version": "v4", "created": "Thu, 5 Aug 2021 09:01:42 GMT" } ]
1,628,208,000,000
[ [ "Chen", "Sheng", "" ], [ "Zhu", "Menghui", "" ], [ "Ye", "Deheng", "" ], [ "Zhang", "Weinan", "" ], [ "Fu", "Qiang", "" ], [ "Yang", "Wei", "" ] ]
2012.10232
Bata Vasic Dr
Iva Vasic, Bata Vasic, and Zorica Nikolic
Artificial Intelligence ordered 3D vertex importance
8 pages, 4 figures
FBIM Transactions, Vol. 8 No. 2, pp. 193-201, 2020
10.12709/fbim.08.08.02.21
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Ranking the vertices of multidimensional networks is crucial in many areas of research, including selecting decisions and determining their importance. Some decisions are significantly more important than others, and their weight categorization is also important. This paper defines a completely new method that uses artificial intelligence to determine decision weights for the importance ranking of three-dimensional network vertices, improving the existing Ordered Statistics Vertex Extraction and Tracking Algorithm (OSVETA), which is based on quantization index modulation (QIM) and error correction codes. The technique we propose in this paper offers significant improvements in the efficiency of determining the importance of network vertices relative to the statistical OSVETA criteria, replacing heuristic methods with the precise predictions of modern neural networks. The new artificial intelligence technique enables a significantly better definition of 3D meshes and a better assessment of their topological features. The contributions of the new method result in greater precision in defining stable vertices, significantly reducing the probability of deleting mesh vertices.
[ { "version": "v1", "created": "Thu, 17 Dec 2020 06:54:59 GMT" } ]
1,608,508,800,000
[ [ "Vasic", "Iva", "" ], [ "Vasic", "Bata", "" ], [ "Nikolic", "Zorica", "" ] ]
2012.10473
Tim Ritmeester
Tim Ritmeester and Hildegard Meyer-Ortmanns
State Estimation of Power Flows for Smart Grids via Belief Propagation
15 pages, 16 figures
Phys. Rev. E 102, 012311 (2020)
10.1103/PhysRevE.102.012311
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Belief propagation is an algorithm known from statistical physics and computer science. It provides an efficient way of calculating marginals that involve large sums of products, which are rearranged into nested products of sums to approximate the marginals. It allows a reliable estimation of the state of power grids and its variance, which is needed for the control and forecasting involved in power grid management. On prototypical examples of IEEE grids we show that belief propagation not only scales linearly with the grid size for the state estimation itself, but also facilitates and accelerates the retrieval of missing data and allows an optimized positioning of measurement units. Based on belief propagation, we give a criterion for assessing whether other algorithms, using only local information, are adequate for state estimation on a given grid. We also demonstrate how belief propagation can be utilized for coarse-graining power grids towards representations that reduce the computational effort when the coarse-grained version is integrated into a larger grid. It provides a criterion for partitioning power grids into areas in order to minimize the error of flow estimates between different areas.
[ { "version": "v1", "created": "Fri, 18 Dec 2020 19:22:03 GMT" } ]
1,608,595,200,000
[ [ "Ritmeester", "Tim", "" ], [ "Meyer-Ortmanns", "Hildegard", "" ] ]
2012.10489
Joyjit Chatterjee
Joyjit Chatterjee, Nina Dethlefs
XAI4Wind: A Multimodal Knowledge Graph Database for Explainable Decision Support in Operations & Maintenance of Wind Turbines
Updated version of knowledge graph resource paper - updates include additions to the Appendix on more properties in the knowledge graph, corrected typos/grammatical errors etc
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Condition-based monitoring (CBM) has been widely utilised in the wind industry for monitoring operational inconsistencies and failures in turbines, with techniques ranging from signal processing and vibration analysis to artificial intelligence (AI) models using Supervisory Control and Data Acquisition (SCADA) data. However, existing studies do not present a concrete basis to facilitate explainable decision support in operations and maintenance (O&M), particularly for automated decision support through recommendation of appropriate maintenance action reports corresponding to failures predicted by CBM techniques. Knowledge graph databases (KGs) model a collection of domain-specific information and have played an intrinsic role in real-world decision support in domains such as healthcare and finance, but have seen very limited attention in the wind industry. We propose XAI4Wind, a multimodal knowledge graph for explainable decision support in real-world operational turbines, and demonstrate through experiments several use-cases of the proposed KG towards O&M planning through interactive query and reasoning and providing novel insights using graph data science algorithms. The proposed KG combines multimodal knowledge like SCADA parameters and alarms with natural language maintenance actions, images etc. By integrating our KG with an Explainable AI model for anomaly prediction, we show that it can provide effective human-intelligible O&M strategies for predicted operational inconsistencies in various turbine sub-components. This can help instil better trust and confidence in conventionally black-box AI models. We make our KG publicly available and envisage that it can serve as the building ground for providing autonomous decision support in the wind industry.
[ { "version": "v1", "created": "Fri, 18 Dec 2020 19:54:19 GMT" }, { "version": "v2", "created": "Wed, 24 Feb 2021 04:38:47 GMT" } ]
1,614,211,200,000
[ [ "Chatterjee", "Joyjit", "" ], [ "Dethlefs", "Nina", "" ] ]
2012.10592
Lixing Tan
Lixing Tan, Zhaohui Zhu, Jinjin Zhang
More on extension-based semantics of argumentation
86 pages, 10 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After a few decades of development, computational argumentation has become one of the active realms in AI. This paper considers extension-based concrete and abstract semantics of argumentation. For concrete ones, based on Grossi and Modgil's recent work, this paper considers some issues on graded extension-based semantics of abstract argumentation frameworks (AAF, for short). First, an alternative fundamental lemma is given, which generalizes the corresponding result due to Grossi and Modgil by relaxing the constraint on parameters. This lemma provides a new sufficient condition for preserving conflict-freeness and brings a Galois adjunction between admissible sets and complete extensions, which is of vital importance in constructing some special extensions in terms of iterations of the defense function. Applying such a lemma, some flaws in Grossi and Modgil's work are corrected, and the structural property and universal definability of various extension-based semantics are given. Second, an operator, the so-called reduced meet modulo an ultrafilter, is presented, which is a simple but powerful tool for exploring infinite AAFs. The neutrality function and the defense function, which play central roles in Dung's abstract argumentation theory, are shown to be distributive over reduced meets modulo any ultrafilter. A variety of fundamental semantics of AAFs, including conflict-free, admissible, complete and stable semantics, etc., are shown to be closed under this operator. Based on this fact, a number of applications of such operators are considered. In particular, we provide a simple and uniform method to prove the universal definability of a family of range-related semantics. Since all graded concrete semantics considered in this paper are generalizations of corresponding non-graded ones, all results about them obtained in this paper also hold in the traditional situation.
[ { "version": "v1", "created": "Sat, 19 Dec 2020 04:32:19 GMT" }, { "version": "v2", "created": "Sun, 27 Dec 2020 01:41:18 GMT" }, { "version": "v3", "created": "Thu, 20 May 2021 04:58:41 GMT" } ]
1,621,555,200,000
[ [ "Tan", "Lixing", "" ], [ "Zhu", "Zhaohui", "" ], [ "Zhang", "Jinjin", "" ] ]
2012.10700
Quentin Cohen-Solal
Quentin Cohen-Solal and Tristan Cazenave
Minimax Strikes Back
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Reinforcement Learning (DRL) reaches a superhuman level of play in many complete-information games. The state-of-the-art search algorithm used in combination with DRL is Monte Carlo Tree Search (MCTS). We take another approach to DRL, using a Minimax algorithm instead of MCTS and learning only the evaluation of states, not the policy. We show that for multiple games it is competitive with state-of-the-art DRL in terms of both learning performance and head-to-head play.
[ { "version": "v1", "created": "Sat, 19 Dec 2020 14:42:41 GMT" } ]
1,608,595,200,000
[ [ "Cohen-Solal", "Quentin", "" ], [ "Cazenave", "Tristan", "" ] ]
2012.10928
Milad Moradi
Milad Moradi, Matthias Samwald
Explaining Black-box Models for Biomedical Text Classification
null
null
10.1109/JBHI.2021.3056748
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel method named Biomedical Confident Itemsets Explanation (BioCIE), aiming at post-hoc explanation of black-box machine learning models for biomedical text classification. Using sources of domain knowledge and a confident itemset mining method, BioCIE discretizes the decision space of a black-box into smaller subspaces and extracts semantic relationships between the input text and class labels in different subspaces. Confident itemsets discover how biomedical concepts are related to class labels in the black-box's decision space. BioCIE uses the itemsets to approximate the black-box's behavior for individual predictions. Optimizing fidelity, interpretability, and coverage measures, BioCIE produces class-wise explanations that represent decision boundaries of the black-box. Results of evaluations on various biomedical text classification tasks and black-box models demonstrated that BioCIE can outperform perturbation-based and decision set methods in terms of producing concise, accurate, and interpretable explanations. BioCIE improved the fidelity of instance-wise and class-wise explanations by 11.6% and 7.5%, respectively. It also improved the interpretability of explanations by 8%. BioCIE can be effectively used to explain how a black-box biomedical text classification model semantically relates input texts to class labels. The source code and supplementary material are available at https://github.com/mmoradi-iut/BioCIE.
[ { "version": "v1", "created": "Sun, 20 Dec 2020 13:58:52 GMT" } ]
1,612,742,400,000
[ [ "Moradi", "Milad", "" ], [ "Samwald", "Matthias", "" ] ]
2012.11078
Patrick Rodler
Patrick Rodler
DynamicHS: Streamlining Reiter's Hitting-Set Tree for Sequential Diagnosis
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Given a system that does not work as expected, Sequential Diagnosis (SD) aims at suggesting a series of system measurements to isolate the true explanation for the system's misbehavior from a potentially exponential set of possible explanations. To reason about the best next measurement, SD methods usually require a sample of possible fault explanations at each step of the iterative diagnostic process. The computation of this sample can be accomplished by various diagnostic search algorithms. Among those, Reiter's HS-Tree is one of the most popular due to its desirable properties and general applicability. Usually, HS-Tree is used in a stateless fashion throughout the SD process to (re)compute a sample of possible fault explanations in each iteration, each time given the latest (updated) system knowledge including all so-far collected measurements. In this process, the built search tree is discarded between two iterations, although often large parts of the tree have to be rebuilt in the next iteration, involving redundant operations and calls to costly reasoning services. As a remedy to this, we propose DynamicHS, a variant of HS-Tree that maintains state throughout the diagnostic session and additionally embraces special strategies to minimize the number of expensive reasoner invocations. In this vein, DynamicHS provides an answer to a longstanding question posed by Raymond Reiter in his seminal paper from 1987. Extensive evaluations on real-world diagnosis problems prove the reasonability of DynamicHS and testify to its clear superiority to HS-Tree wrt. computation time. More specifically, DynamicHS outperformed HS-Tree in 96% of the executed sequential diagnosis sessions and, per run, the latter required up to 800% of the time of the former. Remarkably, DynamicHS achieves these performance improvements while preserving all desirable properties as well as the general applicability of HS-Tree.
[ { "version": "v1", "created": "Mon, 21 Dec 2020 01:59:19 GMT" } ]
1,608,595,200,000
[ [ "Rodler", "Patrick", "" ] ]
2012.11154
Isaac Godfried
Isaac Godfried, Kriti Mahajan, Maggie Wang, Kevin Li, Pranjalya Tiwari
FlowDB a large scale precipitation, river, and flash flood dataset
NeurIPS 2020 Workshop Tackling Climate Change with Machine Learning
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Flooding results in 8 billion dollars of damage annually in the US and causes the most deaths of any weather-related event. Due to climate change, scientists expect more heavy precipitation events in the future. However, no current datasets exist that contain both hourly precipitation and river flow data. We introduce a novel hourly river flow and precipitation dataset and a second subset of flash flood events with damage estimates and injury counts. Using these datasets we create two challenges: (1) general stream flow forecasting and (2) flash flood damage estimation. We have created several publicly available benchmarks and an easy-to-use package. Additionally, in the future we aim to augment our dataset with snow pack data and soil moisture index data to improve predictions.
[ { "version": "v1", "created": "Mon, 21 Dec 2020 07:08:41 GMT" } ]
1,608,595,200,000
[ [ "Godfried", "Isaac", "" ], [ "Mahajan", "Kriti", "" ], [ "Wang", "Maggie", "" ], [ "Li", "Kevin", "" ], [ "Tiwari", "Pranjalya", "" ] ]
2012.11243
Yaman K Singla
Yaman Kumar, Swati Aggarwal, Debanjan Mahata, Rajiv Ratn Shah, Ponnurangam Kumaraguru, Roger Zimmermann
Get It Scored Using AutoSAS -- An Automated System for Scoring Short Answers
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the era of MOOCs, online exams are taken by millions of candidates, and scoring short answers is an integral part of them. It becomes intractable for human graders to evaluate them all. Thus, a generic automated system capable of grading these responses should be designed and deployed. In this paper, we present a fast, scalable, and accurate approach towards automated Short Answer Scoring (SAS). We propose and explain the design and development of a system for SAS, namely AutoSAS. Given a question along with its graded samples, AutoSAS can learn to grade that prompt successfully. This paper further lays out the features, such as lexical diversity, Word2Vec, and prompt and content overlap, that play a pivotal role in building our proposed model. We also present a methodology for indicating the factors responsible for scoring an answer. The trained model is evaluated on an extensively used public dataset, namely Automated Student Assessment Prize Short Answer Scoring (ASAP-SAS). AutoSAS shows state-of-the-art performance and achieves better results by over 8% in some of the question prompts, as measured by Quadratic Weighted Kappa (QWK), showing performance comparable to humans.
[ { "version": "v1", "created": "Mon, 21 Dec 2020 10:47:30 GMT" } ]
1,608,595,200,000
[ [ "Kumar", "Yaman", "" ], [ "Aggarwal", "Swati", "" ], [ "Mahata", "Debanjan", "" ], [ "Shah", "Rajiv Ratn", "" ], [ "Kumaraguru", "Ponnurangam", "" ], [ "Zimmermann", "Roger", "" ] ]
2012.11634
Henrique Santos
Henrique Santos, Minor Gordon, Zhicheng Liang, Gretchen Forbush, Deborah L. McGuinness
Exploring and Analyzing Machine Commonsense Benchmarks
Commonsense Knowledge Graphs Workshop 2021 (CSKGs) @AAAI-21
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Commonsense question-answering (QA) tasks, in the form of benchmarks, are constantly being introduced for challenging and comparing commonsense QA systems. The benchmarks provide question sets that systems' developers can use to train and test new models before submitting their implementations to official leaderboards. Although these tasks are created to evaluate systems along identified dimensions (e.g. topic, reasoning type), this metadata is limited and largely presented in an unstructured format or is absent altogether. Because machine common sense is a fast-paced field, the problem of fully assessing current benchmarks and systems with regard to these evaluation dimensions is aggravated. We argue that the lack of a common vocabulary for aligning these approaches' metadata limits researchers in their efforts to understand systems' deficiencies and in making effective choices for future tasks. In this paper, we first discuss this MCS ecosystem in terms of its elements and their metadata. Then, we present how we are supporting the assessment of approaches by initially focusing on commonsense benchmarks. We describe our initial MCS Benchmark Ontology, an extensible common vocabulary that formalizes benchmark metadata, and showcase how it is supporting the development of a Benchmark tool that enables benchmark exploration and analysis.
[ { "version": "v1", "created": "Mon, 21 Dec 2020 19:01:55 GMT" } ]
1,608,681,600,000
[ [ "Santos", "Henrique", "" ], [ "Gordon", "Minor", "" ], [ "Liang", "Zhicheng", "" ], [ "Forbush", "Gretchen", "" ], [ "McGuinness", "Deborah L.", "" ] ]
2012.11689
Kai Wei
Jixuan Wang, Kai Wei, Martin Radfar, Weiwei Zhang, Clement Chung
Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling
This is a pre-print version of paper accepted by AAAI2021
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We propose a novel Transformer encoder-based architecture with syntactical knowledge encoded for intent detection and slot filling. Specifically, we encode syntactic knowledge into the Transformer encoder by jointly training it to predict syntactic parse ancestors and part-of-speech of each token via multi-task learning. Our model is based on self-attention and feed-forward layers and does not require external syntactic information to be available at inference time. Experiments show that on two benchmark datasets, our models with only two Transformer encoder layers achieve state-of-the-art results. Compared to the previously best-performing model without pre-training, our models achieve absolute F1 score and accuracy improvements of 1.59% and 0.85% for slot filling and intent detection on the SNIPS dataset, respectively. Our models also achieve absolute F1 score and accuracy improvements of 0.1% and 0.34% for slot filling and intent detection on the ATIS dataset, respectively, over the previously best-performing model. Furthermore, the visualization of the self-attention weights illustrates the benefits of incorporating syntactic information during training.
[ { "version": "v1", "created": "Mon, 21 Dec 2020 21:25:11 GMT" } ]
1,608,681,600,000
[ [ "Wang", "Jixuan", "" ], [ "Wei", "Kai", "" ], [ "Radfar", "Martin", "" ], [ "Zhang", "Weiwei", "" ], [ "Chung", "Clement", "" ] ]
2012.11792
Mehrdad Zakershahrak
Mehrdad Zakershahrak and Samira Ghodratnama
Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Providing explanations is considered an imperative ability for an AI agent in a human-robot teaming framework. The right explanation provides the rationale behind an AI agent's decision-making. However, to keep the cognitive demand of comprehending the provided explanations manageable for the human teammate, prior works have focused on providing explanations in a specific order or on intertwining explanation generation with plan execution. Moreover, these approaches do not consider the degree of detail that should be shared throughout the provided explanations. In this work, we argue that agent-generated explanations, especially complex ones, should be abstracted to align with the level of detail the human teammate desires, in order to manage the recipient's cognitive load. Learning such a hierarchical explanation model is therefore a challenging task. Moreover, the agent needs to follow a consistent high-level policy to transfer the learned teammate preferences to a new scenario in which the lower-level detailed plans are different. Our evaluation confirmed that the process of understanding an explanation, especially a complex and detailed one, is hierarchical. The human preferences that reflected this aspect corresponded exactly to creating and employing abstraction for knowledge assimilation hidden deeper in our cognitive process. We showed that hierarchical explanations achieved better task performance and behavior interpretability while reducing cognitive load. These results shed light on designing explainable agents that utilize reinforcement learning and planning across various domains.
[ { "version": "v1", "created": "Tue, 22 Dec 2020 02:14:52 GMT" }, { "version": "v2", "created": "Fri, 26 Feb 2021 03:42:47 GMT" } ]
1,614,556,800,000
[ [ "Zakershahrak", "Mehrdad", "" ], [ "Ghodratnama", "Samira", "" ] ]
2012.11835
Xuefei Ning
Xuefei Ning, Junbo Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
Discovering Robust Convolutional Architecture at Targeted Capacity: A Multi-Shot Approach
9 pages, 9 pages of appendices
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) are vulnerable to adversarial examples, and studies show that increasing the model capacity of an architecture topology (e.g., width expansion) can bring consistent robustness improvements. This reveals a clear robustness-efficiency trade-off that should be considered in architecture design. In this paper, considering scenarios with a capacity budget, we aim to discover adversarially robust architectures at targeted capacities. Recent studies employed one-shot neural architecture search (NAS) to discover robust architectures. However, since the capacities of different topologies cannot be aligned in the search process, one-shot NAS methods favor topologies with larger capacities in the supernet, and the discovered topology might be suboptimal when augmented to the targeted capacity. We propose a novel multi-shot NAS method to address this issue and explicitly search for robust architectures at targeted capacities. At the targeted FLOPs of 2000M, the discovered MSRobNet-2000 outperforms the recent NAS-discovered architecture RobNet-large under various criteria by a large margin of 4%-7%. And at the targeted FLOPs of 1560M, MSRobNet-1560 surpasses another NAS-discovered architecture RobNet-free by 2.3% and 1.3% in the clean and PGD-7 accuracies, respectively. All codes are available at https://github.com/walkerning/aw\_nas.
[ { "version": "v1", "created": "Tue, 22 Dec 2020 05:21:25 GMT" }, { "version": "v2", "created": "Fri, 1 Jan 2021 09:44:52 GMT" }, { "version": "v3", "created": "Sat, 27 Mar 2021 03:36:02 GMT" } ]
1,617,062,400,000
[ [ "Ning", "Xuefei", "" ], [ "Zhao", "Junbo", "" ], [ "Li", "Wenshuo", "" ], [ "Zhao", "Tianchen", "" ], [ "Zheng", "Yin", "" ], [ "Yang", "Huazhong", "" ], [ "Wang", "Yu", "" ] ]
2012.11936
Valentina Anita Carriero
Nacira Abbas, Kholoud Alghamdi, Mortaza Alinam, Francesca Alloatti, Glenda Amaral, Claudia d'Amato, Luigi Asprino, Martin Beno, Felix Bensmann, Russa Biswas, Ling Cai, Riley Capshaw, Valentina Anita Carriero, Irene Celino, Amine Dadoun, Stefano De Giorgis, Harm Delva, John Domingue, Michel Dumontier, Vincent Emonet, Marieke van Erp, Paola Espinoza Arias, Omaima Fallatah, Sebasti\'an Ferrada, Marc Gallofr\'e Oca\~na, Michalis Georgiou, Genet Asefa Gesese, Frances Gillis-Webber, Francesca Giovannetti, Mar\`ia Granados Buey, Ismail Harrando, Ivan Heibi, Vitor Horta, Laurine Huber, Federico Igne, Mohamad Yaser Jaradeh, Neha Keshan, Aneta Koleva, Bilal Koteich, Kabul Kurniawan, Mengya Liu, Chuangtao Ma, Lientje Maas, Martin Mansfield, Fabio Mariani, Eleonora Marzi, Sepideh Mesbah, Maheshkumar Mistry, Alba Catalina Morales Tirado, Anna Nguyen, Viet Bach Nguyen, Allard Oelen, Valentina Pasqual, Heiko Paulheim, Axel Polleres, Margherita Porena, Jan Portisch, Valentina Presutti, Kader Pustu-Iren, Ariam Rivas Mendez, Soheil Roshankish, Sebastian Rudolph, Harald Sack, Ahmad Sakor, Jaime Salas, Thomas Schleider, Meilin Shi, Gianmarco Spinaci, Chang Sun, Tabea Tietz, Molka Tounsi Dhouib, Alessandro Umbrico, Wouter van den Berg, Weiqin Xu
Knowledge Graphs Evolution and Preservation -- A Technical Report from ISWS 2019
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the grand challenges discussed during the Dagstuhl Seminar "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web" and described in its report is that of a "Public FAIR Knowledge Graph of Everything: We increasingly see the creation of knowledge graphs that capture information about the entirety of a class of entities. [...] This grand challenge extends this further by asking if we can create a knowledge graph of "everything" ranging from common sense concepts to location based entities. This knowledge graph should be "open to the public" in a FAIR manner democratizing this mass amount of knowledge." Although linked open data (LOD) is one knowledge graph, it is the closest realisation (and probably the only one) of a public FAIR Knowledge Graph (KG) of everything. Surely, LOD provides a unique testbed for experimenting and evaluating research hypotheses on open and FAIR KGs. One of the most neglected FAIR issues about KGs is their ongoing evolution and long-term preservation. We want to investigate this problem, that is, to understand what preserving and supporting the evolution of KGs means and how these problems can be addressed. Clearly, the problem can be approached from different perspectives and may require the development of different approaches, including new theories, ontologies, metrics, strategies, procedures, etc. This document reports a collaborative effort performed by 9 teams of students, each guided by a senior researcher as their mentor, attending the International Semantic Web Research School (ISWS 2019). Each team provides a different perspective on the problem of knowledge graph evolution, substantiated by a set of research questions as the main subject of their investigation. In addition, they provide their working definition for KG preservation and evolution.
[ { "version": "v1", "created": "Tue, 22 Dec 2020 11:21:09 GMT" } ]
1,608,681,600,000
[ [ "Abbas", "Nacira", "" ], [ "Alghamdi", "Kholoud", "" ], [ "Alinam", "Mortaza", "" ], [ "Alloatti", "Francesca", "" ], [ "Amaral", "Glenda", "" ], [ "d'Amato", "Claudia", "" ], [ "Asprino", "Luigi", "" ], [ "Beno", "Martin", "" ], [ "Bensmann", "Felix", "" ], [ "Biswas", "Russa", "" ], [ "Cai", "Ling", "" ], [ "Capshaw", "Riley", "" ], [ "Carriero", "Valentina Anita", "" ], [ "Celino", "Irene", "" ], [ "Dadoun", "Amine", "" ], [ "De Giorgis", "Stefano", "" ], [ "Delva", "Harm", "" ], [ "Domingue", "John", "" ], [ "Dumontier", "Michel", "" ], [ "Emonet", "Vincent", "" ], [ "van Erp", "Marieke", "" ], [ "Arias", "Paola Espinoza", "" ], [ "Fallatah", "Omaima", "" ], [ "Ferrada", "Sebastián", "" ], [ "Ocaña", "Marc Gallofré", "" ], [ "Georgiou", "Michalis", "" ], [ "Gesese", "Genet Asefa", "" ], [ "Gillis-Webber", "Frances", "" ], [ "Giovannetti", "Francesca", "" ], [ "Buey", "Marìa Granados", "" ], [ "Harrando", "Ismail", "" ], [ "Heibi", "Ivan", "" ], [ "Horta", "Vitor", "" ], [ "Huber", "Laurine", "" ], [ "Igne", "Federico", "" ], [ "Jaradeh", "Mohamad Yaser", "" ], [ "Keshan", "Neha", "" ], [ "Koleva", "Aneta", "" ], [ "Koteich", "Bilal", "" ], [ "Kurniawan", "Kabul", "" ], [ "Liu", "Mengya", "" ], [ "Ma", "Chuangtao", "" ], [ "Maas", "Lientje", "" ], [ "Mansfield", "Martin", "" ], [ "Mariani", "Fabio", "" ], [ "Marzi", "Eleonora", "" ], [ "Mesbah", "Sepideh", "" ], [ "Mistry", "Maheshkumar", "" ], [ "Tirado", "Alba Catalina Morales", "" ], [ "Nguyen", "Anna", "" ], [ "Nguyen", "Viet Bach", "" ], [ "Oelen", "Allard", "" ], [ "Pasqual", "Valentina", "" ], [ "Paulheim", "Heiko", "" ], [ "Polleres", "Axel", "" ], [ "Porena", "Margherita", "" ], [ "Portisch", "Jan", "" ], [ "Presutti", "Valentina", "" ], [ "Pustu-Iren", "Kader", "" ], [ "Mendez", "Ariam Rivas", "" ], [ "Roshankish", "Soheil", "" ], [ "Rudolph", "Sebastian", "" ], [ "Sack", "Harald", "" ], [ "Sakor", "Ahmad", "" ], [ "Salas", "Jaime", "" ], [ "Schleider", "Thomas", "" ], [ "Shi", "Meilin", "" ], [ "Spinaci", "Gianmarco", "" ], [ "Sun", "Chang", "" ], [ "Tietz", "Tabea", "" ], [ "Dhouib", "Molka Tounsi", "" ], [ "Umbrico", "Alessandro", "" ], [ "Berg", "Wouter van den", "" ], [ "Xu", "Weiqin", "" ] ]
2012.11957
Yao Zhang
Yao Zhang, Xu Zhang, Jun Wang, Hongru Liang, Wenqiang Lei, Zhe Sun, Adam Jatowt, Zhenglu Yang
Generalized Relation Learning with Semantic Correlation Awareness for Link Prediction
Preprint of accepted AAAI2021 paper
null
null
null
cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Developing link prediction models to automatically complete knowledge graphs has recently been the focus of significant research interest. The current methods for the link prediction task have two natural problems: 1) the relation distributions in KGs are usually unbalanced, and 2) there are many unseen relations that occur in practical situations. These two problems limit the training effectiveness and practical applications of the existing link prediction models. We advocate a holistic understanding of KGs and we propose in this work a unified Generalized Relation Learning framework GRL to address the above two problems, which can be plugged into existing link prediction models. GRL conducts a generalized relation learning, which is aware of semantic correlations between relations that serve as a bridge to connect semantically similar relations. After training with GRL, the closeness of semantically similar relations in vector space and the discrimination of dissimilar relations are improved. We perform comprehensive experiments on six benchmarks to demonstrate the superior capability of GRL in the link prediction task. In particular, GRL is found to enhance the existing link prediction models making them insensitive to unbalanced relation distributions and capable of learning unseen relations.
[ { "version": "v1", "created": "Tue, 22 Dec 2020 12:22:03 GMT" }, { "version": "v2", "created": "Sun, 18 Apr 2021 08:57:36 GMT" } ]
1,618,876,800,000
[ [ "Zhang", "Yao", "" ], [ "Zhang", "Xu", "" ], [ "Wang", "Jun", "" ], [ "Liang", "Hongru", "" ], [ "Lei", "Wenqiang", "" ], [ "Sun", "Zhe", "" ], [ "Jatowt", "Adam", "" ], [ "Yang", "Zhenglu", "" ] ]
2012.12186
Rinu Boney
Rinu Boney, Alexander Ilin, Juho Kannala, Jarno Sepp\"anen
Learning to Play Imperfect-Information Games by Imitating an Oracle Planner
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces. Previous attempts to tackle such challenging games have largely focused on model-free learning methods, often requiring hundreds of years of experience to produce competitive agents. Our approach is based on model-based planning. We tackle the problem of partial observability by first building an (oracle) planner that has access to the full state of the environment and then distilling the knowledge of the oracle to a (follower) agent which is trained to play the imperfect-information game by imitating the oracle's choices. We experimentally show that planning with naive Monte Carlo tree search does not perform very well in large combinatorial action spaces. We therefore propose planning with a fixed-depth tree search and decoupled Thompson sampling for action selection. We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman and the follower policy successfully learns to implement them by training on a few hundred battles.
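As an aside for readers unfamiliar with the action-selection rule named in the abstract above, the following is a minimal, hypothetical Python sketch of decoupled Thompson sampling: one independent Beta posterior is kept per candidate action in each action dimension, and the joint action is assembled by sampling each posterior separately. The Bernoulli-reward assumption, class name, and dimension sizes are illustrative only and are not taken from the paper.

```python
import random

class DecoupledThompsonSampler:
    """Thompson sampling with one independent Beta(alpha, beta) posterior
    per action in each decoupled action dimension (e.g. per unit slot)."""

    def __init__(self, num_dimensions, actions_per_dimension):
        # alpha/beta counts start at 1 (uniform prior) for every action.
        self.alpha = [[1.0] * actions_per_dimension for _ in range(num_dimensions)]
        self.beta = [[1.0] * actions_per_dimension for _ in range(num_dimensions)]

    def select(self):
        """Sample from each posterior and pick, per dimension, the action
        with the highest sampled value."""
        joint_action = []
        for d in range(len(self.alpha)):
            samples = [random.betavariate(self.alpha[d][a], self.beta[d][a])
                       for a in range(len(self.alpha[d]))]
            joint_action.append(max(range(len(samples)), key=samples.__getitem__))
        return joint_action

    def update(self, joint_action, reward):
        """Binary reward in {0, 1}: update each chosen action's posterior."""
        for d, a in enumerate(joint_action):
            self.alpha[d][a] += reward
            self.beta[d][a] += 1 - reward


# Usage sketch: two decoupled dimensions with three candidate actions each.
sampler = DecoupledThompsonSampler(num_dimensions=2, actions_per_dimension=3)
action = sampler.select()          # e.g. [1, 2]
sampler.update(action, reward=1)   # feed back the simulated outcome
```

Sampling per dimension rather than over the joint action space is what keeps this kind of selection tractable in large combinatorial action spaces, which is the setting the abstract describes.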
[ { "version": "v1", "created": "Tue, 22 Dec 2020 17:29:57 GMT" } ]
1,608,681,600,000
[ [ "Boney", "Rinu", "" ], [ "Ilin", "Alexander", "" ], [ "Kannala", "Juho", "" ], [ "Seppänen", "Jarno", "" ] ]
2012.12192
Liang Ma
Liang Ma
Query Answering via Decentralized Search
Updated author list
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Expert networks are formed by a group of expert-professionals with different specialties to collaboratively resolve specific queries posted to the network. In such networks, when a query reaches an expert who does not have sufficient expertise, this query needs to be routed to other experts for further processing until it is completely solved; therefore, query answering efficiency is sensitive to the underlying query routing mechanism being used. Among all possible query routing mechanisms, decentralized search, operating purely on each expert's local information without any knowledge of the network's global structure, represents the most basic and scalable routing mechanism, which is applicable to any network scenario, even in dynamic networks. However, there is still a lack of fundamental understanding of the efficiency of decentralized search in expert networks. In this regard, we investigate decentralized search by quantifying its performance under a variety of network settings. Our key findings reveal the existence of network conditions under which decentralized search can achieve significantly short query routing paths (i.e., between $O(\log n)$ and $O(\log^2 n)$ hops, $n$: total number of experts in the network). Based on such a theoretical foundation, we further study how the unique properties of decentralized search in expert networks are related to the anecdotal small-world phenomenon. In addition, we demonstrate that decentralized search is robust against estimation errors introduced by misinterpreting the required expertise levels. To the best of our knowledge, this is the first work studying the fundamental behaviors of decentralized search in expert networks. The developed performance bounds, confirmed by real datasets, are able to assist in predicting network performance and designing complex expert networks.
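To make the routing mechanism in the abstract above concrete, here is a minimal, hypothetical Python sketch of greedy decentralized query routing: each expert uses only its local view (its neighbours' expertise levels) and forwards the query to the unvisited neighbour whose expertise is closest to the required level. The network representation, field names, and the `required_level` parameter are assumptions for illustration, not the authors' formal model.

```python
def decentralized_route(network, start, required_level, max_hops=50):
    """Greedily route a query through an expert network using only each
    expert's local information.

    network: dict mapping expert id -> {"level": int, "neighbours": list}
    start: expert id where the query enters the network
    required_level: minimum expertise needed to resolve the query
    Returns the routing path, or None if the query could not be resolved.
    """
    path = [start]
    current = start
    for _ in range(max_hops):
        if network[current]["level"] >= required_level:
            return path  # query resolved by the current expert
        # Local decision: forward to the unvisited neighbour whose
        # expertise level is closest to the required level.
        candidates = [n for n in network[current]["neighbours"] if n not in path]
        if not candidates:
            return None  # dead end: no unvisited neighbours
        current = min(candidates,
                      key=lambda n: abs(network[n]["level"] - required_level))
        path.append(current)
    return None  # hop budget exhausted


# Tiny synthetic example: a chain of experts with increasing expertise.
if __name__ == "__main__":
    net = {i: {"level": i, "neighbours": [i - 1, i + 1]} for i in range(1, 9)}
    net[0] = {"level": 0, "neighbours": [1]}
    net[9] = {"level": 9, "neighbours": [8]}
    print(decentralized_route(net, start=0, required_level=7))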
[ { "version": "v1", "created": "Fri, 18 Dec 2020 14:46:49 GMT" }, { "version": "v2", "created": "Sat, 26 Dec 2020 22:26:13 GMT" } ]
1,609,200,000,000
[ [ "Ma", "Liang", "" ] ]
2012.12218
Sein Minn
Sein Minn
BKT-LSTM: Efficient Student Modeling for knowledge tracing and student performance prediction
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, we have seen a rapid rise in the usage of online educational platforms, and personalized education has become crucially important for future learning environments. Knowledge tracing (KT) refers to detecting students' knowledge states and predicting their future performance given their past outcomes, in order to provide adaptive solutions for Intelligent Tutoring Systems (ITS). Bayesian Knowledge Tracing (BKT) is a model that captures the mastery level of each skill with psychologically meaningful parameters and is widely used in successful tutoring systems. However, it is unable to detect learning transfer across skills because each skill model is learned independently, and it shows lower efficiency in student performance prediction. Recent KT models based on deep neural networks show impressive predictive power, but this comes at a price: the tens of thousands of parameters in a neural network cannot provide a psychologically meaningful interpretation that reflects cognitive theory. In this paper, we propose an efficient student model called BKT-LSTM. It contains three meaningful components: individual \textit{skill mastery} assessed by BKT, \textit{ability profile} (learning transfer across skills) detected by k-means clustering, and \textit{problem difficulty}. All these components are taken into account when predicting a student's future performance by leveraging the predictive power of an LSTM. BKT-LSTM outperforms state-of-the-art student models in student performance prediction by considering these meaningful features instead of the binary values of a student's past interactions used in DKT. We also conduct ablation studies on each BKT-LSTM component to examine its value, and each component shows a significant contribution to student performance prediction. Thus, the model has potential for providing adaptive and personalized instruction in real-world educational systems.
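Since the abstract above builds on Bayesian Knowledge Tracing, a minimal sketch of the standard BKT mastery update may help the reader; the parameter values below are illustrative placeholders, and the LSTM and clustering components of BKT-LSTM are not reproduced here.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
    """One step of standard Bayesian Knowledge Tracing for a single skill.

    p_mastery: prior probability that the skill is already mastered, P(L_t)
    correct:   whether the observed answer was correct (True/False)
    Returns the posterior mastery estimate P(L_{t+1}) after learning.
    """
    if correct:
        # P(L_t | correct): mastered-and-no-slip vs. unmastered-and-guessed
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        # P(L_t | incorrect): mastered-but-slipped vs. unmastered-and-no-guess
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    # Apply the learning transition: the student may acquire the skill.
    return posterior + (1 - posterior) * p_transit


# Trace a short sequence of observed outcomes for one skill.
p = 0.3  # initial mastery estimate P(L_0)
for outcome in [True, False, True, True]:
    p = bkt_update(p, outcome)
    print(round(p, 3))
```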
[ { "version": "v1", "created": "Tue, 22 Dec 2020 18:05:36 GMT" }, { "version": "v2", "created": "Wed, 6 Jan 2021 03:46:09 GMT" }, { "version": "v3", "created": "Tue, 8 Jun 2021 16:03:53 GMT" } ]
1,623,196,800,000
[ [ "Minn", "Sein", "" ] ]
2012.12262
Mohammad Reza Davahli
Mohammad Reza Davahli
The Last State of Artificial Intelligence in Project Management
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Artificial intelligence (AI) has been used to advance different fields, such as education, healthcare, and finance. However, the application of AI in the field of project management (PM) has not progressed equally. This paper reports on a systematic review of published studies investigating the application of AI in PM. The systematic review identified relevant papers using the Web of Science, Science Direct, and Google Scholar databases. Of the 652 articles found, 58 met the predefined criteria and were included in the review. Included papers were classified along the following dimensions: PM knowledge areas, PM processes, and AI techniques. The results indicated that the application of AI in PM is in its early stages and that AI models have not yet been applied to multiple PM processes, especially in the process groups of project stakeholder management, project procurement management, and project communication management. However, the most popular PM processes among the included papers were project effort prediction and cost estimation, and the most popular AI techniques were support vector machines, neural networks, and genetic algorithms.
[ { "version": "v1", "created": "Wed, 16 Dec 2020 05:10:08 GMT" } ]
1,608,681,600,000
[ [ "Davahli", "Mohammad Reza", "" ] ]
2012.12335
Carlos N\'u\~nez Molina
Carlos N\'u\~nez-Molina, Vladislav Nikolov, Ignacio Vellido, Juan Fern\'andez-Olivares
Goal Reasoning by Selecting Subgoals with Deep Q-Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose a goal reasoning method which learns to select subgoals with Deep Q-Learning in order to decrease the load of a planner when faced with scenarios with tight time restrictions, such as online execution systems. We have designed a CNN-based goal selection module and trained it on a standard video game environment, testing it on different games (planning domains) and levels (planning problems) to measure its generalization abilities. When we compare its performance with that of a satisficing planner, the results show that both approaches are able to find plans of good quality, but our method greatly decreases planning time. We conclude that our approach can be successfully applied to different types of domains (games), and that it shows good generalization properties when evaluated on new levels (problems) of the same game (domain).
[ { "version": "v1", "created": "Tue, 22 Dec 2020 20:12:29 GMT" } ]
1,608,768,000,000
[ [ "Núñez-Molina", "Carlos", "" ], [ "Nikolov", "Vladislav", "" ], [ "Vellido", "Ignacio", "" ], [ "Fernández-Olivares", "Juan", "" ] ]
2012.12588
Jean-Guy Mailly
Jean-Guy Mailly and Julien Rossit
Stability in Abstract Argumentation
7 pages, 7 figures, accepted to the 18th International Workshop on Non-Monotonic Reasoning (NMR 2020)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of stability in a structured argumentation setup characterizes situations where the acceptance status associated with a given literal will not be impacted by any future evolution of this setup. In this paper, we abstract away from the logical structure of arguments, and we transpose this notion of stability to the context of Dungean argumentation frameworks. In particular, we show how this problem can be translated into reasoning with Argument-Incomplete AFs. Then we provide preliminary complexity results for stability under four prominent semantics, in the case of both credulous and skeptical reasoning. Finally, we illustrate to what extent this notion can be useful with an application to argument-based negotiation.
[ { "version": "v1", "created": "Wed, 23 Dec 2020 10:34:38 GMT" } ]
1,608,768,000,000
[ [ "Mailly", "Jean-Guy", "" ], [ "Rossit", "Julien", "" ] ]
2012.12634
Simin Liu
Simin Liu
Overview of FPGA deep learning acceleration based on convolutional neural network
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In recent years, deep learning has become more and more mature, and convolutional neural networks, as a commonly used deep learning algorithm, have been widely applied to various visual tasks. In the past, research based on deep learning algorithms mainly relied on hardware such as GPUs and CPUs. However, with the increasing development of FPGAs (field-programmable gate arrays), they have become a main hardware platform for implementing various deep learning algorithms based on convolutional neural networks. This review article introduces the theories and algorithms related to convolution, summarizes the application scenarios of several existing FPGA technologies based on convolutional neural networks, and mainly focuses on accelerator applications. At the same time, it points out that some accelerators under-utilize logic resources or memory bandwidth and therefore cannot achieve the best performance.
[ { "version": "v1", "created": "Wed, 23 Dec 2020 12:44:24 GMT" } ]
1,608,768,000,000
[ [ "Liu", "Simin", "" ] ]
2012.12732
Giulio Mazzi
Giulio Mazzi, Alberto Castellini, Alessandro Farinelli
Identification of Unexpected Decisions in Partially Observable Monte-Carlo Planning: a Rule-Based Approach
AAMAS 2021, 3-7 May 2021, London-UK (Virtual)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Partially Observable Monte-Carlo Planning (POMCP) is a powerful online algorithm able to generate approximate policies for large Partially Observable Markov Decision Processes. The online nature of this method supports scalability by avoiding complete policy representation. The lack of an explicit representation however hinders interpretability. In this work, we propose a methodology based on Satisfiability Modulo Theory (SMT) for analyzing POMCP policies by inspecting their traces, namely sequences of belief-action-observation triplets generated by the algorithm. The proposed method explores local properties of policy behavior to identify unexpected decisions. We propose an iterative process of trace analysis consisting of three main steps, i) the definition of a question by means of a parametric logical formula describing (probabilistic) relationships between beliefs and actions, ii) the generation of an answer by computing the parameters of the logical formula that maximize the number of satisfied clauses (solving a MAX-SMT problem), iii) the analysis of the generated logical formula and the related decision boundaries for identifying unexpected decisions made by POMCP with respect to the original question. We evaluate our approach on Tiger, a standard benchmark for POMDPs, and a real-world problem related to mobile robot navigation. Results show that the approach can exploit human knowledge on the domain, outperforming state-of-the-art anomaly detection methods in identifying unexpected decisions. An improvement of the Area Under Curve up to 47\% has been achieved in our tests.
[ { "version": "v1", "created": "Wed, 23 Dec 2020 15:09:28 GMT" }, { "version": "v2", "created": "Wed, 28 Apr 2021 14:16:54 GMT" } ]
1,619,654,400,000
[ [ "Mazzi", "Giulio", "" ], [ "Castellini", "Alberto", "" ], [ "Farinelli", "Alessandro", "" ] ]
2012.13026
Siqi Wang
Xiren Zhou and Siqi Wang and Ruisheng Diao and Desong Bian and Jiahui Duan and Di Shi
Rethink AI-based Power Grid Control: Diving Into Algorithm Design
Accepted by 34th NeurIPS Ml4eng Workshop, 2020
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, deep reinforcement learning (DRL)-based approaches have shown promise in solving complex decision and control problems in the power engineering domain. In this paper, we present an in-depth analysis of DRL-based voltage control from the aspects of algorithm selection, state space representation, and reward engineering. To resolve observed issues, we propose a novel imitation learning-based approach to directly map power grid operating points to effective actions without any interim reinforcement learning process. The performance results demonstrate that the proposed approach has strong generalization ability with much less training time. The agent trained by imitation learning is effective and robust in solving the voltage control problem and outperforms the former RL agents.
[ { "version": "v1", "created": "Wed, 23 Dec 2020 23:38:41 GMT" } ]
1,608,854,400,000
[ [ "Zhou", "Xiren", "" ], [ "Wang", "Siqi", "" ], [ "Diao", "Ruisheng", "" ], [ "Bian", "Desong", "" ], [ "Duan", "Jiahui", "" ], [ "Shi", "Di", "" ] ]
2012.13037
Daniel Kasenberg
Vasanth Sarathy, Daniel Kasenberg, Shivam Goel, Jivko Sinapov, Matthias Scheutz
SPOTTER: Extending Symbolic Planning Operators through Targeted Reinforcement Learning
Accepted to AAMAS 2021
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Symbolic planning models allow decision-making agents to sequence actions in arbitrary ways to achieve a variety of goals in dynamic domains. However, they are typically handcrafted and tend to require precise formulations that are not robust to human error. Reinforcement learning (RL) approaches do not require such models, and instead learn domain dynamics by exploring the environment and collecting rewards. However, RL approaches tend to require millions of episodes of experience and often learn policies that are not easily transferable to other tasks. In this paper, we address one aspect of the open problem of integrating these approaches: how can decision-making agents resolve discrepancies in their symbolic planning models while attempting to accomplish goals? We propose an integrated framework named SPOTTER that uses RL to augment and support ("spot") a planning agent by discovering new operators needed by the agent to accomplish goals that are initially unreachable for the agent. SPOTTER outperforms pure-RL approaches while also discovering transferable symbolic knowledge and does not require supervision, successful plan traces or any a priori knowledge about the missing planning operator.
[ { "version": "v1", "created": "Thu, 24 Dec 2020 00:31:02 GMT" } ]
1,608,854,400,000
[ [ "Sarathy", "Vasanth", "" ], [ "Kasenberg", "Daniel", "" ], [ "Goel", "Shivam", "" ], [ "Sinapov", "Jivko", "" ], [ "Scheutz", "Matthias", "" ] ]
2012.13136
Naeha Sharif
Naeha Sharif and Lyndon White and Mohammed Bennamoun and Wei Liu and Syed Afaq Ali Shah
LCEval: Learned Composite Metric for Caption Evaluation
18 pages
International Journal of Computer Vision (October 2019)
10.1007/s11263-019-01206-z
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic evaluation metrics hold a fundamental importance in the development and fine-grained analysis of captioning systems. While current evaluation metrics tend to achieve an acceptable correlation with human judgements at the system level, they fail to do so at the caption level. In this work, we propose a neural network-based learned metric to improve caption-level caption evaluation. To get a deeper insight into the parameters which impact a learned metric's performance, this paper investigates the relationship between different linguistic features and the caption-level correlation of the learned metrics. We also compare metrics trained with different training examples to measure the variations in their evaluation. Moreover, we perform a robustness analysis, which highlights the sensitivity of learned and handcrafted metrics to various sentence perturbations. Our empirical analysis shows that our proposed metric not only outperforms the existing metrics in terms of caption-level correlation but also shows a strong system-level correlation against human assessments.
[ { "version": "v1", "created": "Thu, 24 Dec 2020 06:38:24 GMT" } ]
1,608,854,400,000
[ [ "Sharif", "Naeha", "" ], [ "White", "Lyndon", "" ], [ "Bennamoun", "Mohammed", "" ], [ "Liu", "Wei", "" ], [ "Shah", "Syed Afaq Ali", "" ] ]
2012.13204
Nassim Dehouche
Nassim Dehouche
Predicting Seminal Quality with the Dominance-Based Rough Sets Approach
null
ICIC Express Letters Volume 14, Number 7, July 2020
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The paper relies on the clinical data of a previously published study. We identify two very questionable assumptions of said work, namely confusing evidence of absence and absence of evidence, and neglecting the ordinal nature of attributes' domains. We then show that using an adequate ordinal methodology such as the dominance-based rough sets approach (DRSA) can significantly improve the predictive accuracy of the expert system, resulting in almost complete accuracy for a dataset of 100 instances. Beyond the performance of DRSA in solving the diagnosis problem at hand, these results suggest the inadequacy and triviality of the underlying dataset. We provide links to open data from the UCI machine learning repository to allow for an easy verification/refutation of the claims made in this paper.
[ { "version": "v1", "created": "Thu, 24 Dec 2020 11:45:32 GMT" } ]
1,608,854,400,000
[ [ "Dehouche", "Nassim", "" ] ]
2012.13300
Abhishek Dubey
Geoffrey Pettet and Ayan Mukhopadhyay and Mykel Kochenderfer and Abhishek Dubey
Hierarchical Planning for Resource Allocation in Emergency Response Systems
Accepted for publication in the proceedings of the 12th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS-2021)
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
A classical problem in city-scale cyber-physical systems (CPS) is resource allocation under uncertainty. Typically, such problems are modeled as Markov (or semi-Markov) decision processes. While online, offline, and decentralized approaches have been applied to such problems, they have difficulty scaling to large decision problems. We present a general approach to hierarchical planning that leverages structure in city-level CPS problems for resource allocation under uncertainty. We use the emergency response as a case study and show how a large resource allocation problem can be split into smaller problems. We then create a principled framework for solving the smaller problems and tackling the interaction between them. Finally, we use real-world data from Nashville, Tennessee, a major metropolitan area in the United States, to validate our approach. Our experiments show that the proposed approach outperforms state-of-the-art approaches used in the field of emergency response.
[ { "version": "v1", "created": "Thu, 24 Dec 2020 15:55:23 GMT" }, { "version": "v2", "created": "Thu, 4 Mar 2021 03:17:15 GMT" } ]
1,614,902,400,000
[ [ "Pettet", "Geoffrey", "" ], [ "Mukhopadhyay", "Ayan", "" ], [ "Kochenderfer", "Mykel", "" ], [ "Dubey", "Abhishek", "" ] ]
2012.13315
Ellen Vitercik
Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik
Generalization in portfolio-based algorithm selection
AAAI 2021
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Portfolio-based algorithm selection has seen tremendous practical success over the past two decades. This algorithm configuration procedure works by first selecting a portfolio of diverse algorithm parameter settings, and then, on a given problem instance, using an algorithm selector to choose a parameter setting from the portfolio with strong predicted performance. Oftentimes, both the portfolio and the algorithm selector are chosen using a training set of typical problem instances from the application domain at hand. In this paper, we provide the first provable guarantees for portfolio-based algorithm selection. We analyze how large the training set should be to ensure that the resulting algorithm selector's average performance over the training set is close to its future (expected) performance. This involves analyzing three key reasons why these two quantities may diverge: 1) the learning-theoretic complexity of the algorithm selector, 2) the size of the portfolio, and 3) the learning-theoretic complexity of the algorithm's performance as a function of its parameters. We introduce an end-to-end learning-theoretic analysis of portfolio construction and algorithm selection together. We prove that if the portfolio is large, overfitting is inevitable, even with an extremely simple algorithm selector. With experiments, we illustrate a tradeoff exposed by our theoretical analysis: as we increase the portfolio size, we can hope to include a well-suited parameter setting for every possible problem instance, but it becomes impossible to avoid overfitting.
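To make the portfolio-plus-selector pipeline described above concrete, here is a small hypothetical Python sketch: a portfolio of parameter settings is built greedily on training instances, and a 1-nearest-neighbour selector chooses a setting for a new instance based on its features. The `runtime` oracle, the `features` function, and the greedy construction are illustrative assumptions and do not reproduce the paper's learning-theoretic analysis.

```python
import math

def build_portfolio(settings, train_instances, runtime, k):
    """Greedily pick k parameter settings minimising total training runtime,
    where each instance is credited with the best setting chosen so far.
    `runtime(setting, instance)` is a user-supplied performance oracle."""
    portfolio = []
    for _ in range(k):
        best = min((s for s in settings if s not in portfolio),
                   key=lambda s: sum(min(runtime(p, x) for p in portfolio + [s])
                                     for x in train_instances))
        portfolio.append(best)
    return portfolio

def select(portfolio, train_instances, runtime, features, new_instance):
    """1-nearest-neighbour selector: reuse the portfolio setting that was
    best on the training instance closest to the new instance in feature
    space. `features(instance)` returns a numeric feature vector."""
    def dist(a, b):
        return math.dist(features(a), features(b))
    nearest = min(train_instances, key=lambda x: dist(x, new_instance))
    return min(portfolio, key=lambda p: runtime(p, nearest))
```

Even in this toy form, the tradeoff the abstract highlights is visible: enlarging `k` gives the selector more options but also more ways to latch onto idiosyncrasies of the training instances.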
[ { "version": "v1", "created": "Thu, 24 Dec 2020 16:33:17 GMT" } ]
1,608,854,400,000
[ [ "Balcan", "Maria-Florina", "" ], [ "Sandholm", "Tuomas", "" ], [ "Vitercik", "Ellen", "" ] ]
2012.13387
Samira Ghodratnama
Samira Ghodratnama and Mehrdad Zakershahrak and Fariborz Sobhanmanesh
Adaptive Summaries: A Personalized Concept-based Summarization Approach by Learning from Users' Feedback
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Exploring a tremendous amount of data efficiently to make a decision, similar to answering a complicated question, is challenging in many real-world application scenarios. In this context, automatic summarization has substantial importance as it provides the foundation for big data analytics. Traditional summarization approaches optimize the system to produce a single short, static summary that fits all users, which does not consider the subjectivity aspect of summarization, i.e., what is deemed valuable by different users, making these approaches impractical in real-world use cases. This paper proposes an interactive concept-based summarization model, called Adaptive Summaries, that helps users make their desired summary instead of producing a single inflexible summary. The system gradually learns from the information users provide while they interact with it by giving feedback in an iterative loop. Users can accept or reject a concept for inclusion in the summary, along with the importance of that concept from their perspective and the confidence level of their feedback. The proposed approach can guarantee interactive speed to keep the user engaged in the process. Furthermore, it eliminates the need for reference summaries, which is a challenging issue for summarization tasks. Evaluations show that Adaptive Summaries helps users make high-quality summaries based on their preferences by maximizing the user-desired content in the generated summaries.
[ { "version": "v1", "created": "Thu, 24 Dec 2020 18:27:50 GMT" }, { "version": "v2", "created": "Sun, 19 Dec 2021 02:05:08 GMT" } ]
1,640,044,800,000
[ [ "Ghodratnama", "Samira", "" ], [ "Zakershahrak", "Mehrdad", "" ], [ "Sobhanmanesh", "Fariborz", "" ] ]
2012.13400
Athirai A. Irissappane
Athirai A. Irissappane, Hanfei Yu, Yankun Shen, Anubha Agrawal, Gray Stanton
Leveraging GPT-2 for Classifying Spam Reviews with Limited Labeled Data via Adversarial Training
arXiv admin note: text overlap with arXiv:1903.08289
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online reviews are a vital source of information when purchasing a service or a product. Opinion spammers manipulate these reviews, deliberately altering the overall perception of the service. Though there exists a corpus of online reviews, only a few have been labeled as spam or non-spam, making it difficult to train spam detection models. We propose an adversarial training mechanism leveraging the capabilities of Generative Pre-Training 2 (GPT-2) for classifying opinion spam with limited labeled data and a large set of unlabeled data. Experiments on TripAdvisor and YelpZip datasets show that the proposed model outperforms state-of-the-art techniques by at least 7% in terms of accuracy when labeled data is limited. The proposed model can also generate synthetic spam/non-spam reviews with reasonable perplexity, thereby, providing additional labeled data during training.
[ { "version": "v1", "created": "Thu, 24 Dec 2020 18:59:51 GMT" } ]
1,608,854,400,000
[ [ "Irissappane", "Athirai A.", "" ], [ "Yu", "Hanfei", "" ], [ "Shen", "Yankun", "" ], [ "Agrawal", "Anubha", "" ], [ "Stanton", "Gray", "" ] ]
2012.14474
Benjamin Goertzel
Ben Goertzel
Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is argued that 4-valued paraconsistent truth values (called here "p-bits") can serve as a conceptual, mathematical and practical foundation for highly AI-relevant forms of probabilistic logic and probabilistic programming and concept formation. First it is shown that appropriate averaging-across-situations and renormalization of 4-valued p-bits operating in accordance with Constructible Duality (CD) logic yields PLN (Probabilistic Logic Networks) strength-and-confidence truth values. Then variations on the Curry-Howard correspondence are used to map these paraconsistent and probabilistic logics into probabilistic types suitable for use within dependent type based programming languages. Zach Weber's paraconsistent analysis of the sorites paradox is extended to form a paraconsistent / probabilistic / fuzzy analysis of concept boundaries; and a paraconsistent version of concept formation via Formal Concept Analysis is presented, building on a definition of fuzzy property-value degrees in terms of relative entropy on paraconsistent probability distributions. These general points are fleshed out via reference to the realization of probabilistic reasoning and programming and concept formation in the OpenCog AGI framework which is centered on collaborative multi-algorithm updating of a common knowledge metagraph.
[ { "version": "v1", "created": "Mon, 28 Dec 2020 20:14:49 GMT" }, { "version": "v2", "created": "Thu, 14 Jan 2021 17:51:17 GMT" } ]
1,610,668,800,000
[ [ "Goertzel", "Ben", "" ] ]
2012.14762
Davide Mario Longo
Georg Gottlob, Matthias Lanzinger, Davide Mario Longo, Cem Okulmus and Reinhard Pichler
The HyperTrac Project: Recent Progress and Future Research Directions on Hypergraph Decompositions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constraint Satisfaction Problems (CSPs) play a central role in many applications in Artificial Intelligence and Operations Research. In general, solving CSPs is NP-complete. The structure of CSPs is best described by hypergraphs. Therefore, various forms of hypergraph decompositions have been proposed in the literature to identify tractable fragments of CSPs. However, also the computation of a concrete hypergraph decomposition is a challenging task in itself. In this paper, we report on recent progress in the study of hypergraph decompositions and we outline several directions for future research.
[ { "version": "v1", "created": "Tue, 29 Dec 2020 14:21:54 GMT" } ]
1,609,459,200,000
[ [ "Gottlob", "Georg", "" ], [ "Lanzinger", "Matthias", "" ], [ "Longo", "Davide Mario", "" ], [ "Okulmus", "Cem", "" ], [ "Pichler", "Reinhard", "" ] ]
2012.15835
Robert B. Allen
Robert B. Allen
Semantic Modeling with SUMO
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
We explore using the Suggested Upper Merged Ontology (SUMO) to develop a semantic simulation. We provide two proof-of-concept demonstrations modeling transitions in a simulated gasoline engine using a general-purpose programming language. Rather than focusing on computationally highly intensive techniques, we explore a less computationally intensive approach related to familiar software engineering testing procedures. In addition, we propose structured representations of terms based on linguistic approaches to lexicography.
[ { "version": "v1", "created": "Thu, 31 Dec 2020 18:53:59 GMT" }, { "version": "v2", "created": "Tue, 5 Jan 2021 14:53:38 GMT" }, { "version": "v3", "created": "Tue, 12 Jan 2021 18:13:42 GMT" } ]
1,610,496,000,000
[ [ "Allen", "Robert B.", "" ] ]
2101.00058
Mark Law
Mark Law
Conflict-driven Inductive Logic Programming
Under consideration in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of Inductive Logic Programming (ILP) is to learn a program that explains a set of examples. Until recently, most research on ILP targeted learning Prolog programs. The ILASP system instead learns Answer Set Programs (ASP). Learning such expressive programs widens the applicability of ILP considerably; for example, enabling preference learning, learning common-sense knowledge, including defaults and exceptions, and learning non-deterministic theories. Early versions of ILASP can be considered meta-level ILP approaches, which encode a learning task as a logic program and delegate the search to an ASP solver. More recently, ILASP has shifted towards a new method, inspired by conflict-driven SAT and ASP solvers. The fundamental idea of the approach, called Conflict-driven ILP (CDILP), is to iteratively interleave the search for a hypothesis with the generation of constraints which explain why the current hypothesis does not cover a particular example. These coverage constraints allow ILASP to rule out not just the current hypothesis, but an entire class of hypotheses that do not satisfy the coverage constraint. This paper formalises the CDILP approach and presents the ILASP3 and ILASP4 systems for CDILP, which are demonstrated to be more scalable than previous ILASP systems, particularly in the presence of noise. Under consideration in Theory and Practice of Logic Programming (TPLP).
[ { "version": "v1", "created": "Thu, 31 Dec 2020 20:24:28 GMT" }, { "version": "v2", "created": "Thu, 23 Dec 2021 10:17:55 GMT" }, { "version": "v3", "created": "Fri, 14 Jan 2022 19:21:47 GMT" } ]
1,642,550,400,000
[ [ "Law", "Mark", "" ] ]
2101.00280
Joar Skalse
Joar Skalse
A General Counterexample to Any Decision Theory and Some Responses
4 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper I present an argument and a general schema which can be used to construct a problem case for any decision theory, in a way that could be taken to show that one cannot formulate a decision theory that is never outperformed by any other decision theory. I also present and discuss a number of possible responses to this argument. One of these responses raises the question of what it means for two decision problems to be "equivalent" in the relevant sense, and gives an answer to this question which would invalidate the first argument. However, this position would have further consequences for how we compare different decision theories in decision problems already discussed in the literature (including e.g. Newcomb's problem).
[ { "version": "v1", "created": "Fri, 1 Jan 2021 17:47:11 GMT" } ]
1,609,804,800,000
[ [ "Skalse", "Joar", "" ] ]
2101.00286
Valentina Anita Carriero
Valentina Anita Carriero, Aldo Gangemi, Andrea Giovanni Nuzzolese, Valentina Presutti
An Ontology Design Pattern for representing Recurrent Situations
null
null
10.3233/SSW210013
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we present an Ontology Design Pattern for representing situations that recur at regular periods and share some invariant factors, which unify them conceptually: we refer to this set of recurring situations as recurrent situation series. The proposed pattern appears to be foundational, since it can be generalised for modelling the top-level domain-independent concept of recurrence, which is strictly associated with invariance. The pattern reuses other foundational patterns such as Collection, Description and Situation, Classification, Sequence. Indeed, a recurrent situation series is formalised as both a collection of situations occurring regularly over time and unified according to some properties that are common to all the members, and a situation itself, which provides a relational context to its members that satisfy a reference description. Besides including some exemplifying instances of this pattern, we show how it has been implemented and specialised to model recurrent cultural events and ceremonies in ArCo, the Knowledge Graph of Italian cultural heritage.
[ { "version": "v1", "created": "Fri, 1 Jan 2021 18:20:13 GMT" } ]
1,654,560,000,000
[ [ "Carriero", "Valentina Anita", "" ], [ "Gangemi", "Aldo", "" ], [ "Nuzzolese", "Andrea Giovanni", "" ], [ "Presutti", "Valentina", "" ] ]
2101.00675
Mohamad Alissa
Mohamad Alissa, Issa Haddad, Jonathan Meyer, Jade Obeid, Nicolas Wiecek, Sukrit Wongariyakavee
Sentiment Analysis for Open Domain Conversational Agent
9 pages, 3 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The applicability of common sentiment analysis models to open-domain human-robot interaction is investigated within this paper. The models are used on a dataset specific to user interaction with the Alana system (an Alexa Prize system) in order to determine which would be more appropriate for the task of identifying sentiment when a user interacts with a non-human-driven socialbot. After identifying a model, various improvements are attempted and detailed prior to integration into the Alana system. The study showed that a Random Forest model with 25 trees, trained on the dataset specific to user interaction with the Alana system combined with the dataset present in NLTK Vader, outperforms the other models. The new system (called 'Rob') matches the sentiment of its output utterance with the sentiment of the user's utterance. This method is expected to improve user experience because it builds upon overall sentiment detection, making it seem that the new system sympathises with the user's feelings. Furthermore, the results obtained from user feedback confirm our expectation.
[ { "version": "v1", "created": "Sun, 3 Jan 2021 18:03:52 GMT" }, { "version": "v2", "created": "Thu, 15 Jul 2021 23:33:53 GMT" } ]
1,626,652,800,000
[ [ "Alissa", "Mohamad", "" ], [ "Haddad", "Issa", "" ], [ "Meyer", "Jonathan", "" ], [ "Obeid", "Jade", "" ], [ "Wiecek", "Nicolas", "" ], [ "Wongariyakavee", "Sukrit", "" ] ]
2101.00692
Guillem Franc\`es
Guillem Franc\`es, Blai Bonet, Hector Geffner
Learning General Policies from Small Examples Without Supervision
AAAI 2021, version extended with appendix containing full proofs and experimental details
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generalized planning is concerned with the computation of general policies that solve multiple instances of a planning domain all at once. It has been recently shown that these policies can be computed in two steps: first, a suitable abstraction in the form of a qualitative numerical planning problem (QNP) is learned from sample plans; then, the general policies are obtained from the learned QNP using a planner. In this work, we introduce an alternative approach for computing more expressive general policies which does not require sample plans or a QNP planner. The new formulation is very simple and can be cast in terms that are more standard in machine learning: a large but finite pool of features is defined from the predicates in the planning examples using a general grammar, and a small subset of features is sought for separating "good" from "bad" state transitions, and goals from non-goals. The problems of finding such a "separating surface" while labeling the transitions as "good" or "bad" are jointly addressed as a single combinatorial optimization problem expressed as a Weighted Max-SAT problem. The advantage of looking for the simplest policy in the given feature space that solves the given examples, possibly non-optimally, is that many domains have no general, compact policies that are optimal. The approach yields general policies for a number of benchmark domains.
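The separation problem sketched in the abstract above (finding a small feature subset that distinguishes "good" from "bad" transitions) can be illustrated with a brute-force stand-in for the Weighted Max-SAT formulation. This toy version assumes Boolean feature valuations and simply searches subsets by increasing size, which is only feasible for very small feature pools and is not the paper's encoding.

```python
from itertools import combinations

def separating_features(feature_values, good, bad):
    """Return the smallest feature subset whose joint valuation differs
    between every 'good' and every 'bad' transition.

    feature_values: dict  transition id -> tuple of Boolean feature values
    good, bad:      lists of transition identifiers
    """
    num_features = len(next(iter(feature_values.values())))
    for size in range(1, num_features + 1):
        for subset in combinations(range(num_features), size):
            def project(t):
                return tuple(feature_values[t][i] for i in subset)
            if all(project(g) != project(b) for g in good for b in bad):
                return subset
    return None  # no separating subset exists in this feature pool


# Toy example with three Boolean features over four transitions.
vals = {"t1": (1, 0, 1), "t2": (1, 1, 1), "t3": (0, 0, 1), "t4": (0, 1, 0)}
print(separating_features(vals, good=["t1", "t2"], bad=["t3", "t4"]))
```

Preferring the smallest separating subset mirrors the abstract's point about seeking the simplest policy in the feature space, even when that policy is not optimal.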
[ { "version": "v1", "created": "Sun, 3 Jan 2021 19:44:13 GMT" }, { "version": "v2", "created": "Wed, 17 Feb 2021 19:52:39 GMT" } ]
1,613,692,800,000
[ [ "Francès", "Guillem", "" ], [ "Bonet", "Blai", "" ], [ "Geffner", "Hector", "" ] ]
2101.00774
Fengbin Zhu
Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, Tat-Seng Chua
Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-domain Question Answering (OpenQA) is an important task in Natural Language Processing (NLP), which aims to answer a question in the form of natural language based on large-scale unstructured documents. Recently, there has been a surge in the amount of research literature on OpenQA, particularly on techniques that integrate with neural Machine Reading Comprehension (MRC). While these research works have advanced performance to new heights on benchmark datasets, they have been rarely covered in existing surveys on QA systems. In this work, we review the latest research trends in OpenQA, with particular attention to systems that incorporate neural MRC techniques. Specifically, we begin with revisiting the origin and development of OpenQA systems. We then introduce modern OpenQA architecture named "Retriever-Reader" and analyze the various systems that follow this architecture as well as the specific techniques adopted in each of the components. We then discuss key challenges to developing OpenQA systems and offer an analysis of benchmarks that are commonly used. We hope our work would enable researchers to be informed of the recent advancement and also the open challenges in OpenQA research, so as to stimulate further progress in this field.
[ { "version": "v1", "created": "Mon, 4 Jan 2021 04:47:46 GMT" }, { "version": "v2", "created": "Fri, 23 Apr 2021 07:25:37 GMT" }, { "version": "v3", "created": "Sat, 8 May 2021 16:16:50 GMT" } ]
1,620,691,200,000
[ [ "Zhu", "Fengbin", "" ], [ "Lei", "Wenqiang", "" ], [ "Wang", "Chao", "" ], [ "Zheng", "Jianming", "" ], [ "Poria", "Soujanya", "" ], [ "Chua", "Tat-Seng", "" ] ]
2101.00843
Dennis Soemers
Cameron Browne and Dennis J. N. J. Soemers and Eric Piette
Strategic Features for General Games
Paper exactly as it appeared at KEG Workshop held at AAAI 2019
Proceedings of the 2nd Workshop on Knowledge Extraction from Games co-located with 33rd AAAI Conference on Artificial Intelligence (AAAI 2019)
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This short paper describes an ongoing research project that requires the automated self-play learning and evaluation of a large number of board games in digital form. We describe the approach we are taking to determine relevant features, for biasing MCTS playouts for arbitrary games played on arbitrary geometries. Benefits of our approach include efficient implementation, the potential to transfer learnt knowledge to new contexts, and the potential to explain strategic knowledge embedded in features in human-comprehensible terms.
[ { "version": "v1", "created": "Mon, 4 Jan 2021 09:30:07 GMT" } ]
1,609,804,800,000
[ [ "Browne", "Cameron", "" ], [ "Soemers", "Dennis J. N. J.", "" ], [ "Piette", "Eric", "" ] ]
2101.01067
Khan Md. Hasib
Md. Ashek-Al-Aziz, Sagar Mahmud, Md. Azizul Islam, Jubayer Al Mahmud, Khan Md. Hasib
A Comparative Study of AHP and Fuzzy AHP Method for Inconsistent Data
22 Pages, 9 Figures
International Journal of Sciences: Basic and Applied Research (IJSBAR), Volume 54 Issue 4, Year 2020, Page - 16 -37
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In various cases of decision analysis, two popular methods are used: the Analytical Hierarchical Process (AHP) and fuzzy-based AHP (Fuzzy AHP). Both methods deal with stochastic data and can determine a decision result through a Multi-Criteria Decision Making (MCDM) process. Obviously, the resulting values of the two methods are not the same even though the same set of data is fed into them. In this research work, we have tried to observe the similarities and dissimilarities between the two methods' outputs. Almost the same trends and fluctuations in the outputs have been observed for both methods on the same set of inconsistent input data. The up-and-down fluctuations of the two methods' outputs coincide in fifty percent of the cases.
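For readers unfamiliar with the mechanics behind the comparison above, here is a minimal Python sketch of the classical (crisp) AHP step: deriving a priority vector from a pairwise comparison matrix and checking its consistency ratio. The geometric-mean approximation and Saaty's random index values are standard, but the example matrix is made up, and the fuzzy variant is not shown.

```python
import numpy as np

def ahp_priorities(matrix):
    """Priority vector from a pairwise comparison matrix via the
    geometric-mean (row products) approximation, plus Saaty's consistency
    ratio CR (values above roughly 0.1 indicate inconsistent input)."""
    A = np.asarray(matrix, dtype=float)
    n = A.shape[0]
    weights = np.prod(A, axis=1) ** (1.0 / n)
    weights /= weights.sum()
    # Approximate the principal eigenvalue to measure consistency.
    lambda_max = np.mean((A @ weights) / weights)
    ci = (lambda_max - n) / (n - 1) if n > 2 else 0.0
    # Saaty's random index for n = 3..9; larger matrices use ~1.49.
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
                    7: 1.32, 8: 1.41, 9: 1.45}.get(n, 1.49)
    cr = ci / random_index if n > 2 else 0.0
    return weights, cr


# Example: a deliberately inconsistent 3x3 comparison matrix.
w, cr = ahp_priorities([[1, 3, 1/5],
                        [1/3, 1, 3],
                        [5, 1/3, 1]])
print(np.round(w, 3), round(cr, 3))
```

Running the sketch on an inconsistent matrix like the one above yields a consistency ratio well above 0.1, which is exactly the kind of inconsistent input the comparison in the abstract is concerned with.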
[ { "version": "v1", "created": "Wed, 23 Dec 2020 06:08:23 GMT" } ]
1,609,804,800,000
[ [ "Ashek-Al-Aziz", "Md.", "" ], [ "Mahmud", "Sagar", "" ], [ "Islam", "Md. Azizul", "" ], [ "Mahmud", "Jubayer Al", "" ], [ "Hasib", "Khan Md.", "" ] ]
2101.01510
Xiaowang Zhang
Peiyun Wu and Yunjie Wu and Linjuan Wu and Xiaowang Zhang and Zhiyong Feng
Modeling Global Semantics for Question Answering over Knowledge Bases
7 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic parsing, as an important approach to question answering over knowledge bases (KBQA), transforms a question into a complete query graph for further generating the correct logical query. Existing semantic parsing approaches mainly focus on relation matching, paying less attention to the underlying internal structure of questions (e.g., the dependencies and relations between all entities in a question) when selecting the query graph. In this paper, we present a relational graph convolutional network (RGCN)-based model, gRGCN, for semantic parsing in KBQA. gRGCN extracts the global semantics of questions and their corresponding query graphs, including structure semantics via RGCN and relational semantics (label representations of relations between entities) via a hierarchical relation attention mechanism. Experiments evaluated on benchmarks show that our model outperforms off-the-shelf models.
[ { "version": "v1", "created": "Tue, 5 Jan 2021 13:51:14 GMT" } ]
1,609,891,200,000
[ [ "Wu", "Peiyun", "" ], [ "Wu", "Yunjie", "" ], [ "Wu", "Linjuan", "" ], [ "Zhang", "Xiaowang", "" ], [ "Feng", "Zhiyong", "" ] ]
2101.01625
Devleena Das
Devleena Das, Siddhartha Banerjee, Sonia Chernova
Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery
null
null
10.1145/3434073.3444657
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the growing capabilities of intelligent systems, the integration of robots in our everyday life is increasing. However, when interacting in such complex human environments, the occasional failure of robotic systems is inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce a new type of explanation, which explains the cause of an unexpected failure during an agent's plan execution to non-experts. In order for error explanations to be meaningful, we investigate what types of information within a set of hand-scripted explanations are most helpful to non-experts for failure and solution identification. Additionally, we investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model, and generalized across environments. We investigate these questions in the context of a robot performing a pick-and-place manipulation task in a home environment. Our results show that explanations capturing the context of a failure and the history of past actions are the most effective for failure and solution identification among non-experts. Furthermore, through a second user evaluation, we verify that our model-generated explanations can generalize to an unseen office environment, and are just as effective as the hand-scripted explanations.
[ { "version": "v1", "created": "Tue, 5 Jan 2021 16:16:39 GMT" } ]
1,628,121,600,000
[ [ "Das", "Devleena", "" ], [ "Banerjee", "Siddhartha", "" ], [ "Chernova", "Sonia", "" ] ]
2101.01883
Takahisa Imagawa
Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka
Off-Policy Meta-Reinforcement Learning Based on Feature Embedding Spaces
14pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Meta-reinforcement learning (RL) addresses the problem of sample inefficiency in deep RL by using experience obtained in past tasks for a new task to be solved. However, most meta-RL methods require partially or fully on-policy data, i.e., they cannot reuse the data collected by past policies, which hinders the improvement of sample efficiency. To alleviate this problem, we propose a novel off-policy meta-RL method, embedding learning and evaluation of uncertainty (ELUE). An ELUE agent is characterized by the learning of a feature embedding space shared among tasks. It learns a belief model over the embedding space and a belief-conditional policy and Q-function. Then, for a new task, it collects data by the pretrained policy, and updates its belief based on the belief model. Thanks to the belief update, the performance can be improved with a small amount of data. In addition, it updates the parameters of the neural networks to adjust the pretrained relationships when there are enough data. We demonstrate that ELUE outperforms state-of-the-art meta RL methods through experiments on meta-RL benchmarks.
[ { "version": "v1", "created": "Wed, 6 Jan 2021 05:51:38 GMT" } ]
1,609,977,600,000
[ [ "Imagawa", "Takahisa", "" ], [ "Hiraoka", "Takuya", "" ], [ "Tsuruoka", "Yoshimasa", "" ] ]
2101.01953
Alexis de Colnet
Alexis de Colnet
A Lower Bound on DNNF Encodings of Pseudo-Boolean Constraints
8 pages, 10 pages including references
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two major considerations when encoding pseudo-Boolean (PB) constraints into SAT are the size of the encoding and its propagation strength, that is, the guarantee that it has a good behaviour under unit propagation. Several encodings with propagation strength guarantees rely upon prior compilation of the constraints into DNNF (decomposable negation normal form), BDD (binary decision diagram), or some other sub-variants. However it has been shown that there exist PB-constraints whose ordered BDD (OBDD) representations, and thus the inferred CNF encodings, all have exponential size. Since DNNFs are more succinct than OBDDs, preferring encodings via DNNF to avoid size explosion seems a legitimate choice. Yet in this paper, we prove the existence of PB-constraints whose DNNFs all require exponential size.
[ { "version": "v1", "created": "Wed, 6 Jan 2021 10:25:22 GMT" } ]
1,609,977,600,000
[ [ "de Colnet", "Alexis", "" ] ]
2101.02046
Junyi Li
Junyi Li, Tianyi Tang, Gaole He, Jinhao Jiang, Xiaoxuan Hu, Puzhao Xie, Zhipeng Chen, Zhuohao Yu, Wayne Xin Zhao, Ji-Rong Wen
TextBox: A Unified, Modularized, and Extensible Framework for Text Generation
9 pages, 2 figures, 4 tables. For our GitHub page, see https://github.com/RUCAIBox/TextBox
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we release an open-source library, called TextBox, to provide a unified, modularized, and extensible text generation framework. TextBox aims to support a broad set of text generation tasks and models. In our library, we implement 21 text generation models on 9 benchmark datasets, covering the categories of VAE, GAN, and pretrained language models. Meanwhile, our library maintains sufficient modularity and extensibility by properly decomposing the model architecture, inference, and learning process into highly reusable modules, which allows users to easily incorporate new models into our framework. The above features make TextBox specially suitable for researchers and practitioners to quickly reproduce baseline models and develop new models. TextBox is implemented based on PyTorch, and released under Apache License 2.0 at https://github.com/RUCAIBox/TextBox.
[ { "version": "v1", "created": "Wed, 6 Jan 2021 14:02:42 GMT" }, { "version": "v2", "created": "Thu, 7 Jan 2021 09:28:10 GMT" }, { "version": "v3", "created": "Mon, 19 Apr 2021 08:36:14 GMT" } ]
1,618,876,800,000
[ [ "Li", "Junyi", "" ], [ "Tang", "Tianyi", "" ], [ "He", "Gaole", "" ], [ "Jiang", "Jinhao", "" ], [ "Hu", "Xiaoxuan", "" ], [ "Xie", "Puzhao", "" ], [ "Chen", "Zhipeng", "" ], [ "Yu", "Zhuohao", "" ], [ "Zhao", "Wayne Xin", "" ], [ "Wen", "Ji-Rong", "" ] ]