id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2001.07578 | Nicholas Asher | Nicholas Asher, Soumya Paul, Chris Russell | Adequate and fair explanations | null | Machine Learning and Knowledge Extraction, eds. Andreas Holzinger,
Peter Kieseberg, A Min Tjoa, Edgar Weippl, Lecture Notes in Computer Science
12844, Springer, pp. 79-99, 2021 | 10.1007/978-3-030-84060-0 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explaining sophisticated machine-learning based systems is an important issue
at the foundations of AI. Recent efforts have shown various methods for
providing explanations. These approaches can be broadly divided into two
schools: those that provide a local and human-interpretable approximation of a
machine learning algorithm, and logical approaches that exactly characterise
one aspect of the decision. In this paper we focus upon the second school of
exact explanations with a rigorous logical foundation. There is an
epistemological problem with these exact methods. While they can furnish
complete explanations, such explanations may be too complex for humans to
understand or even to write down in human readable form. Interpretability
requires epistemically accessible explanations, explanations humans can grasp.
Yet what counts as a sufficiently complete, epistemically accessible explanation
still needs clarification. We do this here in terms of counterfactuals, following
[Wachter et al., 2017]. With counterfactual explanations, many of the
assumptions needed to provide a complete explanation are left implicit. To do
so, counterfactual explanations exploit the properties of a particular data
point or sample, and as such are also local as well as partial explanations. We
explore how to move from local partial explanations to what we call complete
local explanations and then to global ones. But to preserve accessibility we
argue for the need for partiality. This partiality makes it possible to hide
explicit biases present in the algorithm that may be injurious or unfair. We
investigate how easy it is to uncover these biases in providing complete and
fair explanations by exploiting the structure of the set of counterfactuals
providing a complete local explanation.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2020 14:42:51 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Aug 2021 08:55:22 GMT"
}
] | 1,629,763,200,000 | [
[
"Asher",
"Nicholas",
""
],
[
"Paul",
"Soumya",
""
],
[
"Russell",
"Chris",
""
]
] |
2001.08193 | Veronique Ventos | J Li, S Thepaut, V Ventos | StarAI: Reducing incompleteness in the game of Bridge using PLP | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bridge is a trick-taking card game requiring the ability to evaluate
probabilities, since it is a game of incomplete information in which each player
sees only their own cards. To choose a strategy, a player needs to gather
information about the hidden cards in the other players' hands. We present a
methodology allowing us to model a part of card playing in Bridge using
Probabilistic Logic Programming.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2020 18:27:51 GMT"
}
] | 1,579,737,600,000 | [
[
"Li",
"J",
""
],
[
"Thepaut",
"S",
""
],
[
"Ventos",
"V",
""
]
] |
2001.08372 | Andreas Hinterreiter | Andreas Hinterreiter and Christian Steinparz and Moritz Sch\"ofl and
Holger Stitz and Marc Streit | ProjectionPathExplorer: Exploring Visual Patterns in Projected
Decision-Making Paths | Corrected in-paper reference to accepted version; fixed outdated
links | ACM Trans. Interact. Intell. Syst. 11, 3-4, Article 22 (December
2021), 29 pages | 10.1145/3387165 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In problem-solving, a path towards solutions can be viewed as a sequence of
decisions. The decisions, made by humans or computers, describe a trajectory
through a high-dimensional representation space of the problem. By means of
dimensionality reduction, these trajectories can be visualized in
lower-dimensional space. Such embedded trajectories have previously been
applied to a wide variety of data, but analysis has focused almost exclusively
on the self-similarity of single trajectories. In contrast, we describe
patterns emerging from drawing many trajectories -- for different initial
conditions, end states, and solution strategies -- in the same embedding space.
We argue that general statements about the problem-solving tasks and solving
strategies can be made by interpreting these patterns. We explore and
characterize such patterns in trajectories resulting from human and
machine-made decisions in a variety of application domains: logic puzzles
(Rubik's cube), strategy games (chess), and optimization problems (neural
network training). We also discuss the importance of suitably chosen
representation spaces and similarity metrics for the embedding.
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2020 13:29:11 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Oct 2020 15:39:05 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Jul 2022 10:02:59 GMT"
}
] | 1,658,188,800,000 | [
[
"Hinterreiter",
"Andreas",
""
],
[
"Steinparz",
"Christian",
""
],
[
"Schöfl",
"Moritz",
""
],
[
"Stitz",
"Holger",
""
],
[
"Streit",
"Marc",
""
]
] |
2001.09293 | Gavin Rens | Gavin Rens, Jean-Fran\c{c}ois Raskin | Learning Non-Markovian Reward Models in MDPs | 18 pages, single column, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are situations in which an agent should receive rewards only after
having accomplished a series of previous tasks. In other words, the reward that
the agent receives is non-Markovian. One natural and quite general way to
represent history-dependent rewards is via a Mealy machine: a finite-state
automaton that produces output sequences (rewards in our case) from input
sequences (state/action observations in our case). In our formal setting, we
consider a Markov decision process (MDP) that models the dynamic of the
environment in which the agent evolves and a Mealy machine synchronised with
this MDP to formalise the non-Markovian reward function. While the MDP is known
by the agent, the reward function is unknown to the agent and must be learnt.
Learning non-Markovian reward functions is a challenge. Our approach to this
challenging problem is a careful combination of Angluin's L* active
learning algorithm to learn finite automata, testing techniques for
establishing conformance of finite model hypothesis and optimisation techniques
for computing optimal strategies in Markovian (immediate) reward MDPs. We also
show how our framework can be combined with classical heuristics such as Monte
Carlo Tree Search. We illustrate our algorithms and a preliminary
implementation on two typical examples for AI.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2020 10:51:42 GMT"
}
] | 1,580,169,600,000 | [
[
"Rens",
"Gavin",
""
],
[
"Raskin",
"Jean-François",
""
]
] |
2001.09398 | Wenjie Zhang | Wenjie Zhang, Zeyu Sun, Qihao Zhu, Ge Li, Shaowei Cai, Yingfei Xiong,
and Lu Zhang | NLocalSAT: Boosting Local Search with Solution Prediction | Accepted by IJCAI 2020 | null | 10.24963/ijcai.2020/164 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Boolean satisfiability problem (SAT) is a famous NP-complete problem in
computer science. An effective way of solving a satisfiable SAT problem is the
stochastic local search (SLS). However, in this method, the initialization is
assigned in a random manner, which impacts the effectiveness of SLS solvers. To
address this problem, we propose NLocalSAT. NLocalSAT combines SLS with a
solution prediction model, which boosts SLS by changing initialization
assignments with a neural network. We evaluated NLocalSAT on five SLS solvers
(CCAnr, Sparrow, CPSparrow, YalSAT, and probSAT) with instances in the random
track of SAT Competition 2018. The experimental results show that solvers with
NLocalSAT achieve 27% to 62% improvement over the original SLS solvers.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2020 04:22:53 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Apr 2020 09:38:01 GMT"
},
{
"version": "v3",
"created": "Wed, 13 May 2020 04:05:35 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Dec 2020 07:01:26 GMT"
}
] | 1,607,558,400,000 | [
[
"Zhang",
"Wenjie",
""
],
[
"Sun",
"Zeyu",
""
],
[
"Zhu",
"Qihao",
""
],
[
"Li",
"Ge",
""
],
[
"Cai",
"Shaowei",
""
],
[
"Xiong",
"Yingfei",
""
],
[
"Zhang",
"Lu",
""
]
] |
2001.09403 | Abhishek Nan | Abhishek Nan, Anandh Perumal, Osmar R. Zaiane | Sentiment and Knowledge Based Algorithmic Trading with Deep
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithmic trading, due to its inherent nature, is a difficult problem to
tackle; there are too many variables involved in the real world which make it
almost impossible to have reliable algorithms for automated stock trading. The
lack of reliable labelled data that considers physical and physiological
factors that dictate the ups and downs of the market has hindered the
supervised learning attempts for dependable predictions. To learn a good policy
for trading, we formulate an approach using reinforcement learning which uses
traditional time series stock price data and combines it with news headline
sentiments, while leveraging knowledge graphs for exploiting news about
implicit relationships.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2020 05:27:53 GMT"
}
] | 1,580,169,600,000 | [
[
"Nan",
"Abhishek",
""
],
[
"Perumal",
"Anandh",
""
],
[
"Zaiane",
"Osmar R.",
""
]
] |
2001.09442 | Ulrich Furbach | Ulrike Barthelme{\ss} and Ulrich Furbach and Claudia Schon | Consciousness and Automated Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper aims at demonstrating how a first-order logic reasoning system in
combination with a large knowledge base can be understood as an artificial
consciousness system. For this we review some aspects from the area of
philosophy of mind and in particular Tononi's Information Integration Theory
(IIT) and Baars' Global Workspace Theory. These will be applied to the
reasoning system Hyper with ConceptNet as a knowledge base within a scenario of
commonsense and cognitive reasoning. Finally, we demonstrate that such a system
is well able to perform conscious mind wandering.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2020 11:43:48 GMT"
},
{
"version": "v2",
"created": "Sat, 30 May 2020 14:13:18 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jul 2020 10:08:33 GMT"
}
] | 1,595,462,400,000 | [
[
"Barthelmeß",
"Ulrike",
""
],
[
"Furbach",
"Ulrich",
""
],
[
"Schon",
"Claudia",
""
]
] |
2001.09956 | Zhe Xu | Zhe Xu, Yuxin Chen and Ufuk Topcu | Adaptive Teaching of Temporal Logic Formulas to Learners with
Preferences | 25 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine teaching is an algorithmic framework for teaching a target hypothesis
via a sequence of examples or demonstrations. We investigate machine teaching
for temporal logic formulas -- a novel and expressive hypothesis class amenable
to time-related task specifications. In the context of teaching temporal logic
formulas, an exhaustive search even for a myopic solution takes exponential
time (with respect to the time span of the task). We propose an efficient
approach for teaching parametric linear temporal logic formulas. Concretely, we
derive a necessary condition for the minimal time length of a demonstration to
eliminate a set of hypotheses. Utilizing this condition, we propose a myopic
teaching algorithm by solving a sequence of integer programming problems. We
further show that, under two notions of teaching complexity, the proposed
algorithm has near-optimal performance. The results strictly generalize the
previous results on teaching preference-based version space learners. We
evaluate our algorithm extensively under a variety of learner types (i.e.,
learners with different preference models) and interactive protocols (e.g.,
batched and adaptive). The results show that the proposed algorithms can
efficiently teach a given target temporal logic formula under various settings,
and that there are significant gains in teaching efficacy when the teacher
adapts to the learner's current hypotheses or uses oracles.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2020 18:22:53 GMT"
}
] | 1,580,169,600,000 | [
[
"Xu",
"Zhe",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
2001.10730 | Sicui Zhang | Sicui Zhang (1 and 2), Laura Genga (2), Hui Yan (1 and 2), Xudong Lu
(1 and 2), Huilong Duan (1), Uzay Kaymak (2 and 1) ((1) School of Biomedical
Engineering and Instrumental Science, Zhejiang University, Hangzhou, P.R.
China, (2) School of Industrial Engineering, Eindhoven University of
Technology, Eindhoven, The Netherlands) | Towards Multi-perspective conformance checking with fuzzy sets | 15 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conformance checking techniques are widely adopted to pinpoint possible
discrepancies between process models and the execution of the process in
reality. However, state-of-the-art approaches adopt a crisp evaluation of
deviations, with the result that small violations are treated at the same
level as significant ones. This affects the quality of the provided
diagnostics, especially when there exists some tolerance with respect to
reasonably small violations, and hampers the flexibility of the process. In
this work, we propose a novel approach which allows us to represent actors'
tolerance with respect to violations and to account for the severity of
deviations when assessing execution compliance. We argue that besides improving the
quality of the provided diagnostics, allowing some tolerance in deviations
assessment also enhances the flexibility of conformance checking techniques
and, indirectly, paves the way for improving the resilience of the overall
process management system.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2020 09:02:23 GMT"
}
] | 1,580,342,400,000 | [
[
"Zhang",
"Sicui",
"",
"1 and 2"
],
[
"Genga",
"Laura",
"",
"1 and 2"
],
[
"Yan",
"Hui",
"",
"1 and 2"
],
[
"Lu",
"Xudong",
"",
"1 and 2"
],
[
"Duan",
"Huilong",
"",
"2 and 1"
],
[
"Kaymak",
"Uzay",
"",
"2 and 1"
]
] |
2001.10828 | Ji\v{r}\'i Fink | Ji\v{r}\'i Fink, Martin Loebl, Petra Pelik\'anov\'a | A New Arc-Routing Algorithm Applied to Winter Road Maintenance | 15 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies large scale instances of a fairly general arc-routing
problem and incorporates practical constraints, in particular ones coming from
the scheduling problem of winter road maintenance (e.g. different
priorities for and methods of road maintenance). We develop a new algorithm
based on a bin-packing heuristic which is well-scalable and able to solve road
networks with thousands of crossroads and road segments in a few minutes. Since it
is impossible to find an optimal solution for such large instances to compare
against the result of our algorithm, we also develop techniques to compute lower
bounds which are based on Integer Linear Programming and Lazy Constraints.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2020 08:44:42 GMT"
}
] | 1,580,342,400,000 | [
[
"Fink",
"Jiří",
""
],
[
"Loebl",
"Martin",
""
],
[
"Pelikánová",
"Petra",
""
]
] |
2001.10905 | Ioannis Papantonis | Ioannis Papantonis, Vaishak Belle | Interventions and Counterfactuals in Tractable Probabilistic Models:
Limitations of Contemporary Transformations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been an increasing interest in studying
causality-related properties in machine learning models generally, and in
generative models in particular. While that is well motivated, it inherits the
fundamental computational hardness of probabilistic inference, making exact
reasoning intractable. Probabilistic tractable models have also recently
emerged, which guarantee that conditional marginals can be computed in time
linear in the size of the model, where the model is usually learned from data.
Although initially limited to low tree-width models, recent tractable models
such as sum product networks (SPNs) and probabilistic sentential decision
diagrams (PSDDs) exploit efficient function representations and also capture
high tree-width models.
In this paper, we ask the following technical question: can we use the
distributions represented or learned by these models to perform causal queries,
such as reasoning about interventions and counterfactuals? By appealing to some
existing ideas on transforming such models to Bayesian networks, we answer
mostly in the negative. We show that when transforming SPNs to a causal graph
interventional reasoning reduces to computing marginal distributions; in other
words, only trivial causal reasoning is possible. For PSDDs the situation is
only slightly better. We first provide an algorithm for constructing a causal
graph from a PSDD, which introduces augmented variables. Intervening on the
original variables, once again, reduces to marginal distributions, but when
intervening on the augmented variables, a deterministic but nonetheless
causal semantics can be provided for PSDDs.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2020 15:45:47 GMT"
}
] | 1,580,342,400,000 | [
[
"Papantonis",
"Ioannis",
""
],
[
"Belle",
"Vaishak",
""
]
] |
2001.10922 | Jason Bernard | Jason Bernard, Ian McQuillan | Stochastic L-system Inference from Multiple String Sequence Inputs | 24 pages, 5 figures, submitted to Applied Soft Computing | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lindenmayer systems (L-systems) are grammar systems that consist of string
rewriting rules. The rules replace every symbol in a string in parallel with a
successor to produce the next string, and this procedure iterates. In a
stochastic context-free L-system (S0L-system), every symbol may have one or
more rewriting rules, each with an associated probability of selection. Properly
constructed rewriting rules have been found to be useful for modeling and
simulating some natural and human engineered processes where each derived
string describes a step in the simulation. Typically, processes are modeled by
experts who meticulously construct the rules based on measurements or domain
knowledge of the process. This paper presents an automated approach to finding
stochastic L-systems, given a set of string sequences as input. The implemented
tool is called the Plant Model Inference Tool for S0L-systems (PMIT-S0L).
PMIT-S0L is evaluated using 960 procedurally generated S0L-systems in a test
suite, which are each used to generate input strings, and PMIT-S0L is then used
to infer the system from only the sequences. The evaluation shows that PMIT-S0L
infers S0L-systems with up to 9 rewriting rules each in under 12 hours.
Additionally, it is found that 3 sequences of strings are sufficient to find the
correct original rewriting rules in 100% of the cases in the test suite, and 6
sequences of strings reduces the difference in the associated probabilities to
approximately 1% or less.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2020 16:11:02 GMT"
}
] | 1,580,342,400,000 | [
[
"Bernard",
"Jason",
""
],
[
"McQuillan",
"Ian",
""
]
] |
2001.10953 | Nihar Shrikant Bendre | Nihar Bendre, Nima Ebadi, John J Prevost and Paul Rad | Human Action Performance using Deep Neuro-Fuzzy Recurrent Attention
Model | 1 pages, 6 figures, 2 algorithms. Published at IEEE Access | null | 10.1109/ACCESS.2020.2982364 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A great number of computer vision publications have focused on distinguishing
between human action recognition and classification rather than the intensity
of actions performed. Indexing the intensity, which determines the performance
of human actions, is a challenging task due to the uncertainty and information
deficiency that exists in the video inputs. To remedy this uncertainty, in this
paper we coupled fuzzy logic rules with the neural-based action recognition
model to rate the intensity of a human action as intense or mild. In our
approach, we used a Spatio-Temporal LSTM to generate the weights of the
fuzzy-logic model, and then demonstrate through experiments that indexing of
the action intensity is possible. We analyzed the integrated model by applying
it to videos of human actions with different action intensities and were able
to achieve an accuracy of 89.16% on our intensity indexing generated dataset.
The integrated model demonstrates the ability of a neuro-fuzzy inference module
to effectively estimate the intensity index of human actions.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2020 16:56:39 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2020 17:40:08 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Mar 2020 20:40:07 GMT"
}
] | 1,585,267,200,000 | [
[
"Bendre",
"Nihar",
""
],
[
"Ebadi",
"Nima",
""
],
[
"Prevost",
"John J",
""
],
[
"Rad",
"Paul",
""
]
] |
2001.11390 | Thomas Chaboud | Thomas Chaboud, C\'edric Pralet, Nicolas Schmidt | Tackling Air Traffic Conflicts as a Weighted CSP : Experiments with the
Lumberjack Method | Keywords: Constraints Programming, ATC, graph algorithms, clique
searching. 15 pages, 6 figures, 2 tables. Creative Commons
Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we present an extension to an air traffic conflicts resolution
method that consists of generating a large number of trajectories for a set of
aircraft, and efficiently selecting the best compatible ones. We propose a
multimanoeuvre version which encapsulates different conflict-solving
algorithms, in particular an original "smart brute-force" method and the
well-known ToulBar2 CSP toolset. Experiments on several benchmarks show that
the first one is very efficient on cases involving few aircraft (representative
of what actually happens in operations), allowing us to search through a large
pool of manoeuvres and trajectories; however, this method is overtaken by its
complexity when the number of aircraft increases to 7 or more. Conversely,
within acceptable times, the ToulBar2 toolset can handle conflicts involving
more aircraft, but with fewer possible trajectories for each.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2020 15:22:45 GMT"
}
] | 1,580,428,800,000 | [
[
"Chaboud",
"Thomas",
""
],
[
"Pralet",
"Cédric",
""
],
[
"Schmidt",
"Nicolas",
""
]
] |
2001.11457 | Alejandro Su\'arez Hern\'andez | Alejandro Su\'arez-Hern\'andez and Javier Segovia-Aguas and Carme
Torras and Guillem Aleny\`a | STRIPS Action Discovery | Presented to Genplan 2020 workshop, held in the AAAI 2020 conference
(https://sites.google.com/view/genplan20) (2021/03/05: included missing
acknowledgments) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of specifying high-level knowledge bases for planning becomes a
hard task in realistic environments. This knowledge is usually handcrafted and
is hard to keep updated, even for system experts. Recent approaches have shown
the success of classical planning at synthesizing action models even when all
intermediate states are missing. These approaches can synthesize action schemas
in Planning Domain Definition Language (PDDL) from a set of execution traces
each consisting, at least, of an initial and final state. In this paper, we
propose a new algorithm to synthesize STRIPS action models in an unsupervised
manner with a classical planner when action signatures are unknown. In addition,
we contribute a compilation to classical planning that mitigates the problem
of learning static predicates in the action model preconditions, exploits the
capabilities of SAT planners with parallel encodings to compute action schemas
and validate all instances. Our system is flexible in that it supports the
inclusion of partial input information that may speed up the search. We show
through several experiments how learned action models generalize over unseen
planning instances.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2020 17:08:39 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Feb 2020 10:57:37 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Mar 2021 10:37:52 GMT"
}
] | 1,615,161,600,000 | [
[
"Suárez-Hernández",
"Alejandro",
""
],
[
"Segovia-Aguas",
"Javier",
""
],
[
"Torras",
"Carme",
""
],
[
"Alenyà",
"Guillem",
""
]
] |
2001.11797 | Kenny Schlegel | Kenny Schlegel, Peer Neubert, Peter Protzel | A comparison of Vector Symbolic Architectures | 32 pages, 11 figures, preprint - accepted journal version | Artificial Intelligence Review (2021) | 10.1007/s10462-021-10110-3 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector Symbolic Architectures combine a high-dimensional vector space with a
set of carefully designed operators in order to perform symbolic computations
with large numerical vectors. Major goals are the exploitation of their
representational power and ability to deal with fuzziness and ambiguity. Over
the past years, several VSA implementations have been proposed. The available
implementations differ in the underlying vector space and the particular
implementations of the VSA operators. This paper provides an overview of eleven
available VSA implementations and discusses their commonalities and differences
in the underlying vector space and operators. We create a taxonomy of available
binding operations and show an important ramification for non self-inverse
binding operations using an example from analogical reasoning. A main
contribution is the experimental comparison of the available implementations in
order to evaluate (1) the capacity of bundles, (2) the approximation quality of
non-exact unbinding operations, (3) the influence of combining binding and
bundling operations on the query answering performance, and (4) the performance
on two example applications: visual place- and language-recognition. We expect
this comparison and systematization to be relevant for the development of VSAs, and
to support the selection of an appropriate VSA for a particular task. The
implementations are available.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2020 12:42:38 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2020 07:49:13 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Nov 2020 18:05:22 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Dec 2021 09:28:06 GMT"
}
] | 1,639,699,200,000 | [
[
"Schlegel",
"Kenny",
""
],
[
"Neubert",
"Peer",
""
],
[
"Protzel",
"Peter",
""
]
] |
2002.00429 | Eduardo C\'esar Garrido Merch\'an | Eduardo C. Garrido-Merch\'an, C. Puente, A. Sobrino, J.A. Olivas | Uncertainty Weighted Causal Graphs | 12 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causality has traditionally been a scientific way to generate knowledge by
relating causes to effects. From a visual point of view, causal graphs are
a helpful tool for representing and inferring new causal information. In
previous works, we have automatically generated causal graphs associated with a
given concept by analyzing sets of documents and extracting and representing
the found causal information in that visual way. The retrieved information
shows that causality is frequently imperfect rather than exact, a feature
captured by the graph. In this work we attempt to go a step further by
modelling the uncertainty in the graph probabilistically, improving the
management of the imprecision in the quoted graph.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2020 16:32:04 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Feb 2020 13:39:26 GMT"
}
] | 1,581,033,600,000 | [
[
"Garrido-Merchán",
"Eduardo C.",
""
],
[
"Puente",
"C.",
""
],
[
"Sobrino",
"A.",
""
],
[
"Olivas",
"J. A.",
""
]
] |
2002.00434 | Ekim Yurtsever | Ekim Yurtsever, Linda Capito, Keith Redmill, Umit Ozguner | Integrating Deep Reinforcement Learning with Model-based Path Planners
for Automated Driving | 6 pages, 5 figures. Accepted for IEEE Intelligent Vehicles Symposium
2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated driving in urban settings is challenging. Human participant
behavior is difficult to model, and conventional, rule-based Automated Driving
Systems (ADSs) tend to fail when they face unmodeled dynamics. On the other
hand, the more recent, end-to-end Deep Reinforcement Learning (DRL) based
model-free ADSs have shown promising results. However, pure learning-based
approaches lack the hard-coded safety measures of model-based controllers. Here
we propose a hybrid approach for integrating a path planning pipe into a vision
based DRL framework to alleviate the shortcomings of both worlds. In summary,
the DRL agent is trained to follow the path planner's waypoints as closely as
possible. The agent learns this policy by interacting with the environment. The
reward function contains two major terms: the penalty of straying away from the
path planner and the penalty of having a collision. The latter has precedence
in the form of having a significantly greater numerical value. Experimental
results show that the proposed method can plan its path and navigate between
randomly chosen origin-destination points in CARLA, a dynamic urban simulation
environment. Our code is open-source and available online.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2020 17:10:19 GMT"
},
{
"version": "v2",
"created": "Tue, 19 May 2020 17:03:49 GMT"
}
] | 1,589,932,800,000 | [
[
"Yurtsever",
"Ekim",
""
],
[
"Capito",
"Linda",
""
],
[
"Redmill",
"Keith",
""
],
[
"Ozguner",
"Umit",
""
]
] |
2002.00509 | Eduardo C\'esar Garrido Merch\'an | Eduardo C. Garrido Merch\'an, Mart\'in Molina | A Machine Consciousness architecture based on Deep Learning and Gaussian
Processes | 12 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in machine learning have pushed the tasks that machines
can do outside the boundaries of what was thought to be possible years ago.
Methodologies such as deep learning or generative models have achieved complex
tasks such as generating art pictures or literature automatically. On the other
hand, symbolic resources have also been developed further and behave well in
problems such as the ones proposed by common sense reasoning. Machine
Consciousness is a deeply studied field, and several theories
based on the philosophical theory of functionalism, such as the global workspace
theory or information integration theory, have been proposed to explain the
emergence of consciousness in machines. In this work, we propose an
architecture that may give rise to consciousness in a machine, based on the global
workspace theory and on the assumption that consciousness appears in machines
that have cognitive processes and exhibit conscious behaviour. This architecture
is based on processes that use recent developments in artificial
intelligence models, whose outputs are these correlated activities. For every one
of the modules of this architecture, we provide detailed explanations of the
models involved and how they communicate with each other to create the
cognitive architecture.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2020 23:18:17 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Mar 2020 00:01:23 GMT"
}
] | 1,584,403,200,000 | [
[
"Merchán",
"Eduardo C. Garrido",
""
],
[
"Molina",
"Martín",
""
]
] |
2002.01080 | Sarath Sreedharan | Sarath Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava,
Subbarao Kambhampati | Bridging the Gap: Providing Post-Hoc Symbolic Explanations for
Sequential Decision-Making Problems with Inscrutable Representations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As increasingly complex AI systems are introduced into our daily lives, it
becomes important for such systems to be capable of explaining the rationale
for their decisions and allowing users to contest these decisions. A
significant hurdle to allowing for such explanatory dialogue could be the
vocabulary mismatch between the user and the AI system. This paper introduces
methods for providing contrastive explanations in terms of user-specified
concepts for sequential decision-making settings where the system's model of
the task may be best represented as an inscrutable model. We do this by
building partial symbolic models of a local approximation of the task that can
be leveraged to answer the user queries. We test these methods on a popular
Atari game (Montezuma's Revenge) and variants of Sokoban (a well-known planning
benchmark) and report the results of user studies to evaluate whether people
find explanations generated in this form useful.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2020 01:37:56 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Sep 2020 19:46:15 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Oct 2021 00:17:50 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Mar 2022 22:47:40 GMT"
}
] | 1,647,907,200,000 | [
[
"Sreedharan",
"Sarath",
""
],
[
"Soni",
"Utkarsh",
""
],
[
"Verma",
"Mudit",
""
],
[
"Srivastava",
"Siddharth",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2002.01088 | Thommen George Karimpanal | Thommen George Karimpanal | Neuro-evolutionary Frameworks for Generalized Learning Agents | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent successes of deep learning and deep reinforcement learning have
firmly established their statuses as state-of-the-art artificial learning
techniques. However, longstanding drawbacks of these approaches, such as their
poor sample efficiency and limited generalization capability, point to a
need for re-thinking the way such systems are designed and deployed. In this
paper, we emphasize how the use of these learning systems, in conjunction with
a specific variation of evolutionary algorithms could lead to the emergence of
unique characteristics such as the automated acquisition of a variety of
desirable behaviors and useful sets of behavior priors. This could pave the way
for learning to occur in a generalized and continual manner, with minimal
interactions with the environment. We discuss the anticipated improvements from
such neuro-evolutionary frameworks, along with the associated challenges, as
well as its potential for application to a number of research areas.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2020 02:11:56 GMT"
}
] | 1,580,860,800,000 | [
[
"Karimpanal",
"Thommen George",
""
]
] |
2002.01640 | Zahra Zahedi | Zahra Zahedi, Sailik Sengupta, Subbarao Kambhampati | `Why didn't you allocate this task to them?' Negotiation-Aware
Explicable Task Allocation and Contrastive Explanation Generation | null | AAMAS 2023 (Extended Abstract), CoopAI workshop, NeurIPS2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Task allocation is an important problem in multi-agent systems. It becomes
more challenging when the team-members are humans with imperfect knowledge
about their teammates' costs and the overall performance metric. In this paper,
we propose a centralized Artificial Intelligence Task Allocation (AITA) that
simulates a negotiation and produces a negotiation-aware explicable task
allocation. If a team-member is unhappy with the proposed allocation, we allow
them to question the proposed allocation using a counterfactual. By using parts
of the simulated negotiation, we are able to provide contrastive explanations
that provide minimal information about others' costs to refute their foil. With
human studies, we show that (1) the allocation proposed using our method
appears fair to the majority, and (2) when a counterfactual is raised,
explanations generated are easy to comprehend and convincing. Finally, we
empirically study the effect of different kinds of incompleteness on the
explanation-length and find that underestimation of a teammate's costs often
increases it.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2020 04:58:26 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2020 21:04:57 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Mar 2021 02:30:32 GMT"
},
{
"version": "v4",
"created": "Thu, 25 May 2023 21:00:57 GMT"
}
] | 1,685,318,400,000 | [
[
"Zahedi",
"Zahra",
""
],
[
"Sengupta",
"Sailik",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2002.02080 | Wen-Ji Zhou | Wen-Ji Zhou, Yang Yu | Temporal-adaptive Hierarchical Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical reinforcement learning (HRL) helps address large-scale and
sparse reward issues in reinforcement learning. In HRL, the policy model has an
inner representation structured in levels. With this structure, the
reinforcement learning task is expected to be decomposed into corresponding
levels with sub-tasks, and thus the learning can be more efficient. In HRL,
although it is intuitive that a high-level policy only needs to make macro
decisions at a low frequency, the exact frequency is hard to determine.
Previous HRL approaches often employed a fixed-time skip strategy
or learned a terminal condition without taking the context into account, which
not only requires manual adjustment but also sacrifices some decision
granularity. In this paper, we propose the \emph{temporal-adaptive hierarchical
policy learning} (TEMPLE) structure, which uses a temporal gate to adaptively
control the high-level policy decision frequency. We train the TEMPLE structure
with PPO and test its performance in a range of environments including 2-D
rooms, Mujoco tasks, and Atari games. The results show that the TEMPLE
structure can lead to improved performance in these environments with a
sequential adaptive high-level control.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2020 02:52:21 GMT"
}
] | 1,581,033,600,000 | [
[
"Zhou",
"Wen-Ji",
""
],
[
"Yu",
"Yang",
""
]
] |
2002.02334 | Yigit Oktar | Yigit Oktar, Erdem Okur, Mehmet Turkan | Self-recognition in conversational agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a standard Turing test, a machine has to prove its humanness to the
judges. By successfully imitating a thinking entity such as a human, this
machine then proves that it can also think. Some objections claim that the Turing
test is not a tool to demonstrate the existence of general intelligence or
thinking activity. A compelling alternative is the Lovelace test, in which the
agent must originate a product that the agent's creator cannot explain.
Therefore, the agent must be the owner of an original product. However, for
this to happen the agent must exhibit the idea of self and distinguish oneself
from others. Sustaining the idea of self within the Turing test is still
possible if the judge decides to act as a textual mirror. Self-recognition
tests applied on animals through mirrors appear to be viable tools to
demonstrate the existence of a type of general intelligence. The methodology here
constructs a textual version of the mirror test by placing the agent as the one
and only judge to figure out whether the contacted one is an other, a mimicker,
or oneself in an unsupervised manner. This textual version of the mirror test
is objective, self-contained, and devoid of humanness. Any agent passing this
textual mirror test should have or can acquire a thought mechanism that can be
referred to as the inner voice, answering Turing's original and long-standing
question "Can machines think?" in a constructive manner still within
the bounds of the Turing test. Moreover, it is possible that a successful
self-recognition might pave the way to stronger notions of self-awareness in
artificial beings.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2020 16:32:46 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Mar 2021 08:55:28 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Sep 2021 11:04:19 GMT"
}
] | 1,630,972,800,000 | [
[
"Oktar",
"Yigit",
""
],
[
"Okur",
"Erdem",
""
],
[
"Turkan",
"Mehmet",
""
]
] |
2002.02938 | Cameron Reid | Cameron Reid | Student/Teacher Advising through Reward Augmentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transfer learning is an important new subfield of multiagent reinforcement
learning that aims to help an agent learn about a problem by using knowledge
that it has gained solving another problem, or by using knowledge that is
communicated to it by an agent who already knows the problem. This is useful
when one wishes to change the architecture or learning algorithm of an agent
(so that the new knowledge need not be built "from scratch"), when new agents
are frequently introduced to the environment with no knowledge, or when an
agent must adapt to similar but different problems. Great progress has been
made in the agent-to-agent case using the Teacher/Student framework proposed by
(Torrey and Taylor 2013). However, that approach requires that learning from a
teacher be treated differently from learning in every other reinforcement
learning context. In this paper, I propose a method which allows the
teacher/student framework to be applied in a way that fits directly and
naturally into the more general reinforcement learning framework by integrating
the teacher feedback into the reward signal received by the learning agent. I
show that this approach can significantly improve the rate of learning for an
agent playing a one-player stochastic game; I give examples of potential
pitfalls of the approach; and I propose further areas of research building on
this framework.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2020 18:15:51 GMT"
}
] | 1,581,292,800,000 | [
[
"Reid",
"Cameron",
""
]
] |
2002.03256 | Margaret Mitchell | Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben
Hutchinson, Alex Hanna, Timnit Gebru, Jamie Morgenstern | Diversity and Inclusion Metrics in Subset Selection | null | AIES 2020: Proceedings of the AAAI/ACM Conference on AI, Ethics,
and Society | 10.1145/3375627.3375832 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ethical concept of fairness has recently been applied in machine learning
(ML) settings to describe a wide range of constraints and objectives. When
considering the relevance of ethical concepts to subset selection problems, the
concepts of diversity and inclusion are additionally applicable in order to
create outputs that account for social power and access differentials. We
introduce metrics based on these concepts, which can be applied together,
separately, and in tandem with additional fairness constraints. Results from
human subject experiments lend support to the proposed criteria. Social choice
methods can additionally be leveraged to aggregate and choose preferable sets,
and we detail how these may be applied.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2020 00:29:40 GMT"
}
] | 1,581,379,200,000 | [
[
"Mitchell",
"Margaret",
""
],
[
"Baker",
"Dylan",
""
],
[
"Moorosi",
"Nyalleng",
""
],
[
"Denton",
"Emily",
""
],
[
"Hutchinson",
"Ben",
""
],
[
"Hanna",
"Alex",
""
],
[
"Gebru",
"Timnit",
""
],
[
"Morgenstern",
"Jamie",
""
]
] |
2002.03514 | Ibrahim Abdelaziz | Bassem Makni, Ibrahim Abdelaziz, James Hendler | Explainable Deep RDFS Reasoner | StarAI 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research efforts aiming to bridge the Neural-Symbolic gap for RDFS
reasoning proved empirically that deep learning techniques can be used to learn
RDFS inference rules. However, one of their main deficiencies compared to
rule-based reasoners is the lack of derivations for the inferred triples (i.e.
explainability in AI terms). In this paper, we build on these approaches to
provide not only the inferred graph but also explain how these triples were
inferred. In the graph words approach, RDF graphs are represented as a sequence
of graph words where inference can be achieved through neural machine
translation. To achieve explainability in RDFS reasoning, we revisit this
approach and introduce a new neural network model that takes the input graph -- as
a sequence of graph words -- as well as the encoding of the inferred triple and
outputs the derivation for the inferred triple. We evaluated our justification
model on two datasets: a synthetic dataset-- LUBM benchmark-- and a real-world
dataset --ScholarlyData about conferences-- where the lowest validation
accuracy approached 96%.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2020 03:20:31 GMT"
}
] | 1,581,379,200,000 | [
[
"Makni",
"Bassem",
""
],
[
"Abdelaziz",
"Ibrahim",
""
],
[
"Hendler",
"James",
""
]
] |
2002.03766 | Daya Gaur | Daya Gaur and Muhammad Khan | Testing Unsatisfiability of Constraint Satisfaction Problems via Tensor
Products | ISAIM 2020, International Symposium on Artificial Intelligence and
Mathematics | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the design of stochastic local search methods to prove
unsatisfiability of a constraint satisfaction problem (CSP). For a binary CSP,
such methods have been designed using the microstructure of the CSP. Here, we
develop a method to decompose the microstructure into graph tensors. We show
how to use the tensor decomposition to compute a proof of unsatisfiability
efficiently and in parallel. We also offer substantial empirical evidence that
our approach improves on existing practice. For instance, one decomposition yields proofs
of unsatisfiability in half the time without sacrificing quality. Another
decomposition is twenty times faster and effective three-tenths of the time
compared to the prior method. Our method is applicable to arbitrary CSPs using
the well known dual and hidden variable transformations from an arbitrary CSP
to a binary CSP.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2020 18:06:52 GMT"
}
] | 1,581,379,200,000 | [
[
"Gaur",
"Daya",
""
],
[
"Khan",
"Muhammad",
""
]
] |
2002.03842 | Stefan Br\"ase | Christian Pachl, Nils Frank, Jan Breitbart, Stefan Br\"ase | Overview of chemical ontologies | 2 Figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Ontologies order and interconnect knowledge of a certain field in a formal
and semantic way so that they are machine-parsable. They try to establish
universally acceptable definitions of concepts and objects, classify them, provide
properties as well as interconnect them with relations (e.g. "A is a special
case of B"). More precisely, Tom Gruber defines Ontologies as a "specification
of a conceptualization; [...] a description (like a formal specification of a
program) of the concepts and relationships that can exist for an agent or a
community of agents." [1] An Ontology is made of Individuals which are
organized in Classes. Both can have Attributes and Relations among themselves.
Some complex Ontologies define Restrictions, Rules and Events which change
attributes or relations. To be computer accessible they are written in certain
ontology languages, like the OBO language or the more used Common Algebraic
Specification Language. With the rise of a digitalized, interconnected and
globalized world, where common standards have to be found, ontologies are of
great interest. [2] Yet the development of chemical ontologies is still in its
beginnings. Indeed, some interesting basic approaches towards chemical
ontologies can be found, but nevertheless they suffer from two main flaws.
Firstly, we found that they are mostly only fragmentarily complete or are still
in an architectural state. Secondly, apparently no chemical ontology is
widely accepted. Therefore, we herein try to describe the major
ontology developments in the chemistry-related fields: Ontologies about chemical
analytical methods, Ontologies about name reactions, and Ontologies about
scientific units.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2020 10:42:22 GMT"
}
] | 1,581,379,200,000 | [
[
"Pachl",
"Christian",
""
],
[
"Frank",
"Nils",
""
],
[
"Breitbart",
"Jan",
""
],
[
"Bräse",
"Stefan",
""
]
] |
2002.04733 | M Charity | M Charity, Michael Cerny Green, Ahmed Khalifa, Julian Togelius | Mech-Elites: Illuminating the Mechanic Space of GVGAI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a fully automatic method of mechanic illumination for
general video game level generation. Using the Constrained MAP-Elites algorithm
and the GVG-AI framework, this system generates the simplest tile based levels
that contain specific sets of game mechanics and also satisfy playability
constraints. We apply this method to illuminate mechanic space for $4$
different games in GVG-AI: Zelda, Solarfox, Plants, and RealPortals.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2020 23:40:09 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 15:41:03 GMT"
}
] | 1,661,385,600,000 | [
[
"Charity",
"M",
""
],
[
"Green",
"Michael Cerny",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Togelius",
"Julian",
""
]
] |
2002.04827 | Alessandro Antonucci | Alessandro Antonucci and Thomas Tiotto | Approximate MMAP by Marginal Search | To be presented at the 33rd International Florida Artificial
Intelligence Research Society Conference (Flairs-33) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a heuristic strategy for marginal MAP (MMAP) queries in graphical
models. The algorithm is based on a reduction of the task to a polynomial
number of marginal inference computations. Given an input evidence, the
marginal mass functions of the variables to be explained are computed.
Marginal information gain is used to decide the variables to be explained
first, and their most probable marginal states are consequently moved to the
evidence. The sequential iteration of this procedure leads to a MMAP
explanation and the minimum information gain obtained during the process can be
regarded as a confidence measure for the explanation. Preliminary experiments
show that the proposed confidence measure is properly detecting instances for
which the algorithm is accurate and, for sufficiently high confidence levels,
the algorithm gives the exact solution or an approximation whose Hamming
distance from the exact one is small.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2020 07:41:13 GMT"
}
] | 1,581,552,000,000 | [
[
"Antonucci",
"Alessandro",
""
],
[
"Tiotto",
"Thomas",
""
]
] |
2002.04852 | Jorn Op Den Buijs | Cliff Laschet, Jorn op den Buijs, Mark H. M. Winands, Steffen Pauws | Service Selection using Predictive Models and Monte-Carlo Tree Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article proposes a method for automated service selection to improve
treatment efficacy and reduce re-hospitalization costs. A predictive model is
developed using the National Home and Hospice Care Survey (NHHCS) dataset to
quantify the effect of care services on the risk of re-hospitalization. By
taking the patient's characteristics and other selected services into account,
the model is able to indicate the overall effectiveness of a combination of
services for a specific NHHCS patient. The developed model is incorporated in
Monte-Carlo Tree Search (MCTS) to determine optimal combinations of services
that minimize the risk of emergency re-hospitalization. MCTS serves as a risk
minimization algorithm in this case, using the predictive model for guidance
during the search. Using this method on the NHHCS dataset, a significant
reduction in risk of re-hospitalization is observed compared to the original
selections made by clinicians. A risk reduction of 11.89 percentage points is
achieved on average. Higher reductions of roughly 40 percentage points on
average are observed for NHHCS patients in the highest risk categories. These
results seem to indicate that there is enormous potential for improving service
selection in the near future.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2020 09:04:30 GMT"
}
] | 1,581,552,000,000 | [
[
"Laschet",
"Cliff",
""
],
[
"Buijs",
"Jorn op den",
""
],
[
"Winands",
"Mark H. M.",
""
],
[
"Pauws",
"Steffen",
""
]
] |
2002.05196 | Jasper De Bock | Jasper De Bock | Archimedean Choice Functions: an Axiomatic Foundation for Imprecise
Decision Making | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If uncertainty is modelled by a probability measure, decisions are typically
made by choosing the option with the highest expected utility. If an imprecise
probability model is used instead, this decision rule can be generalised in
several ways. We here focus on two such generalisations that apply to sets of
probability measures: E-admissibility and maximality. Both of them can be
regarded as special instances of so-called choice functions, a very general
mathematical framework for decision making. For each of these two decision
rules, we provide a set of necessary and sufficient conditions on choice
functions that uniquely characterises this rule, thereby providing an axiomatic
foundation for imprecise decision making with sets of probabilities. A
representation theorem for Archimedean choice functions in terms of coherent
lower previsions lies at the basis of both results.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2020 19:44:08 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Feb 2020 12:50:12 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Mar 2020 19:39:57 GMT"
}
] | 1,585,267,200,000 | [
[
"De Bock",
"Jasper",
""
]
] |
2002.05461 | Gert de Cooman | Gert de Cooman | Coherent and Archimedean choice in general Banach spaces | 34 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I introduce and study a new notion of Archimedeanity for binary and
non-binary choice between options that live in an abstract Banach space,
through a very general class of choice models, called sets of desirable option
sets. In order to be able to bring an important diversity of contexts into the
fold, amongst which choice between horse lottery options, I pay special
attention to the case where these linear spaces don't include all `constant'
options.I consider the frameworks of conservative inference associated with
Archimedean (and coherent) choice models, and also pay quite a lot of attention
to representation of general (non-binary) choice models in terms of the
simpler, binary ones.The representation theorems proved here provide an
axiomatic characterisation for, amongst many other choice methods, Levi's
E-admissibility and Walley-Sen maximality.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2020 11:57:50 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Apr 2020 14:38:08 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Nov 2020 14:05:35 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Jul 2021 13:03:31 GMT"
}
] | 1,626,048,000,000 | [
[
"de Cooman",
"Gert",
""
]
] |
2002.05513 | Ke Zhang | Ke Zhang, Meng Li, Zhengchao Zhang, Xi Lin, Fang He | Multi-Vehicle Routing Problems with Soft Time Windows: A Multi-Agent
Reinforcement Learning Approach | 29 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-vehicle routing problem with soft time windows (MVRPSTW) is an
indispensable constituent in urban logistics distribution systems. Over the
past decade, numerous methods for MVRPSTW have been proposed, but most are
based on heuristic rules that require a large amount of computation time. With
the current rapid increase of logistics demands, traditional methods incur the
dilemma between computational efficiency and solution quality. To efficiently
solve the problem, we propose a novel reinforcement learning algorithm called
the Multi-Agent Attention Model that can solve the routing problem instantly,
benefiting from lengthy offline training. Specifically, the vehicle routing
problem is regarded as a vehicle tour generation process, and an
encoder-decoder framework with attention layers is proposed to generate tours
of multiple vehicles iteratively. Furthermore, a multi-agent reinforcement
learning method with an unsupervised auxiliary network is developed for the
model training. Evaluated on four synthetic networks with different scales,
the results demonstrate that the proposed method consistently outperforms
Google OR-Tools and traditional methods with little computation time. In
addition, we validate the robustness of the well-trained model by varying the
number of customers and the capacities of vehicles.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2020 14:26:27 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Oct 2020 09:21:32 GMT"
}
] | 1,603,843,200,000 | [
[
"Zhang",
"Ke",
""
],
[
"Li",
"Meng",
""
],
[
"Zhang",
"Zhengchao",
""
],
[
"Lin",
"Xi",
""
],
[
"He",
"Fang",
""
]
] |
2002.05615 | Steven Carr | Steven Carr, Nils Jansen and Ufuk Topcu | Verifiable RNN-Based Policies for POMDPs Under Temporal Logic
Constraints | 8 pages, 5 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks (RNNs) have emerged as an effective representation
of control policies in sequential decision-making problems. However, a major
drawback in the application of RNN-based policies is the difficulty in
providing formal guarantees on the satisfaction of behavioral specifications,
e.g. safety and/or reachability. By integrating techniques from formal methods
and machine learning, we propose an approach to automatically extract a
finite-state controller (FSC) from an RNN, which, when composed with a
finite-state system model, is amenable to existing formal verification tools.
Specifically, we introduce an iterative modification to the so-called quantized
bottleneck insertion technique to create an FSC as a randomized policy with
memory. For the cases in which the resulting FSC fails to satisfy the
specification, verification generates diagnostic information. We utilize this
information to either adjust the amount of memory in the extracted FSC or
perform focused retraining of the RNN. While generally applicable, we detail
the resulting iterative procedure in the context of policy synthesis for
partially observable Markov decision processes (POMDPs), which is known to be
notoriously hard. The numerical experiments show that the proposed approach
outperforms traditional POMDP synthesis methods by 3 orders of magnitude within
2% of optimal benchmark values.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2020 16:38:38 GMT"
}
] | 1,581,638,400,000 | [
[
"Carr",
"Steven",
""
],
[
"Jansen",
"Nils",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
2002.05769 | Mark Ho | Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas
L. Griffiths | The Efficiency of Human Cognition Reflects Planned Information
Processing | 13 pg (incl. supplemental materials); included in Proceedings of the
34th AAAI Conference on Artificial Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning is useful. It lets people take actions that have desirable long-term
consequences. But, planning is hard. It requires thinking about consequences,
which consumes limited computational and cognitive resources. Thus, people
should plan their actions, but they should also be smart about how they deploy
resources used for planning their actions. Put another way, people should also
"plan their plans". Here, we formulate this aspect of planning as a
meta-reasoning problem and formalize it in terms of a recursive Bellman
objective that incorporates both task rewards and information-theoretic
planning costs. Our account makes quantitative predictions about how people
should plan and meta-plan as a function of the overall structure of a task,
which we test in two experiments with human participants. We find that people's
reaction times reflect a planned use of information processing, consistent with
our account. This formulation of planning to plan provides new insight into the
function of hierarchical planning, state abstraction, and cognitive control in
both humans and machines.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2020 20:34:33 GMT"
}
] | 1,581,897,600,000 | [
[
"Ho",
"Mark K.",
""
],
[
"Abel",
"David",
""
],
[
"Cohen",
"Jonathan D.",
""
],
[
"Littman",
"Michael L.",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] |
2002.06261 | Carlos Aspillaga | Carlos Aspillaga, Andr\'es Carvallo, Vladimir Araujo | Stress Test Evaluation of Transformer-based Models in Natural Language
Understanding Tasks | Accepted paper LREC2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been significant progress in recent years in the field of Natural
Language Processing thanks to the introduction of the Transformer architecture.
Current state-of-the-art models, via a large number of parameters and
pre-training on massive text corpora, have shown impressive results on several
downstream tasks. Many researchers have studied previous (non-Transformer)
models to understand their actual behavior under different scenarios, showing
that these models are taking advantage of clues or failures of datasets and
that slight perturbations on the input data can severely reduce their
performance. In contrast, recent models have not been systematically tested
with adversarial-examples in order to show their robustness under severe stress
conditions. For that reason, this work evaluates three Transformer-based models
(RoBERTa, XLNet, and BERT) in Natural Language Inference (NLI) and Question
Answering (QA) tasks to determine whether they are more robust or have the same
flaws as their predecessors. As a result, our experiments reveal that RoBERTa,
XLNet and BERT are more robust than recurrent neural network models to stress
tests for both NLI and QA tasks. Nevertheless, they are still very fragile and
demonstrate various unexpected behaviors, thus revealing that there is still
room for future improvement in this field.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2020 21:52:41 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Mar 2020 18:45:48 GMT"
}
] | 1,585,612,800,000 | [
[
"Aspillaga",
"Carlos",
""
],
[
"Carvallo",
"Andrés",
""
],
[
"Araujo",
"Vladimir",
""
]
] |
2002.06276 | Jeannette Wing | Jeannette M. Wing | Trustworthy AI | 12 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The promise of AI is huge. AI systems have already achieved good enough
performance to be in our streets and in our homes. However, they can be brittle
and unfair. For society to reap the benefits of AI systems, society needs to be
able to trust them. Inspired by decades of progress in trustworthy computing,
we suggest what trustworthy properties would be desired of AI systems. By
enumerating a set of new research questions, we explore one approach--formal
verification--for ensuring trust in AI. Trustworthy AI ups the ante on both
trustworthy computing and formal methods.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2020 22:45:36 GMT"
}
] | 1,581,984,000,000 | [
[
"Wing",
"Jeannette M.",
""
]
] |
2002.06290 | Michal Warchalski | Michal Warchalski, Dimitrije Radojevic, Milos Milosevic | Deep RL Agent for a Real-Time Action Strategy Game | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a reinforcement learning environment based on Heroic - Magic
Duel, a 1 v 1 action strategy game. This domain is non-trivial for several
reasons: it is a real-time game, the state space is large, the information
given to the player before and at each step of a match is imperfect, and the
distribution of actions is dynamic. Our main contribution is a deep
reinforcement learning agent playing the game at a competitive level that we
trained using PPO and self-play with multiple competing agents, employing only
a simple reward of $\pm 1$ depending on the outcome of a single match. Our best
self-play agent obtains around a $65\%$ win rate against the existing AI and
over a $50\%$ win rate against a top human player.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2020 01:09:56 GMT"
}
] | 1,581,984,000,000 | [
[
"Warchalski",
"Michal",
""
],
[
"Radojevic",
"Dimitrije",
""
],
[
"Milosevic",
"Milos",
""
]
] |
2002.06432 | Tom Silver | Tom Silver and Rohan Chitnis | PDDLGym: Gym Environments from PDDL Problems | ICAPS 2020 PRL Workshop | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | We present PDDLGym, a framework that automatically constructs OpenAI Gym
environments from PDDL domains and problems. Observations and actions in
PDDLGym are relational, making the framework particularly well-suited for
research in relational reinforcement learning and relational sequential
decision-making. PDDLGym is also useful as a generic framework for rapidly
building numerous, diverse benchmarks from a concise and familiar specification
language. We discuss design decisions and implementation details, and also
illustrate empirical variations between the 20 built-in environments in terms
of planning and model-learning difficulty. We hope that PDDLGym will facilitate
bridge-building between the reinforcement learning community (from which Gym
emerged) and the AI planning community (which produced PDDL). We look forward
to gathering feedback from all those interested and expanding the set of
available environments and features accordingly. Code:
https://github.com/tomsilver/pddlgym
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2020 19:10:21 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Sep 2020 23:33:35 GMT"
}
] | 1,600,300,800,000 | [
[
"Silver",
"Tom",
""
],
[
"Chitnis",
"Rohan",
""
]
] |
2002.06726 | Ralph Abboud | Ralph Abboud, \.Ismail \.Ilkan Ceylan, Radoslav Dimitrov | On the Approximability of Weighted Model Integration on DNF Structures | To appear in Proceedings of the Seventeenth International Conference
on Principles of Knowledge Representation and Reasoning (KR 2020) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted model counting (WMC) consists of computing the weighted sum of all
satisfying assignments of a propositional formula. WMC is well-known to be
#P-hard for exact solving, but admits a fully polynomial randomized
approximation scheme (FPRAS) when restricted to DNF structures. In this work,
we study weighted model integration, a generalization of weighted model
counting which involves real variables in addition to propositional variables,
and pose the following question: Does weighted model integration on DNF
structures admit an FPRAS? Building on classical results from approximate
volume computation and approximate weighted model counting, we show that
weighted model integration on DNF structures can indeed be approximated for a
class of weight functions. Our approximation algorithm is based on three
subroutines, each of which can be a weak (i.e., approximate), or a strong
(i.e., exact) oracle, and in all cases, comes along with accuracy guarantees.
We experimentally verify our approach over randomly generated DNF instances of
varying sizes, and show that our algorithm scales to large problem instances,
involving up to 1K variables, which are currently out of reach for existing,
general-purpose weighted model integration solvers.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2020 00:29:41 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Mar 2020 12:59:45 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Jul 2020 09:27:12 GMT"
}
] | 1,594,684,800,000 | [
[
"Abboud",
"Ralph",
""
],
[
"Ceylan",
"İsmail İlkan",
""
],
[
"Dimitrov",
"Radoslav",
""
]
] |
2002.07418 | Peng Zhang | Peng Zhang, Jianye Hao, Weixun Wang, Hongyao Tang, Yi Ma, Yihai Duan,
Yan Zheng | KoGuN: Accelerating Deep Reinforcement Learning via Integrating Human
Suboptimal Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning agents usually learn from scratch, which requires a
large number of interactions with the environment. This is quite different from
the learning process of humans. When faced with a new task, humans naturally
draw on common sense and prior knowledge to derive an initial policy and to
guide the subsequent learning process. Although this prior knowledge may not be
fully applicable to the new task, learning is significantly sped up, since the
initial policy ensures a quick start and the intermediate guidance avoids
unnecessary exploration. Taking this inspiration, we
propose knowledge guided policy network (KoGuN), a novel framework that
combines human prior suboptimal knowledge with reinforcement learning. Our
framework consists of a fuzzy rule controller to represent human knowledge and
a refine module to fine-tune suboptimal prior knowledge. The proposed framework
is end-to-end and can be combined with existing policy-based reinforcement
learning algorithms. We conduct experiments on both discrete and continuous
control tasks. The empirical results show that our approach, which combines
human suboptimal knowledge and RL, achieves a significant improvement in the
learning efficiency of flat RL algorithms, even with very low-performance human
prior knowledge.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2020 07:58:27 GMT"
},
{
"version": "v2",
"created": "Thu, 21 May 2020 07:02:41 GMT"
}
] | 1,590,105,600,000 | [
[
"Zhang",
"Peng",
""
],
[
"Hao",
"Jianye",
""
],
[
"Wang",
"Weixun",
""
],
[
"Tang",
"Hongyao",
""
],
[
"Ma",
"Yi",
""
],
[
"Duan",
"Yihai",
""
],
[
"Zheng",
"Yan",
""
]
] |
2002.07985 | Zifan Wang | Zifan Wang and Piotr Mardziel and Anupam Datta and Matt Fredrikson | Interpreting Interpretations: Organizing Attribution Methods by Criteria | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by distinct, though related, criteria, a growing number of
attribution methods have been developed to interpret deep learning. While each
relies on the interpretability of the concept of "importance" and our ability
to visualize patterns, explanations produced by the methods often differ. As a
result, input attributions for vision models fail to provide any level of human
understanding of model behaviour. In this work we expand the foundations of
human-understandable concepts with which attributions can be interpreted beyond
"importance" and its visualization; we incorporate the logical concepts of
necessity and sufficiency, and the concept of proportionality. We define
metrics to represent these concepts as quantitative aspects of an attribution.
This allows us to compare attributions produced by different methods and
interpret them in novel ways: to what extent does this attribution (or this
method) represent the necessity or sufficiency of the highlighted inputs, and
to what extent is it proportional? We evaluate our measures on a collection of
methods explaining convolutional neural networks (CNNs) for image
classification. We conclude that some attribution methods are more appropriate
for interpretation in terms of necessity while others are better suited in
terms of sufficiency, and that no method is always the most appropriate in
terms of both.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2020 03:37:29 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Apr 2020 17:29:09 GMT"
}
] | 1,586,217,600,000 | [
[
"Wang",
"Zifan",
""
],
[
"Mardziel",
"Piotr",
""
],
[
"Datta",
"Anupam",
""
],
[
"Fredrikson",
"Matt",
""
]
] |
2002.08103 | Pierre Monnin | Pierre Monnin, Miguel Couceiro, Amedeo Napoli, Adrien Coulet | Knowledge-Based Matching of $n$-ary Tuples | null | null | 10.1007/978-3-030-57855-8_4 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing number of data and knowledge sources are accessible by human
and software agents in the expanding Semantic Web. Sources may differ in
granularity or completeness, and thus be complementary. Consequently, they
should be reconciled in order to unlock the full potential of their conjoint
knowledge. In particular, units should be matched within and across sources,
and their level of relatedness should be classified as equivalent, more
specific, or similar. This task is challenging since knowledge units can be
heterogeneously represented in sources (e.g., in terms of vocabularies). In
this paper, we focus on matching n-ary tuples in a knowledge base with a
rule-based methodology. To alleviate heterogeneity issues, we rely on domain
knowledge expressed by ontologies. We tested our method on the biomedical
domain of pharmacogenomics by searching alignments among 50,435 n-ary tuples
from four different real-world sources. Results highlight noteworthy agreements
and particularities within and across sources.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2020 11:01:33 GMT"
},
{
"version": "v2",
"created": "Thu, 14 May 2020 18:51:53 GMT"
}
] | 1,605,139,200,000 | [
[
"Monnin",
"Pierre",
""
],
[
"Couceiro",
"Miguel",
""
],
[
"Napoli",
"Amedeo",
""
],
[
"Coulet",
"Adrien",
""
]
] |
2002.08136 | Daniel Molina Dr. | Daniel Molina and Javier Poyatos and Javier Del Ser and Salvador
Garc\'ia and Amir Hussain and Francisco Herrera | Comprehensive Taxonomies of Nature- and Bio-inspired Optimization:
Inspiration versus Algorithmic Behavior, Critical Analysis and
Recommendations (from 2020 to 2024) | 89 pages, 9 figures | Cognitive Computation 12:5 (2020) 897-939 | 10.1007/s12559-020-09730-8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, bio-inspired optimization methods, which mimic biological
processes to solve complex problems, have gained popularity in the literature.
The proliferation of proposals proves the growing interest in this field, with
nature- and bio-inspired algorithms, applications, and guidelines steadily
increasing in number. However, the exponential
rise in the number of bio-inspired algorithms poses a challenge to the future
trajectory of this research domain. Across the five versions of this document,
the number of approaches has grown incessantly, with a new biological
description often taking precedence over real problem-solving. This document
presents two comprehensive taxonomies: one based on principles of biological
similarity, and the other based on operational aspects associated with the
iteration of population models that initially have a biological inspiration.
Therefore,
these taxonomies enable researchers to categorize existing algorithmic
developments into well-defined classes, considering two criteria: the source of
inspiration, and the behavior exhibited by each algorithm. Using these
taxonomies, we classify 518 algorithms based on nature-inspired and
bio-inspired principles. Each algorithm within these categories is thoroughly
examined, allowing for a critical synthesis of design trends and similarities,
and identifying the most analogous classical algorithm for each proposal. From
our analysis, we conclude that a poor relationship is often found between the
natural inspiration of an algorithm and its behavior. Furthermore, similarities
in terms of behavior between different algorithms are greater than what is
claimed in their public disclosure: specifically, we show that more than
one-fourth of the reviewed solvers are versions of classical algorithms. The
conclusions from the analysis of the algorithms lead to several learned
lessons.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2020 12:34:45 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Feb 2020 09:27:38 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Apr 2021 13:54:37 GMT"
},
{
"version": "v4",
"created": "Sat, 7 May 2022 12:08:01 GMT"
},
{
"version": "v5",
"created": "Wed, 17 Apr 2024 07:59:26 GMT"
}
] | 1,713,398,400,000 | [
[
"Molina",
"Daniel",
""
],
[
"Poyatos",
"Javier",
""
],
[
"Del Ser",
"Javier",
""
],
[
"García",
"Salvador",
""
],
[
"Hussain",
"Amir",
""
],
[
"Herrera",
"Francisco",
""
]
] |
2002.08627 | Scott McLachlan Dr | Evangelia Kyrimi, Scott McLachlan, Kudakwashe Dube, Mariana R. Neves,
Ali Fahmi, Norman Fenton | A Comprehensive Scoping Review of Bayesian Networks in Healthcare: Past,
Present and Future | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | No comprehensive review of Bayesian networks (BNs) in healthcare has been
published in the past, making it difficult to organize the research
contributions in the present and identify challenges and neglected areas that
need to be addressed in the future. This unique and novel scoping review of BNs
in healthcare provides an analytical framework for comprehensively
characterizing the domain and its current state. The review shows that: (1) BNs
in healthcare are not used to their full potential; (2) a generic BN
development process is lacking; (3) limitations exist in the way BNs in
healthcare are presented in the literature, which impacts understanding,
consensus towards systematic methodologies, practice and adoption of BNs; and
(4) a gap exists between having an accurate BN and a useful BN that impacts
clinical practice. This review empowers researchers and clinicians with an
analytical framework and findings that will enable understanding of the need to
address the problems of restricted aims of BNs, ad hoc BN development methods,
and the lack of BN adoption in practice. To map the way forward, the paper
proposes future research directions and makes recommendations regarding BN
development methods and adoption in practice.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2020 09:04:38 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2020 11:02:16 GMT"
}
] | 1,583,107,200,000 | [
[
"Kyrimi",
"Evangelia",
""
],
[
"McLachlan",
"Scott",
""
],
[
"Dube",
"Kudakwashe",
""
],
[
"Neves",
"Mariana R.",
""
],
[
"Fahmi",
"Ali",
""
],
[
"Fenton",
"Norman",
""
]
] |
2002.08957 | Lashon Booker | Lashon B. Booker and Scott A. Musman | A Model-Based, Decision-Theoretic Perspective on Automated Cyber
Response | 8 pages, 6 figures, 1 table; Presented at the AAAI-20 Workshop on
Artificial Intelligence for Cyber Security (AICS) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cyber-attacks can occur at machine speeds that are far too fast for
human-in-the-loop (or sometimes on-the-loop) decision making to be a viable
option. Although human inputs are still important, a defensive Artificial
Intelligence (AI) system must have considerable autonomy in these
circumstances. When the AI system is model-based, its behavior responses can be
aligned with risk-aware cost/benefit tradeoffs that are defined by
user-supplied preferences that capture the key aspects of how human operators
understand the system, the adversary and the mission. This paper describes an
approach to automated cyber response that is designed along these lines. We
combine a simulation of the system to be defended with an anytime online
planner to solve cyber defense problems characterized as partially observable
Markov decision problems (POMDPs).
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2020 15:30:59 GMT"
}
] | 1,582,502,400,000 | [
[
"Booker",
"Lashon B.",
""
],
[
"Musman",
"Scott A.",
""
]
] |
2002.09636 | Matthew Guzdial | Matthew Guzdial and Mark Riedl | Conceptual Game Expansion | 14 pages, 6 figures, 2 tables, IEEE Transactions on Games | null | 10.1109/TG.2021.3060005 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated game design is the problem of automatically producing games through
computational processes. Traditionally, these methods have relied on the
authoring of search spaces by a designer, defining the space of all possible
games for the system to author. In this paper, we instead learn representations
of existing games from gameplay video and use these to approximate a search
space of novel games. In a human subject study we demonstrate that these novel
games are indistinguishable from human games in terms of challenge, and that
one of the novel games was equivalent to one of the human games in terms of
fun, frustration, and likeability.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2020 05:51:54 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Sep 2020 06:25:54 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Feb 2021 00:34:42 GMT"
}
] | 1,613,952,000,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Riedl",
"Mark",
""
]
] |
2002.09811 | Florian Richoux | Florian Richoux and Jean-Fran\c{c}ois Baffier | Learning Interpretable Error Functions for Combinatorial Optimization
Problem Modeling | null | null | 10.1007/s10472-022-09829-8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Constraint Programming, constraints are usually represented as predicates
allowing or forbidding combinations of values. However, some algorithms exploit
a finer representation: error functions. Their usage comes with a price though:
it makes problem modeling significantly harder. Here, we propose a method to
automatically learn an error function corresponding to a constraint, given a
function deciding if assignments are valid or not. This is, to the best of our
knowledge, the first attempt to automatically learn error functions for hard
constraints. Our method uses a variant of neural networks we named
Interpretable Compositional Networks, allowing us to get interpretable results,
unlike regular artificial neural networks. Experiments on 5 different
constraints show that our system can learn functions that scale to high
dimensions, and can learn fairly good functions over incomplete spaces.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2020 02:58:51 GMT"
},
{
"version": "v2",
"created": "Sat, 23 May 2020 01:57:45 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Apr 2021 07:37:13 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Jul 2021 02:43:26 GMT"
}
] | 1,678,320,000,000 | [
[
"Richoux",
"Florian",
""
],
[
"Baffier",
"Jean-François",
""
]
] |
2002.10149 | Emmanuelle-Anna Dietz Saldanha | Emmanuelle-Anna Dietz Saldanha, Antonis Kakas | Cognitive Argumentation and the Suppression Task | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the challenge of modeling human reasoning, within a new
framework called Cognitive Argumentation. This framework rests on the
assumption that human logical reasoning is inherently a process of dialectic
argumentation and aims to develop a cognitive model for human reasoning that is
computational and implementable. To give logical reasoning a human cognitive
form, the framework relies on cognitive principles, based on empirical and
theoretical work in Cognitive Science, to suitably adapt a general and abstract
framework of computational argumentation from AI. The approach of Cognitive
Argumentation is evaluated with respect to Byrne's suppression task, where the
aim is not only to capture the suppression effect between different groups of
people but also to account for the variation of reasoning within each group.
Two main cognitive principles are particularly important for capturing the
human conditional reasoning that explains the participants' responses: (i) the
interpretation of a condition within a conditional as sufficient and/or
necessary and (ii) the mode of reasoning either as predictive or explanatory.
We argue that Cognitive Argumentation provides a coherent and cognitively
adequate model for human conditional reasoning that allows a natural
distinction between definite and plausible conclusions, exhibiting the
important characteristics of context-sensitive and defeasible reasoning.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2020 10:30:39 GMT"
}
] | 1,582,588,800,000 | [
[
"Saldanha",
"Emmanuelle-Anna Dietz",
""
],
[
"Kakas",
"Antonis",
""
]
] |
2002.10373 | Pedro Zuidberg Dos Martires | Pedro Zuidberg Dos Martires, Nitesh Kumar, Andreas Persson, Amy
Loutfi, Luc De Raedt | Symbolic Learning and Reasoning with Noisy Data for Probabilistic
Anchoring | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic agents should be able to learn from sub-symbolic sensor data, and at
the same time, be able to reason about objects and communicate with humans on a
symbolic level. This raises the question of how to overcome the gap between
symbolic and sub-symbolic artificial intelligence. We propose a semantic world
modeling approach based on bottom-up object anchoring using an object-centered
representation of the world. Perceptual anchoring processes continuous
perceptual sensor data and maintains a correspondence to a symbolic
representation. We extend the definitions of anchoring to handle multi-modal
probability distributions and we couple the resulting symbol anchoring system
to a probabilistic logic reasoner for performing inference. Furthermore, we use
statistical relational learning to enable the anchoring framework to learn
symbolic knowledge in the form of a set of probabilistic logic rules of the
world from noisy and sub-symbolic sensor input. The resulting framework, which
combines perceptual anchoring and statistical relational learning, is able to
maintain a semantic world model of all the objects that have been perceived
over time, while still exploiting the expressiveness of logical rules to reason
about the state of objects which are not directly observed through sensory
input data. To validate our approach we demonstrate, on the one hand, the
ability of our system to perform probabilistic reasoning over multi-modal
probability distributions, and on the other hand, the learning of probabilistic
logical rules from anchored objects produced by perceptual observations. The
learned logical rules are, subsequently, used to assess our proposed
probabilistic anchoring procedure. We demonstrate our system in a setting
involving object interactions where object occlusions arise and where
probabilistic inference is needed to correctly anchor objects.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2020 16:58:00 GMT"
}
] | 1,582,588,800,000 | [
[
"Martires",
"Pedro Zuidberg Dos",
""
],
[
"Kumar",
"Nitesh",
""
],
[
"Persson",
"Andreas",
""
],
[
"Loutfi",
"Amy",
""
],
[
"De Raedt",
"Luc",
""
]
] |
2002.11107 | Okyu Kwon | Okyu Kwon | Very simple statistical evidence that AlphaGo has exceeded human limits
in playing GO game | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning technology is making great progress in solving the challenging
problems of artificial intelligence, hence machine learning based on artificial
neural networks is in the spotlight again. In some areas, artificial
intelligence based on deep learning is beyond human capabilities. It seemed
extremely difficult for a machine to beat a human in the game of Go, but
AlphaGo has been shown to beat a professional player. By looking at the
statistical
distribution of the distance in which the Go stones are laid in succession, we
find a clear trace that AlphaGo has surpassed human abilities. Laying stones at
a large distance from the preceding stone is more frequent for professional
players than for ordinary players, and more frequent still for AlphaGo than for
professional players. Moreover, the difference shown by AlphaGo is much more
pronounced than that between ordinary players and professional players.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2020 01:46:12 GMT"
}
] | 1,582,761,600,000 | [
[
"Kwon",
"Okyu",
""
]
] |
2002.11485 | Christopher A. Tucker | Christopher A. Tucker | A machine-learning software-systems approach to capture social,
regulatory, governance, and climate problems | 7 pages, 1 figure, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper will discuss the role of an artificially-intelligent computer
system as critique-based, implicit-organizational, and an inherently necessary
device, deployed in synchrony with parallel governmental policy, as a genuine
means of capturing nation-population complexity in quantitative form, public
contentment in societal-cooperative economic groups, regulatory proposition,
and governance-effectiveness domains. It will discuss a solution involving a
well-known algorithm and proffer an improved mechanism for
knowledge-representation, thereby increasing range of utility, scope of
influence (in terms of differentiating class sectors) and operational
efficiency. It will finish with a discussion of these and other historical
implications.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2020 13:00:52 GMT"
}
] | 1,582,761,600,000 | [
[
"Tucker",
"Christopher A.",
""
]
] |
2002.11508 | Amar Isli | Amar Isli | A binarized-domains arc-consistency algorithm for TCSPs: its
computational analysis and its use as a filtering procedure in solution
search algorithms | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | TCSPs (Temporal Constraint Satisfaction Problems), as defined in [Dechter et
al., 1991], get rid of unary constraints by binarizing them after having added
an "origin of the world" variable. In this work, we look at the constraints
between the "origin of the world" variable and the other variables, as the
(binarized) domains of these other variables. With this in mind, we define a
notion of arc-consistency for TCSPs, which we will refer to as
binarized-domains Arc-Consistency, or bdArc-Consistency for short. We provide
an algorithm achieving bdArc-Consistency for a TCSP, which we will refer to as
bdAC-3, for it is an adaptation of Mackworth's [1977] well-known
arc-consistency algorithm AC-3. We show that if a convex TCSP, referred to in
[Dechter et al., 1991] as an STP (Simple Temporal Problem), is
bdArc-Consistent, and its "origin of the world" variable disconnected from none
of the other variables, its binarized domains are minimal. We provide two
polynomial backtrack-free procedures: one for the task of getting, from a
bdArc-Consistent STP, either that it is inconsistent or, in case of
consistency, a bdArc-Consistent STP refinement whose "origin of the world"
variable is disconnected from none of the other variables; the other for the
task of getting a solution from a bdArc-Consistent STP whose "origin of the
world" variable is disconnected from none of the other variables. We then show
how to use our results both in a general TCSP solver and in a TCSP-based job
shop scheduler. From our work can be extracted a one-to-all all-to-one shortest
paths algorithm of an IR-labelled directed graph. Finally, we show that an
existing adaptation to TCSPs of Mackworth's [1977] path-consistency algorithm
PC-2 is not guaranteed to always terminate, and correct it.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2020 18:15:03 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Apr 2021 16:40:30 GMT"
}
] | 1,617,580,800,000 | [
[
"Isli",
"Amar",
""
]
] |
2002.11710 | Joseph Tassone | Joseph Tassone and Salimur Choudhury | Algorithms for Optimizing Fleet Scheduling of Air Ambulances | 14 pages, 4 figures, 16 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proper scheduling of air assets can be the difference between life and death
for a patient. While poor scheduling can be incredibly problematic during
hospital transfers, it can be potentially catastrophic in the case of a
disaster. These issues are amplified in the case of an air emergency medical
service (EMS) system where populations are dispersed, and resources are
limited. Exact methodologies exist for scheduling missions, although actual
calculation times can be quite significant given a large enough problem space.
For this research, known coordinates of air and health
facilities were used in conjunction with a formulated integer linear
programming model. This was then programmed through Gurobi so that performance
could be compared against custom algorithmic solutions. Two methods were
developed, one based on neighbourhood search and the other on Tabu search.
While both were able to achieve results quite close to the Gurobi solution, the
Tabu search outperformed the former algorithm. Additionally, it was able to do
so in greatly decreased time, with Gurobi actually being unable to solve to
optimality in larger examples. Parallel variations were also developed with the
compute unified device architecture (CUDA), though these did not improve the
timing given the smaller sample size.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2020 21:49:46 GMT"
}
] | 1,582,848,000,000 | [
[
"Tassone",
"Joseph",
""
],
[
"Choudhury",
"Salimur",
""
]
] |
2002.11714 | Taniya Seth | Taniya Seth and Pranab K. Muhuri | Type-2 Fuzzy Set based Hesitant Fuzzy Linguistic Term Sets for
Linguistic Decision Making | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approaches based on computing with words find good applicability in decision
making systems. Predominantly finding their basis in type-1 fuzzy sets,
computing with words approaches employ type-1 fuzzy sets as the semantics of
linguistic terms. However, type-2 fuzzy sets have been shown to be
scientifically more appropriate to represent linguistic information in
practical systems. They take into account both the intra-uncertainty as well as
the inter-uncertainty in cases where the linguistic information comes from a
group of experts. Hence in this paper, we propose to introduce linguistic terms
whose semantics are denoted by interval type-2 fuzzy sets within the hesitant
fuzzy linguistic term set framework, resulting in type-2 fuzzy sets based
hesitant fuzzy linguistic term sets. We also introduce a novel method of
computing type-2 fuzzy envelopes out of multiple interval type-2 fuzzy sets
with trapezoidal membership functions. Furthermore, the proposed framework with
interval type-2 fuzzy sets is applied on a supplier performance evaluation
scenario. Since humans are predominantly involved in the entire supply chain
process, their feedback is crucial when deciding many factors. Towards
the end of the paper, we compare our presented model with various existing
models and demonstrate the advantages of the former.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2020 08:49:52 GMT"
}
] | 1,582,848,000,000 | [
[
"Seth",
"Taniya",
""
],
[
"Muhuri",
"Pranab K.",
""
]
] |
2002.11717 | Constance Thierry | Constance Thierry (1), Jean-Christophe Dubois (1), Yolande Le Gall
(1), Arnaud Martin ((1) Universit\'e de Rennes 1, France) | Modelisation de l'incertitude et de l'imprecision de donnees de
crowdsourcing : MONITOR | in French. Extraction et Gestion des Connaissances (EGC), Jan 2020,
Bruxelles, Belgique | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourcing is defined as the outsourcing of tasks to a crowd of
contributors. The crowd on these platforms is very diverse and includes
malicious contributors who are attracted by the remuneration of tasks and do
not perform them conscientiously. It is essential to identify these
contributors
in order to avoid considering their responses. As not all contributors have the
same aptitude for a task, it seems appropriate to give weight to their answers
according to their qualifications. This paper, published at the ICTAI 2019
conference, proposes a method, MONITOR, for estimating the profile of the
contributor and aggregating the responses using belief function theory.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2020 14:58:11 GMT"
}
] | 1,582,848,000,000 | [
[
"Thierry",
"Constance",
"",
"Université de Rennes 1, France"
],
[
"Dubois",
"Jean-Christophe",
"",
"Université de Rennes 1, France"
],
[
"Gall",
"Yolande Le",
"",
"Université de Rennes 1, France"
],
[
"Martin",
"Arnaud",
""
]
] |
2002.11909 | Yi Chu | Yi Chu, Chuan Luo, Holger H. Hoos, Qingwei Lin, Haihang You | Improving the Performance of Stochastic Local Search for Maximum Vertex
Weight Clique Problem Using Programming by Optimization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The maximum vertex weight clique problem (MVWCP) is an important
generalization of the maximum clique problem (MCP) that has a wide range of
real-world applications. In situations where rigorous guarantees regarding the
optimality of solutions are not required, MVWCP is usually solved using
stochastic local search (SLS) algorithms, which also define the state of the
art for solving this problem. However, there is no single SLS algorithm which
gives the best performance across all classes of MVWCP instances, and it is
challenging to effectively identify the most suitable algorithm for each class
of MVWCP instances. In this work, we follow the paradigm of Programming by
Optimization (PbO) to develop a new, flexible and highly parametric SLS
framework for solving MVWCP, combining, for the first time, a broad range of
effective heuristic mechanisms. By automatically configuring this PbO-MWC
framework, we achieve substantial advances in the state-of-the-art in solving
MVWCP over a broad range of prominent benchmarks, including two derived from
real-world applications in transplantation medicine (kidney exchange) and
assessment of research excellence.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2020 04:22:19 GMT"
}
] | 1,582,848,000,000 | [
[
"Chu",
"Yi",
""
],
[
"Luo",
"Chuan",
""
],
[
"Hoos",
"Holger H.",
""
],
[
"Lin",
    "Qingwei",
""
],
[
"You",
"Haihang",
""
]
] |
2002.12441 | Heytem Zitoun | Heytem Zitoun, Claude Michel, Laurent Michel, Michel Rueher | An efficient constraint-based framework for handling floating point SMT
problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the 2019 version of \us{}, a novel Constraint
Programming framework for floating point verification problems expressed with
the SMT language of SMTLIB. SMT solvers decompose their task by delegating to
specific theories (e.g., floating point, bit vectors, arrays, ...) the task of
reasoning about combinatorial or otherwise complex constraints for which the SAT
encoding would be cumbersome or ineffective. This decomposition and encoding
processes lead to the obfuscation of the high-level constraints and a loss of
information on the structure of the combinatorial model. In \us{}, constraints
over the floats are first-class objects, and the purpose is to expose and
exploit structures of floating point domains to enhance the search process. A
symbolic phase rewrites each SMTLIB instance to elementary constraints, and
eliminates auxiliary variables whose presence is counterproductive. A
diversification technique within the search steers it away from costly
enumerations in unproductive areas of the search space. The empirical
evaluation demonstrates that the 2019 version of \us{} is competitive on
computationally challenging floating point benchmarks that induce significant
search efforts even for other CP solvers. It highlights that the ability to
harness both inference and search is critical. Indeed, it yields a factor 3
improvement over Colibri and is up to 10 times faster than SMT solvers. The
evaluation was conducted over 214 benchmarks (the Griggio suite), a standard
within SMTLIB.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2020 21:11:22 GMT"
}
] | 1,583,107,200,000 | [
[
"Zitoun",
"Heytem",
""
],
[
"Michel",
"Claude",
""
],
[
"Michel",
"Laurent",
""
],
[
"Rueher",
"Michel",
""
]
] |
2002.12445 | Sebastian Sardina | Daniel Ciolek, Nicol\'as D'Ippolito, Alberto Pozanco, Sebastian
Sardina | Multi-tier Automated Planning for Adaptive Behavior (Extended Version) | Shorter version in ICAPS'20 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A planning domain, as any model, is never complete and inevitably makes
assumptions about the environment's dynamics. By allowing the specification of just
one domain model, the knowledge engineer is only able to make one set of
assumptions, and to specify a single objective-goal. Borrowing from work in
Software Engineering, we propose a multi-tier framework for planning that
allows the specification of different sets of assumptions, and of different
corresponding objectives. The framework aims to support the synthesis of
adaptive behavior so as to mitigate the intrinsic risk in any planning modeling
task. After defining the multi-tier planning task and its solution concept, we
show how to solve problem instances by a succinct compilation to a form of
non-deterministic planning. In doing so, our technique justifies the
applicability of planning with both fair and unfair actions, and the need for
more efforts in developing planning systems supporting dual fairness
assumptions.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2020 21:16:01 GMT"
}
] | 1,583,107,200,000 | [
[
"Ciolek",
"Daniel",
""
],
[
"D'Ippolito",
"Nicolás",
""
],
[
"Pozanco",
"Alberto",
""
],
[
"Sardina",
"Sebastian",
""
]
] |
2002.12447 | Heytem Zitoun | Heytem Zitoun, Claude Michel, Laurent Michel, Michel Rueher | Bringing freedom in variable choice when searching counter-examples in
floating point programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Program verification techniques typically focus on finding counter-examples
that violate properties of a program. Constraint programming offers a
convenient way to verify programs by modeling their state transformations and
specifying searches that seek counter-examples. Floating-point computations
present additional challenges for verification given the semantic subtleties of
floating point arithmetic. This paper focuses on search strategies for CSPs
over floating point constraint systems dedicated to program
verification. It introduces a new search heuristic based on the global number
of occurrences that outperforms state-of-the-art strategies. More importantly,
it demonstrates that a new technique that only branches on input variables of
the verified program improves performance. It composes with a diversification
technique that prevents the selection of the same variable within a fixed
horizon, further improving performance and reducing disparities between various
variable choice heuristics. The result is a robust methodology that can tailor
the search strategy according to the sought properties of the counter-example.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2020 21:20:38 GMT"
}
] | 1,583,107,200,000 | [
[
"Zitoun",
"Heytem",
""
],
[
"Michel",
"Claude",
""
],
[
"Michel",
"Laurent",
""
],
[
"Rueher",
"Michel",
""
]
] |
2003.00030 | Romina Abachi | Romina Abachi, Mohammad Ghavamzadeh, Amir-massoud Farahmand | Policy-Aware Model Learning for Policy Gradient Methods | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of learning a model in model-based
reinforcement learning (MBRL). We examine how the planning module of an MBRL
algorithm uses the model, and propose that the model learning module should
incorporate the way the planner is going to use the model. This is in contrast
to conventional model learning approaches, such as those based on maximum
likelihood estimate, that learn a predictive model of the environment without
explicitly considering the interaction of the model and the planner. We focus
on policy-gradient-type planning algorithms and derive new loss functions
for model learning that incorporate how the planner uses the model. We call
this approach Policy-Aware Model Learning (PAML). We theoretically analyze a
generic model-based policy gradient algorithm and provide a convergence
guarantee for the optimized policy. We also empirically evaluate PAML on some
benchmark problems, showing promising results.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2020 19:18:18 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jan 2021 03:20:54 GMT"
}
] | 1,609,804,800,000 | [
[
"Abachi",
"Romina",
""
],
[
"Ghavamzadeh",
"Mohammad",
""
],
[
"Farahmand",
"Amir-massoud",
""
]
] |
2003.00126 | Zhe Zeng Miss | Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van den
Broeck | Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic
Constraints via Message Passing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted model integration (WMI) is a very appealing framework for
probabilistic inference: it allows one to express the complex dependencies of
real-world problems where variables are both continuous and discrete, via the
language of Satisfiability Modulo Theories (SMT), as well as to compute
probabilistic queries with complex logical and arithmetic constraints. Yet,
existing WMI solvers are not ready to scale to these problems. They either
ignore the intrinsic dependency structure of the problem altogether, or they
are limited to overly restrictive structures. To narrow this gap, we derive a
factorized formalism of WMI enabling us to devise a scalable WMI solver based
on message passing, MP-WMI. Namely, MP-WMI is the first WMI solver that allows
one to: 1) perform exact inference on the full class of tree-structured WMI
problems; 2) compute all marginal densities in linear time; 3) amortize
inference across queries. Experimental results show that our solver dramatically
outperforms the existing WMI solvers on a large set of benchmarks.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2020 23:51:45 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Aug 2020 22:41:13 GMT"
}
] | 1,597,968,000,000 | [
[
"Zeng",
"Zhe",
""
],
[
"Morettin",
"Paolo",
""
],
[
"Yan",
"Fanqi",
""
],
[
"Vergari",
"Antonio",
""
],
[
"Broeck",
"Guy Van den",
""
]
] |
2003.00172 | Ziyue Wang | Xiang Zhang, Qingqing Yang, Jinru Ding and Ziyue Wang | Entity Profiling in Knowledge Graphs | 10 pages, 5 figures | in IEEE Access, vol. 8, pp. 27257-27266, 2020 | 10.1109/ACCESS.2020.2971567 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Graphs (KGs) are graph-structured knowledge bases storing factual
information about real-world entities. Understanding the uniqueness of each
entity is crucial to the analyzing, sharing, and reusing of KGs. Traditional
profiling technologies encompass a vast array of methods to find distinctive
features in various applications, which can help to differentiate entities in
the process of human understanding of KGs. In this work, we present a novel
profiling approach to identify distinctive entity features. The distinctiveness
of features is carefully measured by a HAS model, which is a scalable
representation learning model to produce a multi-pattern entity embedding. We
fully evaluate the quality of entity profiles generated from real KGs. The
results show that our approach facilitates human understanding of entities in
KGs.
| [
{
"version": "v1",
"created": "Sat, 29 Feb 2020 03:44:24 GMT"
}
] | 1,583,193,600,000 | [
[
"Zhang",
"Xiang",
""
],
[
"Yang",
"Qingqing",
""
],
[
"Ding",
"Jinru",
""
],
[
"Wang",
"Ziyue",
""
]
] |
2003.00234 | Sumant Pushp | Raza Rahi, Sumant Pushp, Arif Khan, Smriti Kumar Sinha | A Finite State Transducer Based Morphological Analyzer of Maithili
Language | 8 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Morphological analyzers are the essential milestones for many linguistic
applications such as machine translation, word sense disambiguation, spell
checkers, and search engines. Therefore, the development of an effective
morphological analyzer has a greater impact on the computational recognition of
a language. In this paper, we present a finite state transducer based
inflectional morphological analyzer for a resource-poor language of India,
known as Maithili. Maithili is an eastern Indo-Aryan language spoken in the
eastern and northern regions of Bihar in India and the southeastern plains,
known as the Tarai, of Nepal. This work can be recognized as the first step
towards the computational development of Maithili, and it may attract
researchers around the country to uplift the language and establish it in the
computational world.
| [
{
"version": "v1",
"created": "Sat, 29 Feb 2020 11:00:15 GMT"
}
] | 1,583,193,600,000 | [
[
"Rahi",
"Raza",
""
],
[
"Pushp",
"Sumant",
""
],
[
"Khan",
"Arif",
""
],
[
"Sinha",
"Smriti Kumar",
""
]
] |
2003.00411 | Md Zahidul Islam PhD | Mahmood A. Khan, Md Zahidul Islam, Mohsin Hafeez | Data Pre-Processing and Evaluating the Performance of Several Data
Mining Methods for Predicting Irrigation Water Requirement | This 13-page paper is a slightly modified version of our original
conference paper published in the 10th Australasian Data Mining Conference
2012. We then submitted the paper to the Journal of Research and Practice in
IT (JRPIT) as an invited paper. However, despite the acceptance for
publication the paper was never published by JRPIT since the journal
discontinued after it had accepted our paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent drought and population growth are placing unprecedented demand for
the use of available limited water resources. Irrigated agriculture is one of
the major consumers of freshwater. A large amount of water in irrigated
agriculture is wasted due to poor water management practices. To improve water
management in irrigated areas, models for estimation of future water
requirements are needed. Developing a model for forecasting irrigation water
demand can improve water management practices and maximise water productivity.
Data mining can be used effectively to build such models.
In this study, we prepare a dataset containing information on suitable
attributes for forecasting irrigation water demand. The data is obtained from
three different sources namely meteorological data, remote sensing images and
water delivery statements. In order to make the prepared dataset useful for
demand forecasting and pattern extraction, we pre-process the dataset using a
novel approach based on a combination of irrigation and data mining knowledge.
We then apply and compare the effectiveness of different data mining methods
namely decision tree (DT), artificial neural networks (ANNs), systematically
developed forest (SysFor) for multiple trees, support vector machine (SVM),
logistic regression, and the traditional Evapotranspiration (ETc) methods and
evaluate the performance of these models to predict irrigation water demand.
Our experimental results indicate the usefulness of data pre-processing and the
effectiveness of different classifiers. Among the six methods we used, SysFor
produces the best prediction with 97.5% accuracy, followed by the decision tree
with 96% and the ANN with 95%, closely matching the predictions with
actual water usage. Therefore, we recommend using the SysFor and DT models for
irrigation water demand forecasting.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 2020 05:42:04 GMT"
}
] | 1,583,193,600,000 | [
[
"Khan",
"Mahmood A.",
""
],
[
"Islam",
"Md Zahidul",
""
],
[
"Hafeez",
"Mohsin",
""
]
] |
2003.00431 | Kamran Alipour | Kamran Alipour, Jurgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius
Burachas | A Study on Multimodal and Interactive Explanations for Visual Question
Answering | http://ceur-ws.org/Vol-2560/paper44.pdf | Proceedings of the Workshop on Artificial Intelligence Safety
(SafeAI 2020) co-located with 34th AAAI Conference on Artificial Intelligence
(AAAI 2020), New York, USA, Feb 7, 2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainability and interpretability of AI models is an essential factor
affecting the safety of AI. While various explainable AI (XAI) approaches aim
at mitigating the lack of transparency in deep networks, the evidence of the
effectiveness of these approaches in improving usability, trust, and
understanding of AI systems is still missing. We evaluate multimodal
explanations in the setting of a Visual Question Answering (VQA) task, by
asking users to predict the response accuracy of a VQA agent with and without
explanations. We use between-subjects and within-subjects experiments to probe
explanation effectiveness in terms of improving user prediction accuracy,
confidence, and reliance, among other factors. The results indicate that the
explanations help improve human prediction accuracy, especially in trials when
the VQA system's answer is inaccurate. Furthermore, we introduce active
attention, a novel method for evaluating causal attentional effects through
intervention by editing attention maps. User explanation ratings are strongly
correlated with human prediction accuracy and suggest the efficacy of these
explanations in human-machine AI collaboration tasks.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 2020 07:54:01 GMT"
}
] | 1,583,193,600,000 | [
[
"Alipour",
"Kamran",
""
],
[
"Schulze",
"Jurgen P.",
""
],
[
"Yao",
"Yi",
""
],
[
"Ziskind",
"Avi",
""
],
[
"Burachas",
"Giedrius",
""
]
] |
2003.00439 | Yang Li | Chengjun Li and Yang Li | Differential Evolution with Individuals Redistribution for Real
Parameter Single Objective Optimization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential Evolution (DE) is quite powerful for real parameter single
objective optimization. However, the ability of extending or changing search
area when falling into a local optimum is still required to be developed in DE
for accommodating extremely complicated fitness landscapes with a huge number
of local optima. We propose a new flow of DE, termed DE with individuals
redistribution, in which a process of individuals redistribution is invoked
when progress on fitness remains low over generations. In such a process, mutation
and crossover are standardized, while trial vectors are all kept in selection.
Once diversity exceeds a predetermined threshold, our opposition replacement is
executed, then algorithm behavior returns to original mode. In our experiments
based on two benchmark test suites, we apply individuals redistribution in ten
DE algorithms. Versions of the ten DE algorithms based on individuals
redistribution are compared with not only original version but also version
based on complete restart, where individuals redistribution and complete
restart are based on the same entry criterion. Experimental results indicate
that, for most of the DE algorithms, version based on individuals
redistribution performs better than both original version and version based on
complete restart.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 2020 08:40:52 GMT"
}
] | 1,583,193,600,000 | [
[
"Li",
"Chengjun",
""
],
[
"Li",
"Yang",
""
]
] |
2003.00475 | Jing Li | Jing Li, Suiyi Ling, Junle Wang, Zhi Li, Patrick Le Callet | GPM: A Generic Probabilistic Model to Recover Annotator's Behavior and
Ground Truth Labeling | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the big data era, data labeling can be obtained through crowdsourcing.
Nevertheless, the obtained labels are generally noisy, unreliable or even
adversarial. In this paper, we propose a probabilistic graphical annotation
model to infer the underlying ground truth and annotator's behavior. To
accommodate both discrete and continuous application scenarios (e.g.,
classifying scenes vs. rating videos on a Likert scale), the underlying ground
truth is considered following a distribution rather than a single value. In
this way, the reliable but potentially divergent opinions from "good"
annotators can be recovered. The proposed model is able to identify whether an
annotator has worked diligently towards the task during the labeling procedure,
which could be used for further selection of qualified annotators. Our model
has been tested on both simulated data and real-world data, where it always
outperforms the other state-of-the-art models in terms of
accuracy and robustness.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 2020 12:14:52 GMT"
}
] | 1,583,193,600,000 | [
[
"Li",
"Jing",
""
],
[
"Ling",
"Suiyi",
""
],
[
"Wang",
"Junle",
""
],
[
"Li",
"Zhi",
""
],
[
"Callet",
"Patrick Le",
""
]
] |
2003.00635 | Hesham Mostafa | Hesham Mostafa, Marcel Nassar | Permutohedral-GCN: Graph Convolutional Networks with Global Attention | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph convolutional networks (GCNs) update a node's feature vector by
aggregating features from its neighbors in the graph. This ignores potentially
useful contributions from distant nodes. Identifying such useful distant
contributions is challenging due to scalability issues (too many nodes can
potentially contribute) and oversmoothing (aggregating features from too many
nodes risks swamping out relevant information and may result in nodes having
different labels but indistinguishable features). We introduce a global
attention mechanism where a node can selectively attend to, and aggregate
features from, any other node in the graph. The attention coefficients depend
on the Euclidean distance between learnable node embeddings, and we show that
the resulting attention-based global aggregation scheme is analogous to
high-dimensional Gaussian filtering. This makes it possible to use efficient
approximate Gaussian filtering techniques to implement our attention-based
global aggregation scheme. By employing an approximate filtering method based
on the permutohedral lattice, the time complexity of our proposed global
aggregation scheme only grows linearly with the number of nodes. The resulting
GCNs, which we term permutohedral-GCNs, are differentiable and trained
end-to-end, and they achieve state of the art performance on several node
classification benchmarks.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 02:44:52 GMT"
}
] | 1,583,193,600,000 | [
[
"Mostafa",
"Hesham",
""
],
[
"Nassar",
"Marcel",
""
]
] |
2003.00683 | Rupam Acharyya | Rupam Acharyya, Shouman Das, Ankani Chattoraj, Oishani Sengupta, Md
Iftekar Tanveer | Detection and Mitigation of Bias in Ted Talk Ratings | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unbiased data collection is essential to guaranteeing fairness in artificial
intelligence models. Implicit bias, a form of behavioral conditioning that
leads us to attribute predetermined characteristics to members of certain
groups, informs the data collection process. This paper quantifies implicit
bias in viewer ratings of TEDTalks, a diverse social platform assessing social
and professional performance, in order to present the correlations of different
kinds of bias across sensitive attributes. Although the viewer ratings of these
videos should purely reflect the speaker's competence and skill, our analysis
of the ratings demonstrates the presence of overwhelming and predominant
implicit bias with respect to race and gender. In our paper, we present
strategies to detect and mitigate bias that are critical to removing unfairness
in AI.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 06:13:24 GMT"
}
] | 1,583,193,600,000 | [
[
"Acharyya",
"Rupam",
""
],
[
"Das",
"Shouman",
""
],
[
"Chattoraj",
"Ankani",
""
],
[
"Sengupta",
"Oishani",
""
],
[
"Tanveer",
"Md Iftekar",
""
]
] |
2003.00749 | David Tuckey | David Tuckey, Alessandra Russo, Krysia Broda | A general framework for scientifically inspired explanations in AI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainability in AI is gaining attention in the computer science community
in response to the increasing success of deep learning and the important need
of justifying how such systems make predictions in life-critical applications.
The focus of explainability in AI has predominantly been on trying to gain
insights into how machine learning systems function by exploring relationships
between input data and predicted outcomes or by extracting simpler
interpretable models. Through literature surveys of philosophy and social
science, authors have highlighted the sharp difference between these generated
explanations and human-made explanations and claimed that current explanations
in AI do not take into account the complexity of human interaction to allow for
effective information passing to non-expert users. In this paper we instantiate
the concept of structure of scientific explanation as the theoretical
underpinning for a general framework in which explanations for AI systems can
be implemented. This framework aims to provide the tools to build a
"mental-model" of any AI system so that the interaction with the user can
provide information on demand and be closer to the nature of human-made
explanations. We illustrate how we can utilize this framework through two very
different examples: an artificial neural network and a Prolog solver and we
provide a possible implementation for both examples.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 10:32:21 GMT"
}
] | 1,583,193,600,000 | [
[
"Tuckey",
"David",
""
],
[
"Russo",
"Alessandra",
""
],
[
"Broda",
"Krysia",
""
]
] |
2003.00806 | Jalal Etesami | Jalal Etesami and Philipp Geiger | Causal Transfer for Imitation Learning and Decision Making under
Sensor-shift | It appears in AAAI-2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning from demonstrations (LfD) is an efficient paradigm to train AI
agents. But major issues arise when there are differences between (a) the
demonstrator's own sensory input, (b) our sensors that observe the demonstrator
and (c) the sensory input of the agent we train. In this paper, we propose a
causal model-based framework for transfer learning under such "sensor-shifts",
for two common LfD tasks: (1) inferring the effect of the demonstrator's
actions and (2) imitation learning. First we rigorously analyze, on the
population-level, to what extent the relevant underlying mechanisms (the action
effects and the demonstrator policy) can be identified and transferred from the
available observations together with prior knowledge of sensor characteristics.
And we devise an algorithm to infer these mechanisms. Then we introduce several
proxy methods which are easier to calculate, estimate from finite data and
interpret than the exact solutions, alongside theoretical bounds on their
closeness to the exact ones. We validate our two main methods on simulated and
semi-real world data.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 12:37:23 GMT"
}
] | 1,583,193,600,000 | [
[
"Etesami",
"Jalal",
""
],
[
"Geiger",
"Philipp",
""
]
] |
2003.00814 | Bin Guo | Hao Wang, Bin Guo, Wei Wu, Zhiwen Yu | Towards information-rich, logical text generation with
knowledge-enhanced neural models | 7 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text generation systems have made massive, promising progress, driven by
deep learning techniques, and have been widely applied in our lives. However,
existing end-to-end neural models suffer from the problem of tending to
generate uninformative and generic text because they cannot ground input
context with background knowledge. In order to solve this problem, many
researchers begin to consider combining external knowledge in text generation
systems, namely knowledge-enhanced text generation. The challenges of knowledge
enhanced text generation include how to select the appropriate knowledge from
large-scale knowledge bases, how to read and understand extracted knowledge,
and how to integrate knowledge into the generation process. This survey gives a
comprehensive review of knowledge-enhanced text generation systems, summarizes
research progress to solving these challenges and proposes some open issues and
research directions.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 12:41:02 GMT"
}
] | 1,583,193,600,000 | [
[
"Wang",
"Hao",
""
],
[
"Guo",
"Bin",
""
],
[
"Wu",
"Wei",
""
],
[
"Yu",
"Zhiwen",
""
]
] |
2003.01008 | Eden Abadi Ea | Eden Abadi, Ronen I. Brafman | Learning and Solving Regular Decision Processes | 7 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Regular Decision Processes (RDPs) are a recently introduced model that
extends MDPs with non-Markovian dynamics and rewards. The non-Markovian
behavior is restricted to depend on regular properties of the history. These
can be specified using regular expressions or formulas in linear dynamic logic
over finite traces. Fully specified RDPs can be solved by compiling them into
an appropriate MDP. Learning RDPs from data is a challenging problem that has
yet to be addressed, on which we focus in this paper. Our approach rests on a
new representation for RDPs using Mealy Machines that emit a distribution and
an expected reward for each state-action pair. Building on this representation,
we combine automata learning techniques with history clustering to learn such a
Mealy machine and solve it by adapting MCTS to it. We empirically evaluate this
approach, demonstrating its feasibility.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 16:36:16 GMT"
}
] | 1,583,193,600,000 | [
[
"Abadi",
"Eden",
""
],
[
"Brafman",
"Ronen I.",
""
]
] |
2003.01207 | Michael Wybrow | Ann E. Nicholson, Kevin B. Korb, Erik P. Nyberg, Michael Wybrow,
Ingrid Zukerman, Steven Mascaro, Shreshth Thakur, Abraham Oshni Alvandi, Jeff
Riley, Ross Pearson, Shane Morris, Matthieu Herrmann, A.K.M. Azad, Fergus
Bolger, Ulrike Hahn, and David Lagnado | BARD: A structured technique for group elicitation of Bayesian networks
to support analytic reasoning | null | null | 10.1111/risa.13759 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many complex, real-world situations, problem solving and decision making
require effective reasoning about causation and uncertainty. However, human
reasoning in these cases is prone to confusion and error. Bayesian networks
(BNs) are an artificial intelligence technology that models uncertain
situations, supporting probabilistic and causal reasoning and decision making.
However, to date, BN methodologies and software require significant upfront
training, do not provide much guidance on the model building process, and do
not support collaboratively building BNs. BARD (Bayesian ARgumentation via
Delphi) is both a methodology and an expert system that utilises (1) BNs as the
underlying structured representations for better argument analysis, (2) a
multi-user web-based software platform and Delphi-style social processes to
assist with collaboration, and (3) short, high-quality e-courses on demand, a
highly structured process to guide BN construction, and a variety of helpful
tools to assist in building and reasoning with BNs, including an automated
explanation tool to assist effective report writing. The result is an
end-to-end online platform, with associated online training, for groups without
prior BN expertise to understand and analyse a problem, build a model of its
underlying probabilistic causal structure, validate and reason with the causal
model, and use it to produce a written analytic report. Initial experimental
results demonstrate that BARD aids in problem solving, reasoning and
collaboration.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2020 21:55:35 GMT"
}
] | 1,628,121,600,000 | [
[
"Nicholson",
"Ann E.",
""
],
[
"Korb",
"Kevin B.",
""
],
[
"Nyberg",
"Erik P.",
""
],
[
"Wybrow",
"Michael",
""
],
[
"Zukerman",
"Ingrid",
""
],
[
"Mascaro",
"Steven",
""
],
[
"Thakur",
"Shreshth",
""
],
[
"Alvandi",
"Abraham Oshni",
""
],
[
"Riley",
"Jeff",
""
],
[
"Pearson",
"Ross",
""
],
[
"Morris",
"Shane",
""
],
[
"Herrmann",
"Matthieu",
""
],
[
"Azad",
"A. K. M.",
""
],
[
"Bolger",
"Fergus",
""
],
[
"Hahn",
"Ulrike",
""
],
[
"Lagnado",
"David",
""
]
] |
2003.02979 | Hengyuan Hu | Hengyuan Hu, Adam Lerer, Alex Peysakhovich, Jakob Foerster | "Other-Play" for Zero-Shot Coordination | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of zero-shot coordination - constructing AI agents
that can coordinate with novel partners they have not seen before (e.g.
humans). Standard Multi-Agent Reinforcement Learning (MARL) methods typically
focus on the self-play (SP) setting where agents construct strategies by
playing the game with themselves repeatedly. Unfortunately, applying SP naively
to the zero-shot coordination problem can produce agents that establish highly
specialized conventions that do not carry over to novel partners they have not
been trained with. We introduce a novel learning algorithm called other-play
(OP), that enhances self-play by looking for more robust strategies, exploiting
the presence of known symmetries in the underlying problem. We characterize OP
theoretically as well as experimentally. We study the cooperative card game
Hanabi and show that OP agents achieve higher scores when paired with
independently trained agents. In preliminary results we also show that our OP
agents obtain higher average scores when paired with human players, compared
to state-of-the-art SP agents.
| [
{
"version": "v1",
"created": "Fri, 6 Mar 2020 00:39:37 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Mar 2020 17:58:40 GMT"
},
{
"version": "v3",
"created": "Wed, 12 May 2021 05:22:20 GMT"
}
] | 1,620,864,000,000 | [
[
"Hu",
"Hengyuan",
""
],
[
"Lerer",
"Adam",
""
],
[
"Peysakhovich",
"Alex",
""
],
[
"Foerster",
"Jakob",
""
]
] |
2003.03410 | Jakub Kowalski | Jakub Kowalski, Marek Szyku{\l}a | Experimental Studies in General Game Playing: An Experience Report | null | The AAAI 2020 Workshop on Reproducible AI - RAI2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe nearly fifteen years of General Game Playing experimental
research history in the context of reproducibility and fairness of comparisons
between various GGP agents and systems designed to play games described by
different formalisms. We think our survey may provide an interesting
perspective on how chaotic methods were allowed when nothing better was
possible. Finally, from our experience-based view, we would like to propose a
few recommendations of how such a specific, heterogeneous branch of research
should be handled appropriately in the future. The goal of this note is to
point out common difficulties and problems in the experimental research in the
area. We hope that our recommendations will help in avoiding them in future
works and allow more fair and reproducible comparisons.
| [
{
"version": "v1",
"created": "Fri, 6 Mar 2020 19:53:28 GMT"
}
] | 1,583,798,400,000 | [
[
"Kowalski",
"Jakub",
""
],
[
"Szykuła",
"Marek",
""
]
] |
2003.04369 | Kumar Sankar Ray | Kumar Sankar Ray, Sandip Paul, Diganta Saha | Belief Base Revision for Further Improvement of Unified Answer Set
Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A belief base revision is developed. The belief base is represented using
Unified Answer Set Programs, which are capable of representing imprecise and
uncertain information and performing nonmonotonic reasoning with them. The base
revision operator is developed using the Removed Set Revision strategy. The
operator is characterized with respect to the postulates that a base revision
operator satisfies.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2020 08:31:01 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Nov 2020 11:05:36 GMT"
}
] | 1,606,176,000,000 | [
[
"Ray",
"Kumar Sankar",
""
],
[
"Paul",
"Sandip",
""
],
[
"Saha",
"Diganta",
""
]
] |
2003.04445 | Michael Painter | Michael Painter, Bruno Lacerda and Nick Hawes | Convex Hull Monte-Carlo Tree Search | Camera-ready version of paper accepted to ICAPS 2020, along with
relevant appendices | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work investigates Monte-Carlo planning for agents in stochastic
environments, with multiple objectives. We propose the Convex Hull Monte-Carlo
Tree-Search (CHMCTS) framework, which builds upon Trial Based Heuristic Tree
Search and Convex Hull Value Iteration (CHVI), as a solution to multi-objective
planning in large environments. Moreover, we consider how to pose the problem
of approximating multiobjective planning solutions as a contextual multi-armed
bandits problem, giving a principled motivation for how to select actions from
the view of contextual regret. This leads us to the use of Contextual Zooming
for action selection, yielding Zooming CHMCTS. We evaluate our algorithm using
the Generalised Deep Sea Treasure environment, demonstrating that Zooming
CHMCTS can achieve a sublinear contextual regret and scales better than CHVI on
a given computational budget.
| [
{
"version": "v1",
"created": "Mon, 9 Mar 2020 22:52:59 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Mar 2020 11:01:03 GMT"
}
] | 1,585,008,000,000 | [
[
"Painter",
"Michael",
""
],
[
"Lacerda",
"Bruno",
""
],
[
"Hawes",
"Nick",
""
]
] |
2003.04770 | Najla AL-Saati | Najla Akram AL-Saati, Marrwa Abd-AlKareem Alabajee | A Comparative Study on Parameter Estimation in Software Reliability
Modeling using Swarm Intelligence | 7 pages | International Journal of Recent Research and Review, Vol. IX,
Issue 4, December 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work focuses on a comparison between the performances of two well-known
Swarm algorithms: Cuckoo Search (CS) and Firefly Algorithm (FA), in estimating
the parameters of Software Reliability Growth Models. This study is further
reinforced using Particle Swarm Optimization (PSO) and Ant Colony Optimization
(ACO). All algorithms are evaluated according to real software failure data,
the tests are performed and the obtained results are compared to show the
performance of each of the used algorithms. Furthermore, CS and FA are also
compared with each other on bases of execution time and iteration number.
Experimental results show that CS is more efficient in estimating the
parameters of SRGMs, and it has outperformed FA in addition to PSO and ACO for
the selected data sets and employed models.
| [
{
"version": "v1",
"created": "Sun, 8 Mar 2020 16:35:42 GMT"
}
] | 1,583,884,800,000 | [
[
"AL-Saati",
"Najla Akram",
""
],
[
"Alabajee",
"Marrwa Abd-AlKareem",
""
]
] |
2003.05104 | Abeer M.Mahmoud | Ibrahim M. Ahmed, Abeer M. Mahmoud | Development of an Expert System for Diabetic Type-2 Diet | null | International Journal of Computer Applications, 2014, 107(1) | 10.5120/18714-9932 | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | A successful intelligent control of patient food for treatment purposes must
combine the patient's preferred food list and the doctor's efficient treatment
food list. Actually, many rural communities in Sudan have extremely limited access
to diabetic diet centers. People travel long distances to clinics or medical
facilities, and there is a shortage of medical experts in most of these
facilities. This results in slow service, and patients end up waiting long
hours without receiving any attention. Hence diabetic diet expert systems can
play a significant role in such cases where medical experts are not readily
available. This paper presents the design and implementation of an intelligent
medical expert system for diabetes diet that is intended to be used in Sudan.
The development of the proposed expert system went through a number of stages
such as problem and need identification, requirements analysis, knowledge acquisition,
formalization, design and implementation. Visual Prolog was used for designing
the graphical user interface and the implementation of the system. The proposed
expert system is a promising helpful tool that reduces the workload for
physicians and provides diabetics with simple and valuable assistance.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2020 09:34:44 GMT"
}
] | 1,583,971,200,000 | [
[
"Ahmed",
"Ibrahim M.",
""
],
[
"Mahmoud",
"Abeer M.",
""
]
] |
2003.05196 | Nicolas Riesterer | Nicolas Riesterer, Daniel Brand, Marco Ragni | Uncovering the Data-Related Limits of Human Reasoning Research: An
Analysis based on Recommender Systems | 6 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding the fundamentals of human reasoning is central to the
development of any system built to closely interact with humans. Cognitive
science pursues the goal of modeling human-like intelligence from a
theory-driven perspective with a strong focus on explainability. Syllogistic
reasoning as one of the core domains of human reasoning research has seen a
surge of computational models being developed over the last years. However,
recent analyses of models' predictive performances revealed a stagnation in
improvement. We believe that most of the problems encountered in cognitive
science are not due to the specific models that have been developed but can be
traced back to the peculiarities of behavioral data instead.
Therefore, we investigate potential data-related reasons for the problems in
human reasoning research by comparing model performances on human and
artificially generated datasets. In particular, we apply collaborative
filtering recommenders to investigate the adversarial effects of
inconsistencies and noise in data and illustrate the potential for data-driven
methods in a field of research predominantly concerned with gaining high-level
theoretical insight into a domain.
Our work (i) provides insight into the levels of noise to be expected from
human responses in reasoning data, (ii) uncovers evidence for an upper-bound of
performance that is close to being reached urging for an extension of the
modeling task, and (iii) introduces the tools and presents initial results to
pioneer a new paradigm for investigating and modeling reasoning focusing on
predicting responses for individual human reasoners.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2020 10:12:35 GMT"
}
] | 1,583,971,200,000 | [
[
"Riesterer",
"Nicolas",
""
],
[
"Brand",
"Daniel",
""
],
[
"Ragni",
"Marco",
""
]
] |
2003.05320 | Kieran Greer Dr | Kieran Greer | How the Brain might use Division | null | WSEAS Transactions on Computer Research, ISSN / E-ISSN: 1991-8755
/ 2415-1521, Volume 8, 2020, Art. #16, pp. 126-137 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most fundamental questions in Biology or Artificial Intelligence
is how the human brain performs mathematical functions. How does a neural
architecture that may organise itself mostly through statistics know what to
do? One possibility is to extract the problem to something more abstract. This
becomes clear when thinking about how the brain handles large numbers, for
example to the power of something, when simply summing to an answer is not
feasible. In this paper, the author suggests that the maths question can be
answered more easily if the problem is changed into one of symbol manipulation
and not just number counting. If symbols can be compared and manipulated, maybe
without understanding completely what they are, then the mathematical
operations become relative and some of them might even be rote learned. The
proposed system may also be suggested as an alternative to the traditional
computer binary system. Any of the actual maths still breaks down into binary
operations, while a more symbolic level above that can manipulate the numbers
and reduce the problem size, thus making the binary operations simpler. An
interesting result of looking at this is the possibility of a new fractal
equation resulting from division, that can be used as a measure of good fit and
would help the brain decide how to solve something through self-replacement and
a comparison with this good fit.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2020 14:12:45 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Mar 2020 15:08:19 GMT"
}
] | 1,600,646,400,000 | [
[
"Greer",
"Kieran",
""
]
] |
2003.05370 | Ernesto Jimenez-Ruiz | Ernesto Jim\'enez-Ruiz, Asan Agibetov, Jiaoyan Chen, Matthias Samwald,
Valerie Cross | Dividing the Ontology Alignment Task with Semantic Embeddings and
Logic-based Modules | Accepted to the 24th European Conference on Artificial Intelligence
(ECAI 2020). arXiv admin note: text overlap with arXiv:1805.12402 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large ontologies still pose serious challenges to state-of-the-art ontology
alignment systems. In this paper we present an approach that combines a neural
embedding model and logic-based modules to accurately divide an input ontology
matching task into smaller and more tractable matching (sub)tasks. We have
conducted a comprehensive evaluation using the datasets of the Ontology
Alignment Evaluation Initiative. The results are encouraging and suggest that
the proposed method is adequate in practice and can be integrated within the
workflow of systems unable to cope with very large ontologies.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2020 14:44:12 GMT"
}
] | 1,583,971,200,000 | [
[
"Jiménez-Ruiz",
"Ernesto",
""
],
[
"Agibetov",
"Asan",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Samwald",
"Matthias",
""
],
[
"Cross",
"Valerie",
""
]
] |
2003.05861 | Pablo Barros | Pablo Barros, Anne C. Bloem, Inge M. Hootsmans, Lena M. Opheij, Romain
H.A. Toebosch, Emilia Barakova and Alessandra Sciutti | The Chef's Hat Simulation Environment for Reinforcement-Learning-Based
Agents | Submitted to IROS2020 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Achieving social interactions within Human-Robot Interaction (HRI)
environments is a very challenging task. Most of the current research focuses
on Wizard-of-Oz approaches, which neglect the recent development of intelligent
robots. On the other hand, real-world scenarios usually do not provide the
necessary control and reproducibility which are needed for learning algorithms.
In this paper, we propose a virtual simulation environment that implements the
Chef's Hat card game, designed to be used in HRI scenarios, to provide a
controllable and reproducible scenario for reinforcement-learning algorithms.
| [
{
"version": "v1",
"created": "Thu, 12 Mar 2020 15:52:49 GMT"
}
] | 1,584,057,600,000 | [
[
"Barros",
"Pablo",
""
],
[
"Bloem",
"Anne C.",
""
],
[
"Hootsmans",
"Inge M.",
""
],
[
"Opheij",
"Lena M.",
""
],
[
"Toebosch",
"Romain H. A.",
""
],
[
"Barakova",
"Emilia",
""
],
[
"Sciutti",
"Alessandra",
""
]
] |
2003.06347 | Jennifer Renoux | Jennifer Renoux, Uwe K\"ockemann, Amy Loutfi | Online Guest Detection in a Smart Home using Pervasive Sensors and
Probabilistic Reasoning | null | European Conference on Ambient Intelligence (pp. 74-89). Springer,
Cham, 2018 | 10.1007/978-3-030-03062-9_6 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smart home environments equipped with distributed sensor networks are capable
of helping people by providing services related to health, emergency detection
or daily routine management. A backbone to these systems relies often on the
system's ability to track and detect activities performed by the users in their
home. Despite the continuous progress in the area of activity recognition in
smart homes, many systems make a strong underlying assumption that the number
of occupants in the home at any given moment of time is always known.
Estimating the number of persons in a Smart Home at each time step remains a
challenge nowadays. Indeed, unlike most (crowd) counting solutions, which are
based on computer vision techniques, the sensors considered in a Smart Home are
often very simple and do not offer individually a good overview of the
situation. The data gathered needs therefore to be fused in order to infer
useful information. This paper aims at addressing this challenge and presents a
probabilistic approach able to estimate the number of persons in the
environment at each time step. This approach works in two steps: first, an
estimate of the number of persons present in the environment is done using a
Constraint Satisfaction Problem solver, based on the topology of the sensor
network and the sensor activation pattern at this time point. Then, a Hidden
Markov Model refines this estimate by considering the uncertainty related to
the sensors. Using both simulated and real data, our method has been tested and
validated on two smart homes of different sizes and configuration and
demonstrates the ability to accurately estimate the number of inhabitants.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2020 15:41:15 GMT"
}
] | 1,584,316,800,000 | [
[
"Renoux",
"Jennifer",
""
],
[
"Köckemann",
"Uwe",
""
],
[
"Loutfi",
"Amy",
""
]
] |
2003.06551 | Hadi Mansourifar | Hadi Mansourifar, Lin Chen, Weidong Shi | Hybrid Cryptocurrency Pump and Dump Detection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasingly growing Cryptocurrency markets have become a hive for scammers
to run pump and dump schemes, which are considered an anomalous activity in
exchange markets. Anomaly detection in time series is challenging since
existing methods are not sufficient to detect the anomalies in all contexts. In
this paper, we propose a novel hybrid pump and dump detection method based on
distance and density metrics. First, we propose a novel automatic threshold
setting method for distance-based anomaly detection. Second, we propose a novel
metric called density score for density-based anomaly detection. Finally, we
exploit the combination of density and distance metrics successfully as a
hybrid approach. Our experiments show that, the proposed hybrid approach is
reliable to detect the majority of alleged P & D activities in top ranked
exchange pairs by outperforming both density-based and distance-based methods.
| [
{
"version": "v1",
"created": "Sat, 14 Mar 2020 04:38:01 GMT"
}
] | 1,584,403,200,000 | [
[
"Mansourifar",
"Hadi",
""
],
[
"Chen",
"Lin",
""
],
[
"Shi",
"Weidong",
""
]
] |
2003.06649 | Nadjib Lazaar Dr | Christian Bessiere, Clement Carbonnel, Anton Dries, Emmanuel Hebrard,
George Katsirelos, Nadjib Lazaar, Nina Narodytska, Claude-Guy Quimper, Kostas
Stergiou, Dimosthenis C. Tsouros, Toby Walsh | Partial Queries for Constraint Acquisition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning constraint networks is known to require a number of membership
queries exponential in the number of variables. In this paper, we learn
constraint networks by asking the user partial queries. That is, we ask the
user to classify assignments to subsets of the variables as positive or
negative. We provide an algorithm, called QUACQ, that, given a negative
example, focuses onto a constraint of the target network in a number of queries
logarithmic in the size of the example. The whole constraint network can then
be learned with a polynomial number of partial queries. We give information
theoretic lower bounds for learning some simple classes of constraint networks
and show that our generic algorithm is optimal in some cases.
| [
{
"version": "v1",
"created": "Sat, 14 Mar 2020 14:43:45 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Oct 2021 09:41:15 GMT"
}
] | 1,634,083,200,000 | [
[
"Bessiere",
"Christian",
""
],
[
"Carbonnel",
"Clement",
""
],
[
"Dries",
"Anton",
""
],
[
"Hebrard",
"Emmanuel",
""
],
[
"Katsirelos",
"George",
""
],
[
"Lazaar",
"Nadjib",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Quimper",
"Claude-Guy",
""
],
[
"Stergiou",
"Kostas",
""
],
[
"Tsouros",
"Dimosthenis C.",
""
],
[
"Walsh",
"Toby",
""
]
] |
2003.07813 | Elif Surer | Sinan Ariyurek, Aysu Betin-Can, Elif Surer | Enhancing the Monte Carlo Tree Search Algorithm for Video Game Testing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the effects of several Monte Carlo Tree Search (MCTS)
modifications for video game testing. Although MCTS modifications are highly
studied in game playing, their impact on finding bugs remains unexplored. We focused on
bug finding in our previous study where we introduced synthetic and human-like
test goals and we used these test goals in Sarsa and MCTS agents to find bugs.
In this study, we extend the MCTS agent with several modifications for game
testing purposes. Furthermore, we present a novel tree reuse strategy. We
experiment with these modifications by testing them on three testbed games,
four levels each, that contain 45 bugs in total. We use the General Video Game
Artificial Intelligence (GVG-AI) framework to create the testbed games and
collect 427 human tester trajectories using the GVG-AI framework. We analyze
the proposed modifications in three parts: we evaluate their effects on bug
finding performances of agents, we measure their success under two different
computational budgets, and we assess their effects on human-likeness of the
human-like agent. Our results show that MCTS modifications improve the bug
finding performance of the agents.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2020 16:52:53 GMT"
}
] | 1,584,489,600,000 | [
[
"Ariyurek",
"Sinan",
""
],
[
"Betin-Can",
"Aysu",
""
],
[
"Surer",
"Elif",
""
]
] |
2003.08316 | Giuseppe Marra | Luc De Raedt, Sebastijan Duman\v{c}i\'c, Robin Manhaeve, and Giuseppe
Marra | From Statistical Relational to Neuro-Symbolic Artificial Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuro-symbolic and statistical relational artificial intelligence both
integrate frameworks for learning with logical reasoning. This survey
identifies several parallels across seven different dimensions between these
two fields. These cannot only be used to characterize and position
neuro-symbolic artificial intelligence approaches but also to identify a number
of directions for further research.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2020 16:15:46 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Mar 2020 16:03:51 GMT"
}
] | 1,585,094,400,000 | [
[
"De Raedt",
"Luc",
""
],
[
"Dumančić",
"Sebastijan",
""
],
[
"Manhaeve",
"Robin",
""
],
[
"Marra",
"Giuseppe",
""
]
] |
2003.08445 | Azalia Mirhoseini | Anna Goldie and Azalia Mirhoseini | Placement Optimization with Deep Reinforcement Learning | International Symposium on Physical Design (ISPD), 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Placement Optimization is an important problem in systems and chip design,
which consists of mapping the nodes of a graph onto a limited set of resources
to optimize for an objective, subject to constraints. In this paper, we start
by motivating reinforcement learning as a solution to the placement problem. We
then give an overview of what deep reinforcement learning is. We next formulate
the placement problem as a reinforcement learning problem and show how this
problem can be solved with policy gradient optimization. Finally, we describe
lessons we have learned from training deep reinforcement learning policies
across a variety of placement optimization problems.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2020 19:20:37 GMT"
}
] | 1,584,662,400,000 | [
[
"Goldie",
"Anna",
""
],
[
"Mirhoseini",
"Azalia",
""
]
] |
2003.08598 | Philipp Wanko | Dirk Abels, Julian Jordi, Max Ostrowski, Torsten Schaub, Ambra
Toletti, and Philipp Wanko | Train Scheduling with Hybrid Answer Set Programming | Under consideration in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 21 (2021) 317-347 | 10.1017/S1471068420000046 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a solution to real-world train scheduling problems, involving
routing, scheduling, and optimization, based on Answer Set Programming (ASP).
To this end, we pursue a hybrid approach that extends ASP with difference
constraints to account for a fine-grained timing. More precisely, we
exemplarily show how the hybrid ASP system clingo[DL] can be used to tackle
demanding planning-and-scheduling problems. In particular, we investigate how
to boost performance by combining distinct ASP solving techniques, such as
approximations and heuristics, with preprocessing and encoding techniques for
tackling large-scale, real-world train scheduling instances. Under
consideration in Theory and Practice of Logic Programming (TPLP)
| [
{
"version": "v1",
"created": "Thu, 19 Mar 2020 06:50:04 GMT"
}
] | 1,625,097,600,000 | [
[
"Abels",
"Dirk",
""
],
[
"Jordi",
"Julian",
""
],
[
"Ostrowski",
"Max",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Toletti",
"Ambra",
""
],
[
"Wanko",
"Philipp",
""
]
] |
2003.08727 | Aleksander Czechowski | Aleksander Czechowski, Frans A. Oliehoek | Decentralized MCTS via Learned Teammate Models | Sole copyright holder is IJCAI, all rights reserved. Published
version available online: https://doi.org/10.24963/ijcai.2020/12 | Proceedings of the Twenty-Ninth International Joint Conference on
Artificial Intelligence, pages 81--88, 2020 | 10.24963/ijcai.2020/12 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized online planning can be an attractive paradigm for cooperative
multi-agent systems, due to improved scalability and robustness. A key
difficulty of such approach lies in making accurate predictions about the
decisions of other agents. In this paper, we present a trainable online
decentralized planning algorithm based on decentralized Monte Carlo Tree
Search, combined with models of teammates learned from previous episodic runs.
By only allowing one agent to adapt its models at a time, under the assumption
of ideal policy approximation, successive iterations of our method are
guaranteed to improve joint policies, and eventually lead to convergence to a
Nash equilibrium. We test the efficiency of the algorithm by performing
experiments in several scenarios of the spatial task allocation environment
introduced in [Claes et al., 2015]. We show that deep learning and
convolutional neural networks can be employed to produce accurate policy
approximators which exploit the spatial features of the problem, and that the
proposed algorithm improves over the baseline planning performance for
particularly challenging domain configurations.
| [
{
"version": "v1",
"created": "Thu, 19 Mar 2020 13:10:20 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jul 2020 15:39:36 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Nov 2020 18:42:03 GMT"
}
] | 1,605,052,800,000 | [
[
"Czechowski",
"Aleksander",
""
],
[
"Oliehoek",
"Frans A.",
""
]
] |
2003.09529 | Thibault Duhamel | Thibault Duhamel, Mariane Maynard and Froduald Kabanza | Imagination-Augmented Deep Learning for Goal Recognition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Being able to infer the goal of people we observe, interact with, or read
stories about is one of the hallmarks of human intelligence. A prominent idea
in current goal-recognition research is to infer the likelihood of an agent's
goal from the estimations of the costs of plans to the different goals the
agent might have. Different approaches implement this idea by relying only on
handcrafted symbolic representations. Their application to real-world settings
is, however, quite limited, mainly because extracting rules for the factors
that influence goal-oriented behaviors remains a complicated task. In this
paper, we introduce a novel idea of using a symbolic planner to compute
plan-cost insights, which augment a deep neural network with an imagination
capability, leading to improved goal recognition accuracy in real and synthetic
domains compared to a symbolic recognizer or a deep-learning goal recognizer
alone.
| [
{
"version": "v1",
"created": "Fri, 20 Mar 2020 23:07:34 GMT"
}
] | 1,585,008,000,000 | [
[
"Duhamel",
"Thibault",
""
],
[
"Maynard",
"Mariane",
""
],
[
"Kabanza",
"Froduald",
""
]
] |
2003.09579 | Tai Vu | Tai Vu, Leon Tran | FlapAI Bird: Training an Agent to Play Flappy Bird Using Reinforcement
Learning Techniques | typos corrected, references added | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning is one of the most popular approaches for automated
game playing. This method allows an agent to estimate the expected utility of
its state in order to make optimal actions in an unknown environment. We seek
to apply reinforcement learning algorithms to the game Flappy Bird. We
implement SARSA and Q-Learning with some modifications such as
$\epsilon$-greedy policy, discretization and backward updates. We find that
SARSA and Q-Learning outperform the baseline, regularly achieving scores of
1400+, with the highest in-game score of 2069.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2020 05:27:36 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Apr 2020 09:03:35 GMT"
}
] | 1,586,390,400,000 | [
[
"Vu",
"Tai",
""
],
[
"Tran",
"Leon",
""
]
] |
2003.09661 | Xinyang Deng | Xinyang Deng | Basic concepts, definitions, and methods in D number theory | 28 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a generalization of Dempster-Shafer theory, D number theory (DNT) aims to
provide a framework to deal with uncertain information with non-exclusiveness
and incompleteness. Although there have been some advances on DNT in previous
studies, they lack systematicness, and many important issues have
not yet been solved. In this paper, several crucial aspects in constructing a
perfect and systematic framework of DNT are considered. At first the
non-exclusiveness in DNT is formally defined and discussed. Secondly, a method
to combine multiple D numbers is proposed by extending previous exclusive
conflict redistribution (ECR) rule. Thirdly, a new pair of belief and
plausibility measures for D numbers are defined and many desirable properties
are satisfied by the proposed measures. Fourthly, the combination of
information-incomplete D numbers is studied specially to show how to deal with
the incompleteness of information in DNT. In this paper, we mainly give the
relevant mathematical definitions, properties, and theorems; concrete examples
and applications will be considered in future studies.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2020 13:42:29 GMT"
}
] | 1,585,008,000,000 | [
[
"Deng",
"Xinyang",
""
]
] |
2003.09698 | Mario Alviano | Mario Alviano and Marco Manna | Large-scale Ontological Reasoning via Datalog | 15 pages, 2 tables, 1 figure, 2 algorithms, under review for the book
Studies on the Semantic Web Series | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning over OWL 2 is a very expensive task in general, and therefore the
W3C identified tractable profiles exhibiting good computational properties.
Ontological reasoning for many fragments of OWL 2 can be reduced to the
evaluation of Datalog queries. This paper surveys some of these compilations,
and in particular the one addressing queries over Horn-$\mathcal{SHIQ}$
knowledge bases and its implementation in DLV2 enhanced by a new version of the
Magic Sets algorithm.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2020 16:51:02 GMT"
}
] | 1,585,008,000,000 | [
[
"Alviano",
"Mario",
""
],
[
"Manna",
"Marco",
""
]
] |
2003.09746 | Shushman Choudhury | Shushman Choudhury, Nate Gruver, Mykel J. Kochenderfer | Adaptive Informative Path Planning with Multimodal Sensing | First two authors contributed equally; International Conference on
Automated Planning and Scheduling (ICAPS) 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adaptive Informative Path Planning (AIPP) problems model an agent tasked with
obtaining information subject to resource constraints in unknown, partially
observable environments. Existing work on AIPP has focused on representing
observations about the world as a result of agent movement. We formulate the
more general setting where the agent may choose between different sensors at
the cost of some energy, in addition to traversing the environment to gather
information. We call this problem AIPPMS (MS for Multimodal Sensing). AIPPMS
requires reasoning jointly about the effects of sensing and movement in terms
of both energy expended and information gained. We frame AIPPMS as a Partially
Observable Markov Decision Process (POMDP) and solve it with online planning.
Our approach is based on the Partially Observable Monte Carlo Planning
framework with modifications to ensure constraint feasibility and a heuristic
rollout policy tailored for AIPPMS. We evaluate our method on two domains: a
simulated search-and-rescue scenario and a challenging extension to the classic
RockSample problem. We find that our approach outperforms a classic AIPP
algorithm that is modified for AIPPMS, as well as online planning using a
random rollout policy.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2020 20:28:57 GMT"
}
] | 1,585,008,000,000 | [
[
"Choudhury",
"Shushman",
""
],
[
"Gruver",
"Nate",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |