id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2009.08616 | Yi Yu | Yi Yu, Abhishek Srivastava, Rajiv Ratn Shah | Conditional Hybrid GAN for Sequence Generation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional sequence generation aims to instruct the generation procedure by
conditioning the model with additional context information, which is a
self-supervised learning issue (a form of unsupervised learning with
supervision information from the data itself). Unfortunately, the current
state-of-the-art generative models have limitations in sequence generation with
multiple attributes. In this paper, we propose a novel conditional hybrid GAN
(C-Hybrid-GAN) to solve this issue. Discrete sequences with triplet attributes
are generated separately when conditioned on the same context. Most
importantly, a relational reasoning technique is exploited to model not only the
dependency inside each sequence of the attribute during the training of the
generator but also the consistency among the sequences of attributes during the
training of the discriminator. To avoid the non-differentiability problem in
GANs encountered during discrete data generation, we exploit the Gumbel-Softmax
technique to approximate the distribution of discrete-valued sequences. Through
evaluating the task of generating melody (associated with note, duration, and
rest) from lyrics, we demonstrate that the proposed C-Hybrid-GAN outperforms
the existing methods in context-conditioned discrete-valued sequence
generation.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 03:52:55 GMT"
}
] | 1,600,646,400,000 | [
[
"Yu",
"Yi",
""
],
[
"Srivastava",
"Abhishek",
""
],
[
"Shah",
"Rajiv Ratn",
""
]
] |
2009.08644 | Zihan Ding | Zihan Ding, Tianyang Yu, Yanhua Huang, Hongming Zhang, Guo Li,
Quancheng Guo, Luo Mai and Hao Dong | Efficient Reinforcement Learning Development with RLzoo | Accepted by ACM Multimedia Open Source Software Competition | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many researchers and developers are exploring the adoption of Deep Reinforcement
Learning (DRL) techniques in their applications. However, they often find such
an adoption challenging. Existing DRL libraries provide poor support for
prototyping DRL agents (i.e., models), customising the agents, and comparing
the performance of DRL agents. As a result, the developers often report low
efficiency in developing DRL agents. In this paper, we introduce RLzoo, a new
DRL library that aims to make the development of DRL agents efficient. RLzoo
provides developers with (i) high-level yet flexible APIs for prototyping DRL
agents, and further customising the agents for best performance, (ii) a model
zoo where users can import a wide range of DRL agents and easily compare their
performance, and (iii) an algorithm that can automatically construct DRL agents
with custom components (which are critical to improving an agent's performance in
custom applications). Evaluation results show that RLzoo can effectively reduce
the development cost of DRL agents, while achieving performance comparable to
that of existing DRL libraries.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 06:18:49 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Aug 2021 01:59:29 GMT"
}
] | 1,629,417,600,000 | [
[
"Ding",
"Zihan",
""
],
[
"Yu",
"Tianyang",
""
],
[
"Huang",
"Yanhua",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Li",
"Guo",
""
],
[
"Guo",
"Quancheng",
""
],
[
"Mai",
"Luo",
""
],
[
"Dong",
"Hao",
""
]
] |
2009.08656 | Zhaochong An | Zhaochong An, Bozhou Chen, Houde Quan, Qihui Lin, Hongzhi Wang | EM-RBR: a reinforced framework for knowledge graph completion from
reasoning perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph completion aims to predict new links between entities in a
knowledge graph (KG). Most mainstream embedding methods focus on the fact
triplets contained in the given KG, ignoring the rich background
information implicitly provided by logic rules derived from the knowledge base. To
solve this problem, in this paper, we propose a general framework, named
EM-RBR(embedding and rule-based reasoning), capable of combining the advantages
of reasoning based on rules and the state-of-the-art models of embedding.
EM-RBR aims to utilize relational background knowledge contained in rules to
conduct multi-relation reasoning link prediction rather than superficial vector
triangle linkage in embedding models. In this way, we can explore the relation
between two entities in a deeper context to achieve higher accuracy. In
experiments, we demonstrate that EM-RBR achieves better performance compared
with previous models on FB15k, WN18 and our new dataset FB15k-R, especially on
the new dataset, where our model outperforms the state of the art by a larger margin.
We make the implementation of EM-RBR available at
https://github.com/1173710224/link-prediction-with-rule-based-reasoning.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 07:02:41 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Oct 2020 17:36:25 GMT"
}
] | 1,602,547,200,000 | [
[
"An",
"Zhaochong",
""
],
[
"Chen",
"Bozhou",
""
],
[
"Quan",
"Houde",
""
],
[
"Lin",
"Qihui",
""
],
[
"Wang",
"Hongzhi",
""
]
] |
2009.08696 | Raul Montoliu | Alejandro Estaben, C\'esar D\'iaz, Raul Montoliu, Diego
P\'erez-Liebana | TotalBotWar: A New Pseudo Real-time Multi-action Game Challenge and
Competition for AI | 6 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents TotalBotWar, a new pseudo real-time multi-action
challenge for game AI, as well as some initial experiments that benchmark the
framework with different agents. The game is based on the real-time battles of
the popular TotalWar game series, where players manage an army to defeat the
opponent's. In the proposed game, a turn consists of a set of orders to
control the units. The number and specific orders that can be performed in a
turn vary during the progression of the game. One interesting feature of the
game is that if a particular unit does not receive an order in a turn, it will
continue performing the action specified in a previous turn. The turn-wise
branching factor becomes overwhelming for traditional algorithms and the
partial observability of the game state makes the proposed game an interesting
platform to test modern AI algorithms.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 09:13:56 GMT"
}
] | 1,600,646,400,000 | [
[
"Estaben",
"Alejandro",
""
],
[
"Díaz",
"César",
""
],
[
"Montoliu",
"Raul",
""
],
[
"Pérez-Liebana",
"Diego",
""
]
] |
2009.08770 | Bishwamittra Ghosh | Daniel Neider and Bishwamittra Ghosh | Probably Approximately Correct Explanations of Machine Learning Models
via Syntax-Guided Synthesis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach to understanding the decision making of complex
machine learning models (e.g., deep neural networks) using a combination of
probably approximately correct learning (PAC) and a logic inference methodology
called syntax-guided synthesis (SyGuS). We prove that our framework produces
explanations that with high probability make only a few errors, and show
empirically that it is effective in generating small, human-interpretable
explanations.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 12:10:49 GMT"
}
] | 1,600,646,400,000 | [
[
"Neider",
"Daniel",
""
],
[
"Ghosh",
"Bishwamittra",
""
]
] |
2009.08776 | Mariela Morveli-Espinoza | Mariela Morveli-Espinoza, Juan Carlos Nieves, Ayslan Trevizan
Possebom, and Cesar Augusto Tacla | Dealing with Incompatibilities among Procedural Goals under Uncertainty | 14 pages, 4 figures, accepted in the Iberoamerican Journal of
Artificial Intelligence. arXiv admin note: substantial text overlap with
arXiv:2009.05186 | null | 10.4114/intartif.vol22iss64pp47-62 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | By considering rational agents, we focus on the problem of selecting goals
out of a set of incompatible ones. We consider three forms of incompatibility
introduced by Castelfranchi and Paglieri, namely the terminal, the instrumental
(or based on resources), and the superfluity. We represent the agent's plans by
means of structured arguments whose premises are pervaded with uncertainty. We
measure the strength of these arguments in order to determine the set of
compatible goals. We propose two novel ways for calculating the strength of
these arguments, depending on the kind of incompatibility that exists between
them. The first is the logical strength value, denoted by a
three-dimensional vector calculated from a probabilistic interval
associated with each argument. The vector represents the precision of the
interval, its location, and the combination of precision and location.
This type of representation and treatment of the strength of a structured
argument has not been defined before in the state of the art. The second way
for calculating the strength of the argument is based on the cost of the plans
(regarding the necessary resources) and the preference of the goals associated
with the plans. Considering our novel approach for measuring the strength of
structured arguments, we propose a semantics for the selection of plans and
goals that is based on Dung's abstract argumentation theory. Finally, we make a
theoretical evaluation of our proposal.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2020 00:56:45 GMT"
}
] | 1,600,646,400,000 | [
[
"Morveli-Espinoza",
"Mariela",
""
],
[
"Nieves",
"Juan Carlos",
""
],
[
"Possebom",
"Ayslan Trevizan",
""
],
[
"Tacla",
"Cesar Augusto",
""
]
] |
2009.08922 | James Goodman | James Goodman, Sebastian Risi, Simon Lucas | AI and Wargaming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in Game AI has demonstrated that given enough data from human
gameplay, or experience gained via simulations, machines can rival or surpass
the most skilled human players in classic games such as Go, or commercial
computer games such as Starcraft. We review the current state-of-the-art
through the lens of wargaming, and ask firstly what features of wargames
distinguish them from the usual AI testbeds, and secondly which recent AI
advances are best suited to address these wargame-specific features.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 16:39:54 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Sep 2020 08:40:57 GMT"
}
] | 1,601,251,200,000 | [
[
"Goodman",
"James",
""
],
[
"Risi",
"Sebastian",
""
],
[
"Lucas",
"Simon",
""
]
] |
2009.09263 | Bin Wang | Bin Wang, Guangtao Wang, Jing Huang, Jiaxuan You, Jure Leskovec, C.-C.
Jay Kuo | Inductive Learning on Commonsense Knowledge Graph Completion | 8 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Commonsense knowledge graph (CKG) is a special type of knowledge graph (KG),
where entities are composed of free-form text. However, most existing CKG
completion methods focus on the setting where all the entities are presented at
training time. Although this setting is standard for conventional KG
completion, it has limitations for CKG completion. At test time, entities in
CKGs can be unseen because they may have unseen text/names and entities may be
disconnected from the training graph, since CKGs are generally very sparse.
Here, we propose to study the inductive learning setting for CKG completion
where unseen entities may be present at test time. We develop a novel learning
framework named InductivE. Different from previous approaches, InductivE
ensures the inductive learning capability by directly computing entity
embeddings from raw entity attributes/text. InductivE consists of a free-text
encoder, a graph encoder, and a KG completion decoder. Specifically, the
free-text encoder first extracts the textual representation of each entity
based on the pre-trained language model and word embedding. The graph encoder
is a gated relational graph convolutional neural network that learns from a
densified graph for more informative entity representation learning. We develop
a method that densifies CKGs by adding edges among semantically related entities,
providing more supportive information for unseen entities and leading to better
generalization of entity embeddings for unseen entities. Finally,
InductivE employs Conv-TransE as the CKG completion decoder. Experimental
results show that InductiveE significantly outperforms state-of-the-art
baselines in both standard and inductive settings on ATOMIC and ConceptNet
benchmarks. InductivE performs especially well on inductive scenarios where it
achieves above 48% improvement over present methods.
| [
{
"version": "v1",
"created": "Sat, 19 Sep 2020 16:10:26 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Feb 2021 19:48:13 GMT"
}
] | 1,613,692,800,000 | [
[
"Wang",
"Bin",
""
],
[
"Wang",
"Guangtao",
""
],
[
"Huang",
"Jing",
""
],
[
"You",
"Jiaxuan",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
2009.09355 | Shyni Thomas | Shyni Thomas and Dipti Deodhare and M.N. Murty | Multi Agent Path Finding with Awareness for Spatially Extended Agents | Submitted to Expert Systems with Application 26 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Path finding problems involve identification of a plan for conflict free
movement of agents over a common road network. Most approaches to this problem
handle the agents as point objects, wherein the size of the agent is
significantly smaller than the road on which it travels. In this paper, we
consider spatially extended agents which have a size comparable to the length
of the road on which they travel. An optimal multi agent path finding approach
for spatially-extended agents was proposed in the eXtended Conflict Based
Search (XCBS) algorithm. As XCBS resolves only a pair of conflicts at a time,
it results in deeper search trees in case of cascading or multiple (more than
two agent) conflicts at a given location. This issue is addressed in eXtended
Conflict Based Search with Awareness (XCBS-A) in which an agent uses awareness
of other agents' plans to make its own plan. In this paper, we explore XCBS-A
in greater detail, we theoretically prove its completeness and empirically
compare its performance with other algorithms in terms of variations in road
characteristics, agent characteristics and plan characteristics. We demonstrate
the distributive nature of the algorithm by evaluating its performance when
distributed over multiple machines. XCBS-A generates a huge search space
impacting its efficiency in terms of memory; to address this we propose an
approach for memory-efficiency and empirically demonstrate the performance of
the algorithm. The nature of XCBS-A is such that it may lead to suboptimal
solutions, hence the final contribution of this paper is an enhanced approach,
XCBS-Local Awareness (XCBS-LA), which we prove to be optimal and complete.
| [
{
"version": "v1",
"created": "Sun, 20 Sep 2020 05:40:04 GMT"
}
] | 1,600,732,800,000 | [
[
"Thomas",
"Shyni",
""
],
[
"Deodhare",
"Dipti",
""
],
[
"Murty",
"M. N.",
""
]
] |
2009.10152 | \"Ozg\"ur Akg\"un | Patrick Spracklen, Nguyen Dang, \"Ozg\"ur Akg\"un, Ian Miguel | Towards Portfolios of Streamlined Constraint Models: A Case Study with
the Balanced Academic Curriculum Problem | null | ModRef 2020 - The 19th workshop on Constraint Modelling and
Reformulation | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Augmenting a base constraint model with additional constraints can strengthen
the inferences made by a solver and therefore reduce search effort. We focus on
the automatic addition of streamliner constraints, derived from the types
present in an abstract Essence specification of a problem class of interest,
which trade completeness for potentially very significant reduction in search.
The refinement of streamlined Essence specifications into constraint models
suitable for input to constraint solvers gives rise to a large number of
modelling choices in addition to those required for the base Essence
specification. Previous automated streamlining approaches have been limited in
evaluating only a single default model for each streamlined specification. In
this paper we explore the effect of model selection in the context of
streamlined specifications. We propose a new best-first search method that
generates a portfolio of Pareto-optimal streamliner-model combinations by
evaluating, for each streamliner, a portfolio of models in order to explore the
variability in performance and identify the best model. Various forms of racing
are utilised to constrain the computational cost of training.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2020 19:48:02 GMT"
}
] | 1,600,819,200,000 | [
[
"Spracklen",
"Patrick",
""
],
[
"Dang",
"Nguyen",
""
],
[
"Akgün",
"Özgür",
""
],
[
"Miguel",
"Ian",
""
]
] |
2009.10156 | \"Ozg\"ur Akg\"un | \"Ozg\"ur Akg\"un, Nguyen Dang, Joan Espasa, Ian Miguel, Andr\'as Z.
Salamon, Christopher Stone | Exploring Instance Generation for Automated Planning | null | ModRef 2020 - The 19th workshop on Constraint Modelling and
Reformulation | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many of the core disciplines of artificial intelligence have sets of standard
benchmark problems well known and widely used by the community when developing
new algorithms. Constraint programming and automated planning are examples of
these areas, where the behaviour of a new algorithm is measured by how it
performs on these instances. Typically the efficiency of each solving method
varies not only between problems, but also between instances of the same
problem. Therefore, having a diverse set of instances is crucial to be able to
effectively evaluate a new solving method. Current methods for automatic
generation of instances for Constraint Programming problems start with a
declarative model and search for instances with some desired attributes, such
as hardness or size. We first explore the difficulties of adapting this
approach to generate instances starting from problem specifications written in
PDDL, the de facto standard language of the automated planning community. We
then propose a new approach where the whole planning problem description is
modelled using Essence, an abstract modelling language that allows expressing
high-level structures without committing to a particular low level
representation in PDDL.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2020 19:58:33 GMT"
}
] | 1,600,819,200,000 | [
[
"Akgün",
"Özgür",
""
],
[
"Dang",
"Nguyen",
""
],
[
"Espasa",
"Joan",
""
],
[
"Miguel",
"Ian",
""
],
[
"Salamon",
"András Z.",
""
],
[
"Stone",
"Christopher",
""
]
] |
2009.10224 | Luis A. Pineda | Luis A. Pineda | Entropy, Computing and Rationality | 43 pages, 4 figures, 44 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Making decisions freely presupposes that there is some indeterminacy in the
environment and in the decision-making engine. The former is reflected in the
behavioral changes due to communicating: few changes indicate rigid
environments; productive changes manifest a moderate indeterminacy; but a large
communicating effort with few productive changes characterizes a chaotic
environment. Hence, communicating, effective decision making and productive
behavioral changes are related. The entropy measures the indeterminacy of the
environment, and there is an entropy range in which communicating supports
effective decision making. This conjecture is referred to here as the
Potential Productivity of Decisions.
The computing engine that is causal to decision making should also have some
indeterminacy. However, computations performed by standard Turing Machines are
predetermined. To overcome this limitation an entropic mode of computing that
is called here Relational-Indeterminate is presented. Its implementation in a
table format has been used to model an associative memory. The present theory
and experiment suggest the Entropy Trade-off: There is an entropy range in
which computing is effective but if the entropy is too low computations are too
rigid and if it is too high computations are unfeasible. The entropy trade-off
of computing engines corresponds to the potential productivity of decisions of
the environment.
The theory is related to an Interaction-Oriented Cognitive Architecture.
Memory, perception, action and thought involve a level of indeterminacy, and
decision making may be free to that degree. The overall theory supports an
ecological view of rationality. The entropy of the brain has been measured in
neuroscience studies and the present theory supports that the brain is an
entropic machine. The paper is concluded with a number of predictions that may
be tested empirically.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2020 23:56:03 GMT"
}
] | 1,600,819,200,000 | [
[
"Pineda",
"Luis A.",
""
]
] |
2009.10236 | EPTCS | Alex Brik (Google Inc.) | Splitting a Hybrid ASP Program | In Proceedings ICLP 2020, arXiv:2009.09158 | EPTCS 325, 2020, pp. 21-34 | 10.4204/EPTCS.325.8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hybrid Answer Set Programming (Hybrid ASP) is an extension of Answer Set
Programming (ASP) that allows ASP-like rules to interact with outside sources.
The Splitting Set Theorem is an important and extensively used result for ASP.
The paper introduces the Splitting Set Theorem for Hybrid ASP, the equivalent
for Hybrid ASP of the classical result for ASP, and shows how it can be
applied to simplify computing answer sets for the Hybrid ASP programs most relevant
for practical applications.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2020 00:47:31 GMT"
}
] | 1,600,819,200,000 | [
[
"Brik",
"Alex",
"",
"Google Inc."
]
] |
2009.10253 | EPTCS | Alessandro Bertagnon (University of Ferrara) | Constraint Programming Algorithms for Route Planning Exploiting
Geometrical Information | In Proceedings ICLP 2020, arXiv:2009.09158 | EPTCS 325, 2020, pp. 286-295 | 10.4204/EPTCS.325.38 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Problems affecting the transport of people or goods are plentiful in industry
and commerce and they also appear to be at the origin of much more complex
problems. In recent years, the logistics and transport sector has kept growing,
supported by technological progress: to be competitive, companies are
resorting to innovative technologies aimed at efficiency and effectiveness.
This is why companies are increasingly using technologies such as Artificial
Intelligence (AI), Blockchain and Internet of Things (IoT). Artificial
intelligence, in particular, is often used to solve optimization problems in
order to provide users with the most efficient ways to exploit available
resources. In this work we present an overview of our current research
activities concerning the development of new algorithms, based on CLP
techniques, for route planning problems exploiting the geometric information
intrinsically present in many of them or in some of their variants. The
research so far has focused in particular on the Euclidean Traveling
Salesperson Problem (Euclidean TSP), with the aim of extending the results
obtained to other problems of the same category, such as the Euclidean
Vehicle Routing Problem (Euclidean VRP), in the future.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2020 00:51:45 GMT"
}
] | 1,600,819,200,000 | [
[
"Bertagnon",
"Alessandro",
"",
"University of Ferrara"
]
] |
2009.10256 | EPTCS | Zhun Yang | Extending Answer Set Programs with Neural Networks | In Proceedings ICLP 2020, arXiv:2009.09158 | EPTCS 325, 2020, pp. 313-322 | 10.4204/EPTCS.325.41 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of low-level perception with high-level reasoning is one of
the oldest problems in Artificial Intelligence. Recently, several proposals
were made to implement the reasoning process in complex neural network
architectures. While these works aim at extending neural networks with the
capability of reasoning, a natural question that we consider is: can we extend
answer set programs with neural networks to allow complex and high-level
reasoning on neural network outputs? As a preliminary result, we propose
NeurASP -- a simple extension of answer set programs by embracing neural
networks where neural network outputs are treated as probability distributions
over atomic facts in answer set programs. We show that NeurASP can not only
improve the perception accuracy of a pre-trained neural network, but also help
to train a neural network better by giving restrictions through logic rules.
However, training with NeurASP would take much more time than pure neural
network training due to the internal use of a symbolic reasoning engine. For
future work, we plan to investigate the potential ways to solve the scalability
issue of NeurASP. One potential way is to embed logic programs directly in
neural networks. On this route, we plan to first design a SAT solver using
neural networks, then extend such a solver to allow logic programs.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2020 00:52:30 GMT"
}
] | 1,600,819,200,000 | [
[
"Yang",
"Zhun",
""
]
] |
2009.10613 | Larry Muhlstein | Larry Muhlstein | The Relativity of Induction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lately there has been a lot of discussion about why deep learning algorithms
perform better than we would theoretically suspect. To get insight into this
question, it helps to improve our understanding of how learning works. We
explore the core problem of generalization and show that long-accepted Occam's
razor and parsimony principles are insufficient to ground learning. Instead, we
derive and demonstrate a set of relativistic principles that yield clearer
insight into the nature and dynamics of learning. We show that concepts of
simplicity are fundamentally contingent, that all learning operates relative to
an initial guess, and that generalization cannot be measured or strongly
inferred, but that it can be expected given enough observation. Using these
principles, we reconstruct our understanding in terms of distributed learning
systems whose components inherit beliefs and update them. We then apply this
perspective to elucidate the nature of some real world inductive processes
including deep learning.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2020 15:17:26 GMT"
}
] | 1,600,819,200,000 | [
[
"Muhlstein",
"Larry",
""
]
] |
2009.10968 | Sao Mai Nguyen | Alexandre Manoury (IMT Atlantique - INFO), Sao Mai Nguyen, C\'edric
Buche | Hierarchical Affordance Discovery using Intrinsic Motivation | 7th International Conference on Human-Agent Interaction (HAI '19),
Oct 2019, Kyoto, Japan | null | 10.1145/3349537.3351898 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To be capable of lifelong learning in a real-life environment, robots have to
tackle multiple challenges. Being able to relate physical properties they may
observe in their environment to possible interactions they may have is one of
them. This skill, named affordance learning, is strongly related to embodiment
and is mastered through each person's development: each individual learns
affordances differently through their own interactions with their surroundings.
Current methods for affordance learning usually use either fixed actions to
learn these affordances or focus on static setups involving a robotic arm to be
operated. In this article, we propose an algorithm using intrinsic motivation
to guide the learning of affordances for a mobile robot. This algorithm is
capable of autonomously discovering, learning and adapting interrelated affordances
without pre-programmed actions. Once learned, these affordances may be used by
the algorithm to plan sequences of actions in order to perform tasks of various
difficulties. We then present one experiment and analyse our system before
comparing it with other approaches from reinforcement learning and affordance
learning.
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2020 07:18:21 GMT"
}
] | 1,600,905,600,000 | [
[
"Manoury",
"Alexandre",
"",
"IMT Atlantique - INFO"
],
[
"Nguyen",
"Sao Mai",
""
],
[
"Buche",
"Cédric",
""
]
] |
2009.11111 | \"Ozg\"ur Akg\"un | G\"okberk Ko\c{c}ak, \"Ozg\"ur Akg\"un, Nguyen Dang, Ian Miguel | Efficient Incremental Modelling and Solving | null | ModRef 2020 - The 19th workshop on Constraint Modelling and
Reformulation | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In various scenarios, a single phase of modelling and solving is either not
sufficient or not feasible to solve the problem at hand. A standard approach to
solving AI planning problems, for example, is to incrementally extend the
planning horizon and solve the problem of trying to find a plan of a particular
length. Indeed, any optimization problem can be solved as a sequence of
decision problems in which the objective value is incrementally updated.
Another example is constraint dominance programming (CDP), in which search is
organized into a sequence of levels. The contribution of this work is to enable
a native interaction between SAT solvers and the automated modelling system
Savile Row to support efficient incremental modelling and solving. This allows
adding new decision variables, posting new constraints and removing existing
constraints (via assumptions) between incremental steps. Two additional
benefits of the native coupling of modelling and solving are the ability to
retain learned information between SAT solver calls and to enable SAT
assumptions, further improving flexibility and efficiency. Experiments on one
optimisation problem and five pattern mining tasks demonstrate that the native
interaction between the modelling system and SAT solver consistently improves
performance significantly.
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2020 12:40:23 GMT"
}
] | 1,600,905,600,000 | [
[
"Koçak",
"Gökberk",
""
],
[
"Akgün",
"Özgür",
""
],
[
"Dang",
"Nguyen",
""
],
[
"Miguel",
"Ian",
""
]
] |
2009.11142 | Patrick Rodler | Patrick Rodler and Erich Teppan | The Scheduling Job-Set Optimization Problem: A Model-Based Diagnosis
Approach | See also the online proceedings of the International Workshop on
Principles of Diagnosis (DX-2020):
http://www.dx-2020.org/papers/DX-2020_paper_18.pdf | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | A common issue for companies is that the volume of product orders may at
times exceed the production capacity. We formally introduce two novel problems
dealing with the question of which orders to discard or postpone in order to meet
certain (timeliness) goals, and try to approach them by means of model-based
diagnosis. In thorough analyses, we identify many similarities of the
introduced problems to diagnosis problems, but also reveal crucial
idiosyncrasies and outline ways to handle or leverage them. Finally, a
proof-of-concept evaluation on industrial-scale problem instances from a
well-known scheduling benchmark suite demonstrates that one of the two
formalized problems can be well attacked by out-of-the-box model-based
diagnosis tools.
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2020 13:38:36 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 12:36:06 GMT"
}
] | 1,659,657,600,000 | [
[
"Rodler",
"Patrick",
""
],
[
"Teppan",
"Erich",
""
]
] |
2009.11640 | Ra\"ida Ktari | Ra\"ida Ktari and Mohamed Ayman Boujelben | On the use of evidence theory in belief base revision | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper deals with belief base revision that is a form of belief change
consisting of the incorporation of new facts into an agent's beliefs
represented by a finite set of propositional formulas. With the aim of
guaranteeing more reliability and rationality for real applications while
performing revision, we propose the idea of credible belief base revision,
which leads us to define two new formula-based revision operators using the
suitable tools offered by evidence theory. These operators, uniformly presented
in the same spirit as others in [9], stem from consistent subbases maximal with
respect to credibility instead of set inclusion and cardinality. Moreover, in
between these two extreme operators, evidence theory lets us shed some light on
a compromise operator that avoids losing initial beliefs to the maximum extent
possible. Its idea captures maximal consistent sets stemming from all possible
intersections of maximal consistent subbases. An illustration of all these
operators and a comparison with others are investigated through examples.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2020 12:45:32 GMT"
}
] | 1,600,992,000,000 | [
[
"Ktari",
"Raïda",
""
],
[
"Boujelben",
"Mohamed Ayman",
""
]
] |
2009.12065 | Diego Perez Liebana Dr. | Raluca D. Gaina, Martin Balla, Alexander Dockhorn, Raul Montoliu,
Diego Perez-Liebana | Design and Implementation of TAG: A Tabletop Games Framework | 24 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This document describes the design and implementation of the Tabletop Games
framework (TAG), a Java-based benchmark for developing modern board games for
AI research. TAG provides a common skeleton for implementing tabletop games
based on a common API for AI agents, a set of components and classes to easily
add new games and an import module for defining data in JSON format. At
present, this platform includes the implementation of seven different tabletop
games that can also be used as an example for further developments.
Additionally, TAG also incorporates logging functionality that allows the user
to perform a detailed analysis of the game, in terms of action space, branching
factor, hidden information, and other measures of interest for Game AI
research. The objective of this document is to serve as a central point where
the framework can be described at length. TAG can be downloaded at:
https://github.com/GAIGResearch/TabletopGames
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2020 07:27:30 GMT"
}
] | 1,601,251,200,000 | [
[
"Gaina",
"Raluca D.",
""
],
[
"Balla",
"Martin",
""
],
[
"Dockhorn",
"Alexander",
""
],
[
"Montoliu",
"Raul",
""
],
[
"Perez-Liebana",
"Diego",
""
]
] |
2009.12178 | Patrick Rodler | Patrick Rodler and Fatima Elichanova | Do We Really Sample Right In Model-Based Diagnosis? | See also the online proceedings of the International Workshop on
Principles of Diagnosis 2020 (DX-2020):
http://www.dx-2020.org/papers/DX-2020_paper_13.pdf | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Statistical samples, in order to be representative, have to be drawn from a
population in a random and unbiased way. Nevertheless, it is common practice in
the field of model-based diagnosis to make estimations from (biased) best-first
samples. One example is the computation of a few most probable possible fault
explanations for a defective system and the use of these to assess which aspect
of the system, if measured, would bring the highest information gain.
In this work, we scrutinize whether these statistically ill-founded
conventions, to which both diagnosis researchers and practitioners have adhered
for decades, are indeed reasonable. To this end, we empirically analyze various
sampling methods that generate fault explanations. We study the
representativeness of the produced samples in terms of their estimations about
fault explanations and how well they guide diagnostic decisions, and we
investigate the impact of sample size, the optimal trade-off between sampling
efficiency and effectiveness, and how approximate sampling techniques compare to
exact ones.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2020 12:30:14 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 14:43:18 GMT"
}
] | 1,659,657,600,000 | [
[
"Rodler",
"Patrick",
""
],
[
"Elichanova",
"Fatima",
""
]
] |
2009.12190 | Patrick Rodler | Patrick Rodler | Sound, Complete, Linear-Space, Best-First Diagnosis Search | See also the online proceedings of the International Workshop on
Principles of Diagnosis 2020 (DX-2020):
http://www.dx-2020.org/papers/DX-2020_paper_19.pdf | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Various model-based diagnosis scenarios require the computation of the most
preferred fault explanations. Existing algorithms that are sound (i.e., output
only actual fault explanations) and complete (i.e., can return all
explanations), however, require exponential space to achieve this task. As a
remedy, to enable successful diagnosis on memory-restricted devices and for
memory-intensive problem cases, we propose RBF-HS, a diagnostic search method
based on Korf's well-known RBFS algorithm. RBF-HS can enumerate an arbitrary
fixed number of fault explanations in best-first order within linear space
bounds, without sacrificing the desirable soundness or completeness properties.
Evaluations using real-world diagnosis cases show that RBF-HS, when used to
compute minimum-cardinality fault explanations, in most cases saves substantial
space (up to 98%) while requiring only moderately more, or even less, time than
Reiter's HS-Tree, a commonly used and as generally applicable sound, complete
and best-first diagnosis search.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2020 12:49:49 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 13:22:56 GMT"
}
] | 1,659,657,600,000 | [
[
"Rodler",
"Patrick",
""
]
] |
2009.12416 | Diego Carvalho | Rafael Garcia Barbastefano and Maria Clara Lippi and Diego Carvalho | Process mining classification with a weightless neural network | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using the weightless neural network architecture WiSARD, we propose a
straightforward graph-to-retina codification to represent business process
graph flows without kernels, and we show that WiSARD achieves superior
classification performance with small training sets in the process mining
context.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2020 19:59:42 GMT"
}
] | 1,601,337,600,000 | [
[
"Barbastefano",
"Rafael Garcia",
""
],
[
"Lippi",
"Maria Clara",
""
],
[
"Carvalho",
"Diego",
""
]
] |
2009.12600 | Gavin Rens | Gavin Rens, Jean-Fran\c{c}ois Raskin, Rapha\"el Reynouad, Giuseppe
Marra | Online Learning of Non-Markovian Reward Models | 24 pages, single column, 7 figures. arXiv admin note: substantial
text overlap with arXiv:2001.09293 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are situations in which an agent should receive rewards only after
having accomplished a series of previous tasks, that is, rewards are
non-Markovian. One natural and quite general way to represent history-dependent
rewards is via a Mealy machine, a finite state automaton that produces output
sequences from input sequences. In our formal setting, we consider a Markov
decision process (MDP) that models the dynamics of the environment in which the
agent evolves and a Mealy machine synchronized with this MDP to formalize the
non-Markovian reward function. While the MDP is known by the agent, the reward
function is unknown to the agent and must be learned.
Our approach to overcome this challenge is to use Angluin's $L^*$ active
learning algorithm to learn a Mealy machine representing the underlying
non-Markovian reward machine (MRM). Formal methods are used to determine the
optimal strategy for answering so-called membership queries posed by $L^*$.
Moreover, we prove that the expected reward achieved will eventually be at
least as much as a given, reasonable value provided by a domain expert. We
evaluate our framework on three problems. The results show that using $L^*$ to
learn an MRM in a non-Markovian reward decision process is effective.
| [
{
"version": "v1",
"created": "Sat, 26 Sep 2020 13:54:34 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Sep 2020 08:56:39 GMT"
}
] | 1,601,510,400,000 | [
[
"Rens",
"Gavin",
""
],
[
"Raskin",
"Jean-François",
""
],
[
"Reynouad",
"Raphaël",
""
],
[
"Marra",
"Giuseppe",
""
]
] |
2009.12691 | Juan Camilo Fonseca-Galindo | Juan Camilo Fonseca-Galindo, Gabriela de Castro Surita, Jos\'e Maia
Neto, Cristiano Leite de Castro and Andr\'e Paim Lemos | A Multi-Agent System for Solving the Dynamic Capacitated Vehicle Routing
Problem with Stochastic Customers using Trajectory Data Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The worldwide growth of e-commerce has created new challenges for logistics
companies, one of which is being able to deliver products quickly and at low
cost, which reflects directly in the way of sorting packages, needing to
eliminate steps such as storage and batch creation. Our work presents a
multi-agent system that uses trajectory data mining techniques to extract
territorial patterns and use them in the dynamic creation of last-mile routes.
The problem can be modeled as a Dynamic Capacitated Vehicle Routing Problem
(VRP) with Stochastic Customers and is therefore NP-hard, which makes its
implementation infeasible for large numbers of packages. The work's main contribution is to
solve this problem only depending on the Warehouse system configurations and
not on the number of packages processed, which is appropriate for Big Data
scenarios commonly present in the delivery of e-commerce products.
Computational experiments were conducted for single and multi depot instances.
Due to its probabilistic nature, the proposed approach showed slightly lower
performance when compared to the static VRP algorithm. However, the operational
gains that our solution provides make it very attractive for situations in
which the routes must be set dynamically.
| [
{
"version": "v1",
"created": "Sat, 26 Sep 2020 21:36:35 GMT"
}
] | 1,601,337,600,000 | [
[
"Fonseca-Galindo",
"Juan Camilo",
""
],
[
"Surita",
"Gabriela de Castro",
""
],
[
"Neto",
"José Maia",
""
],
[
"de Castro",
"Cristiano Leite",
""
],
[
"Lemos",
"André Paim",
""
]
] |
2009.12974 | Fred Valdez Ameneyro | Fred Valdez Ameneyro, Edgar Galvan, Angel Fernando Kuri Morales | Playing Carcassonne with Monte Carlo Tree Search | 8 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) is a relatively new sampling method with
multiple variants in the literature. They can be applied to a wide variety of
challenging domains including board games, video games, and energy-based
problems to mention a few. In this work, we explore the use of the vanilla MCTS
and the MCTS with Rapid Action Value Estimation (MCTS-RAVE) in the game of
Carcassonne, a stochastic game with a deceptive scoring system on which limited
research has been conducted. We compare the strengths of the MCTS-based methods
with the Star2.5 algorithm, previously reported to yield competitive results in
the game of Carcassonne when a domain-specific heuristic is used to evaluate
the game states. We analyse the particularities of the strategies adopted by
the algorithms when they share a common reward system. The MCTS-based methods
consistently outperformed the Star2.5 algorithm given their ability to find and
follow long-term strategies, with the vanilla MCTS exhibiting a more robust
game-play than the MCTS-RAVE.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2020 22:35:53 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Oct 2020 17:49:29 GMT"
}
] | 1,601,942,400,000 | [
[
"Ameneyro",
"Fred Valdez",
""
],
[
"Galvan",
"Edgar",
""
],
[
"Morales",
    "Angel Fernando Kuri",
""
]
] |
2009.12990 | Benjamin Goertzel | Ben Goertzel | Uncertain Linear Logic via Fibring of Probabilistic and Fuzzy Logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Beginning with a simple semantics for propositions, based on counting
observations, it is shown that probabilistic and fuzzy logic correspond to two
different heuristic assumptions regarding the combination of propositions whose
evidence bases are not currently available. These two different heuristic
assumptions lead to two different sets of formulas for propagating quantitative
truth values through lattice operations. It is shown that these two sets of
formulas provide a natural grounding for the multiplicative and additive
operator-sets in linear logic. The standard rules of linear logic then emerge
as consequences of the underlying semantics. The concept of linear logic as a
``logic of resources'' is manifested here via the principle of ``conservation of
evidence'' -- the restrictions on weakening and contraction in linear logic
serve to avoid double-counting of evidence (beyond any double-counting incurred
via use of heuristic truth value functions).
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2020 00:19:42 GMT"
}
] | 1,601,337,600,000 | [
[
"Goertzel",
"Ben",
""
]
] |
2009.13058 | Luis A. Pineda | Luis A. Pineda and Gibr\'an Fuentes and Rafael Morales | An Entropic Associative Memory | 25 pages, 6 figures, 17 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural memories are associative, declarative and distributed. Symbolic
computing memories resemble natural memories in their declarative character,
and information can be stored and recovered explicitly; however, they lack the
associative and distributed properties of natural memories. Sub-symbolic
memories developed within the connectionist or artificial neural networks
paradigm are associative and distributed, but are unable to express symbolic
structure and information cannot be stored and retrieved explicitly; hence,
they lack the declarative property. To address this dilemma, we use
Relational-Indeterminate Computing to model associative memory registers that
hold distributed representations of individual objects. This mode of computing
has an intrinsic computing entropy which measures the indeterminacy of
representations. This parameter determines the operational characteristics of
the memory. Associative registers are embedded in an architecture that maps
concrete images expressed in modality-specific buffers into abstract
representations, and vice versa, and the memory system as a whole fulfills the
three properties of natural memories. The system has been used to model a
visual memory holding the representations of hand-written digits, and
recognition and recall experiments show that there is a range of entropy
values, not too low and not too high, in which associative memory registers
have a satisfactory performance. The similarity between the cue and the object
recovered in memory retrieval operations depends on the entropy of the memory
register holding the representation of the corresponding object. The
experiments were implemented in a simulation using a standard computer, but a
parallel architecture may be built where the memory operations would take a
very reduced number of computing steps.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2020 04:24:21 GMT"
}
] | 1,601,337,600,000 | [
[
"Pineda",
"Luis A.",
""
],
[
"Fuentes",
"Gibrán",
""
],
[
"Morales",
"Rafael",
""
]
] |
2009.13371 | Mehak Maniktala | Mehak Maniktala, Christa Cody, Tiffany Barnes, and Min Chi | Avoiding Help Avoidance: Using Interface Design Changes to Promote
Unsolicited Hint Usage in an Intelligent Tutor | null | International Journal of Artificial Intelligence in Education 2020 | 10.1007/s40593-020-00213-3 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within intelligent tutoring systems, considerable research has investigated
hints, including how to generate data-driven hints, what hint content to
present, and when to provide hints for optimal learning outcomes. However, less
attention has been paid to how hints are presented. In this paper, we propose a
new hint delivery mechanism called "Assertions" for providing unsolicited hints
in a data-driven intelligent tutor. Assertions are partially-worked example
steps designed to appear within a student workspace, and in the same format as
student-derived steps, to show students a possible subgoal leading to the
solution. We hypothesized that Assertions can help address the well-known hint
avoidance problem. In systems that only provide hints upon request, hint
avoidance results in students not receiving hints when they are needed. Our
unsolicited Assertions do not seek to improve student help-seeking, but rather
seek to ensure students receive the help they need. We contrast Assertions with
Messages, text-based, unsolicited hints that appear after student inactivity.
Our results show that Assertions significantly increase unsolicited hint usage
compared to Messages. Further, they show a significant aptitude-treatment
interaction between Assertions and prior proficiency, with Assertions leading
students with low prior proficiency to generate shorter (more efficient)
posttest solutions faster. We also present a clustering analysis that shows
patterns of productive persistence among students with low prior knowledge when
the tutor provides unsolicited help in the form of Assertions. Overall, this
work provides encouraging evidence that hint presentation can significantly
impact how students use them and using Assertions can be an effective way to
address help avoidance.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2020 14:39:11 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Oct 2020 16:28:55 GMT"
}
] | 1,602,633,600,000 | [
[
"Maniktala",
"Mehak",
""
],
[
"Cody",
"Christa",
""
],
[
"Barnes",
"Tiffany",
""
],
[
"Chi",
"Min",
""
]
] |
2009.13780 | Xing Wang | Xing Wang, Alexander Vinel | Cross Learning in Deep Q-Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel cross Q-learning algorithm that aims to
alleviate the well-known overestimation problem in value-based reinforcement
learning methods, particularly in deep Q-networks, where the overestimation is
exaggerated by function approximation errors. Our algorithm builds on double
Q-learning by maintaining a set of parallel models and estimating the Q-value
based on a randomly selected network, which reduces both overestimation bias
and variance. We provide empirical evidence of the advantages of our method by
evaluating it on several benchmark environments; the experimental results
demonstrate significant performance improvements in reducing overestimation
bias and stabilizing training, further leading to better derived policies.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2020 04:58:17 GMT"
}
] | 1,601,424,000,000 | [
[
"Wang",
"Xing",
""
],
[
"Vinel",
"Alexander",
""
]
] |
2009.13996 | Kary Fr\"amling | Kary Fr\"amling | Explainable AI without Interpretable Model | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainability has been a challenge in AI for as long as AI has existed. With
the recently increased use of AI in society, it has become more important than
ever that AI systems be able to explain the reasoning behind their results to
end-users, in situations such as being eliminated from a recruitment process or
having a bank loan application refused by an AI system.
Especially if the AI system has been trained using Machine Learning, it tends
to contain too many parameters for them to be analysed and understood, which
has caused them to be called `black-box' systems. Most Explainable AI (XAI)
methods are based on extracting an interpretable model that can be used for
producing explanations. However, the interpretable model does not necessarily
map accurately to the original black-box model. Furthermore, the
understandability of interpretable models for an end-user remains questionable.
The notions of Contextual Importance and Utility (CIU) presented in this paper
make it possible to produce human-like explanations of black-box outcomes
directly, without creating an interpretable model. Therefore, CIU explanations
map accurately to the black-box model itself. CIU is completely model-agnostic
and can be used with any black-box system. In addition to feature importance,
the utility concept that is well-known in Decision Theory provides a new
dimension to explanations compared to most existing XAI methods. Finally, CIU
can produce explanations at any level of abstraction and using different
vocabularies and other means of interaction, which makes it possible to adjust
explanations and interaction according to the context and to the target users.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2020 13:29:44 GMT"
}
] | 1,601,424,000,000 | [
[
"Främling",
"Kary",
""
]
] |
2009.14297 | Xing Wang | Xing Wang, Alexander Vinel | Reannealing of Decaying Exploration Based On Heuristic Measure in Deep
Q-Network | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing exploration strategies in reinforcement learning (RL) often either
ignore the history or feedback of search, or are complicated to implement.
There is also a very limited literature showing their effectiveness over
diverse domains. We propose an algorithm based on the idea of reannealing that
aims to encourage exploration only when it is needed, for example, when the
algorithm detects that the agent is stuck in a local optimum. The approach is
simple to implement. We perform an illustrative case study showing that it has
potential to both accelerate training and obtain a better policy.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2020 20:40:00 GMT"
}
] | 1,601,510,400,000 | [
[
"Wang",
"Xing",
""
],
[
"Vinel",
"Alexander",
""
]
] |
2009.14365 | Mojtaba Mozaffar | Mojtaba Mozaffar, Ablodghani Ebrahimi, Jian Cao | Toolpath design for additive manufacturing using deep reinforcement
learning | 8 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Toolpath optimization of metal-based additive manufacturing processes is
currently hampered by the high-dimensionality of its design space. In this
work, a reinforcement learning platform is proposed that dynamically learns
toolpath strategies to build an arbitrary part. To this end, three prominent
model-free reinforcement learning formulations are investigated to design
additive manufacturing toolpaths and demonstrated for two cases of dense and
sparse reward structures. The results indicate that this learning-based
toolpath design approach achieves high scores, especially when a dense reward
structure is present.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 01:03:45 GMT"
}
] | 1,601,510,400,000 | [
[
"Mozaffar",
"Mojtaba",
""
],
[
"Ebrahimi",
"Ablodghani",
""
],
[
"Cao",
"Jian",
""
]
] |
2009.14409 | Seongmin Lee | Hyun Dong Lee, Seongmin Lee and U Kang | AUBER: Automated BERT Regularization | null | null | 10.1371/journal.pone.0253241 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we effectively regularize BERT? Although BERT proves its
effectiveness in various downstream natural language processing tasks, it often
overfits when there are only a small number of training instances. A promising
direction for regularizing BERT is to prune its attention heads based on a
proxy score for head importance. However, heuristic-based methods are usually
suboptimal since they predetermine the order by which attention heads are
pruned. In order to overcome such a limitation, we propose AUBER, an effective
regularization method that leverages reinforcement learning to automatically
prune attention heads from BERT. Instead of depending on heuristics or
rule-based policies, AUBER learns a pruning policy that determines which
attention heads should or should not be pruned for regularization. Experimental
results show that AUBER outperforms existing pruning methods by achieving up to
10% better accuracy. In addition, our ablation study empirically demonstrates
the effectiveness of our design choices for AUBER.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 03:32:55 GMT"
}
] | 1,631,664,000,000 | [
[
"Lee",
"Hyun Dong",
""
],
[
"Lee",
"Seongmin",
""
],
[
"Kang",
"U",
""
]
] |
2009.14452 | Marco Pegoraro | Marco Pegoraro, Merih Seran Uysal, Wil M.P. van der Aalst | Conformance Checking over Uncertain Event Data | 39 pages, 12 figures, 10 tables, 44 references. arXiv admin note:
text overlap with arXiv:1910.00089 | Information Systems 102 (2021) 101810 | 10.1016/j.is.2021.101810 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The strong impulse to digitize processes and operations in companies and
enterprises has resulted in the creation and automatic recording of an
increasingly large amount of process data in information systems. These are
made available in the form of event logs. Process mining techniques enable the
process-centric analysis of data, including automatically discovering process
models and checking if event data conform to a given model. In this paper, we
analyze the previously unexplored setting of uncertain event logs. In such
event logs uncertainty is recorded explicitly, i.e., the time, activity and
case of an event may be unclear or imprecise. In this work, we define a
taxonomy of uncertain event logs and models, and we examine the challenges that
uncertainty poses for process discovery and conformance checking. Finally, we
show how upper and lower bounds for conformance can be obtained by aligning an
uncertain trace onto a regular process model.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2020 14:27:30 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Nov 2020 10:33:21 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2022 09:16:02 GMT"
}
] | 1,649,635,200,000 | [
[
"Pegoraro",
"Marco",
""
],
[
"Uysal",
"Merih Seran",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
] |
2009.14519 | Narjes Torabi | Narjes Torabi, Nimar S. Arora, Emma Yu, Kinjal Shah, Wenshun Liu,
Michael Tingley | Uncertainty Estimation For Community Standards Violation In Online
Social Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online Social Networks (OSNs) provide a platform for users to share their
thoughts and opinions with their community of friends or to the general public.
In order to keep the platform safe for all users, as well as to keep it
compliant with local laws, OSNs typically create a set of community standards
organized into policy groups, and use Machine Learning (ML) models to identify
and remove content that violates any of the policies. However, out of the
billions of content items uploaded on a daily basis, only a small fraction is
so unambiguously violating that it can be removed by the automated models.
Prevalence estimation is the task of estimating the fraction of violating
content in the residual items by sending a small sample of these items to human
labelers to get ground truth labels. This task is exceedingly hard because even
though we can easily get the ML scores or features for all of the billions of
items, we can only get ground truth labels for a few thousand of these items due
to practical considerations. Indeed the prevalence can be so low that even
after a judicious choice of items to be labeled there can be many days in which
not even a single item is labeled violating. A pragmatic choice for such
low-prevalence regimes, $10^{-4}$ to $10^{-5}$, is to report the upper-bound
prevalence (UBP), i.e., the upper end of a $97.5\%$ confidence interval, which
takes the uncertainties of the sampling and labeling processes into account and
gives a smoothed estimate.
In this work we present two novel techniques, Bucketed-Beta-Binomial and
Bucketed-Gaussian Process, for this UBP task and demonstrate on real and
simulated data that they have much better coverage than the commonly used
bootstrapping technique.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 09:10:22 GMT"
}
] | 1,601,510,400,000 | [
[
"Torabi",
"Narjes",
""
],
[
"Arora",
"Nimar S.",
""
],
[
"Yu",
"Emma",
""
],
[
"Shah",
"Kinjal",
""
],
[
"Liu",
"Wenshun",
""
],
[
"Tingley",
"Michael",
""
]
] |
2009.14521 | David Milec | David Milec, Jakub \v{C}ern\'y, Viliam Lis\'y, Bo An | Complexity and Algorithms for Exploiting Quantal Opponents in Large
Two-Player Games | 15 pages, 11 figures, submitted to AAAI 2021 | Proceedings of the AAAI Conference on Artificial Intelligence,
35(6), 5575-5583 (2021) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solution concepts of traditional game theory assume entirely rational
players; therefore, their ability to exploit subrational opponents is limited.
One type of subrationality that describes human behavior well is the quantal
response. While there exist algorithms for computing solutions against quantal
opponents, they either do not scale or may provide strategies that are even
worse than the entirely-rational Nash strategies. This paper aims to analyze
and propose scalable algorithms for computing effective and robust strategies
against a quantal opponent in normal-form and extensive-form games. Our
contributions are: (1) we define two different solution concepts related to
exploiting quantal opponents and analyze their properties; (2) we prove that
computing these solutions is computationally hard; (3) therefore, we evaluate
several heuristic approximations based on scalable counterfactual regret
minimization (CFR); and (4) we identify a CFR variant that exploits the bounded
opponents better than the previously used variants while being less exploitable
by the worst-case perfectly-rational opponent.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 09:14:56 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2020 12:11:43 GMT"
}
] | 1,625,702,400,000 | [
[
"Milec",
"David",
""
],
[
"Černý",
"Jakub",
""
],
[
"Lisý",
"Viliam",
""
],
[
"An",
"Bo",
""
]
] |
2009.14653 | Youri Xu | Youri Xu, E Haihong, Meina Song, Wenyu Song, Xiaodong Lv, Wang
Haotian, Yang Jinrui | RTFE: A Recursive Temporal Fact Embedding Framework for Temporal
Knowledge Graph Completion | Accepted as a main conference paper at NAACL-HLT 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Static knowledge graph (SKG) embedding (SKGE) has been studied intensively in
the past years. Recently, temporal knowledge graph (TKG) embedding (TKGE) has
emerged. In this paper, we propose a Recursive Temporal Fact Embedding (RTFE)
framework to transplant SKGE models to TKGs and to enhance the performance of
existing TKGE models for TKG completion. Unlike previous work, which ignores
the continuity of TKG states in time evolution, we treat the
sequence of graphs as a Markov chain, which transitions from the previous state
to the next state. RTFE uses the SKGE to initialize the embeddings of the TKG.
Then it recursively tracks the state transitions of the TKG by passing updated
parameters/features between timestamps. Specifically, at each timestamp, we
approximate the state transition as a gradient update process. Since RTFE
learns each timestamp recursively, it can naturally transition to future
timestamps. Experiments on five TKG datasets show the effectiveness of RTFE.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 12:59:09 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Oct 2020 11:27:22 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Apr 2021 18:15:30 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Jun 2021 07:19:14 GMT"
}
] | 1,623,024,000,000 | [
[
"Xu",
"Youri",
""
],
[
"Haihong",
"E",
""
],
[
"Song",
"Meina",
""
],
[
"Song",
"Wenyu",
""
],
[
"Lv",
"Xiaodong",
""
],
[
"Haotian",
"Wang",
""
],
[
"Jinrui",
"Yang",
""
]
] |
2009.14654 | Jiaoyan Chen | Jiaoyan Chen and Pan Hu and Ernesto Jimenez-Ruiz and Ole Magnus Holter
and Denvar Antonyrajah and Ian Horrocks | OWL2Vec*: Embedding of OWL Ontologies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic embedding of knowledge graphs has been widely studied and used for
prediction and statistical analysis tasks across various domains such as
Natural Language Processing and the Semantic Web. However, less attention has
been paid to developing robust methods for embedding OWL (Web Ontology
Language) ontologies which can express a much wider range of semantics than
knowledge graphs and have been widely adopted in domains such as
bioinformatics. In this paper, we propose a random walk and word embedding
based ontology embedding method named OWL2Vec*, which encodes the semantics of
an OWL ontology by taking into account its graph structure, lexical information
and logical constructors. Our empirical evaluation with three real world
datasets suggests that OWL2Vec* benefits from these three different aspects of
an ontology in class membership prediction and class subsumption prediction
tasks. Furthermore, OWL2Vec* often significantly outperforms the
state-of-the-art methods in our experiments.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 13:07:50 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Jan 2021 17:38:46 GMT"
}
] | 1,611,619,200,000 | [
[
"Chen",
"Jiaoyan",
""
],
[
"Hu",
"Pan",
""
],
[
"Jimenez-Ruiz",
"Ernesto",
""
],
[
"Holter",
"Ole Magnus",
""
],
[
"Antonyrajah",
"Denvar",
""
],
[
"Horrocks",
"Ian",
""
]
] |
2009.14715 | Theodore Sumers | Theodore R. Sumers, Mark K. Ho, Robert D. Hawkins, Karthik Narasimhan,
Thomas L. Griffiths | Learning Rewards from Linguistic Feedback | 9 pages, 4 figures. AAAI '21 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore unconstrained natural language feedback as a learning signal for
artificial agents. Humans use rich and varied language to teach, yet most prior
work on interactive learning from language assumes a particular form of input
(e.g., commands). We propose a general framework which does not make this
assumption, using aspect-based sentiment analysis to decompose feedback into
sentiment about the features of a Markov decision process. We then perform an
analogue of inverse reinforcement learning, regressing the sentiment on the
features to infer the teacher's latent reward function. To evaluate our
approach, we first collect a corpus of teaching behavior in a cooperative task
where both teacher and learner are human. We implement three artificial
learners: sentiment-based "literal" and "pragmatic" models, and an inference
network trained end-to-end to predict latent rewards. We then repeat our
initial experiment and pair them with human teachers. All three successfully
learn from interactive human feedback. The sentiment models outperform the
inference network, with the "pragmatic" model approaching human performance.
Our work thus provides insight into the information structure of naturalistic
linguistic feedback as well as methods to leverage it for reinforcement
learning.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 14:51:00 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2020 15:54:34 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Jul 2021 19:03:12 GMT"
}
] | 1,625,529,600,000 | [
[
"Sumers",
"Theodore R.",
""
],
[
"Ho",
"Mark K.",
""
],
[
"Hawkins",
"Robert D.",
""
],
[
"Narasimhan",
"Karthik",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] |
2009.14759 | Yuxuan Wu | Yuxuan Wu and Hideki Nakayama | Graph-based Heuristic Search for Module Selection Procedure in Neural
Module Network | in Neural Module Network[C]//Proceedings of the Asian Conference on
Computer Vision. 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Module Network (NMN) is a machine learning model for solving the
visual question answering tasks. NMN uses programs to encode modules'
structures, and its modularized architecture enables it to solve logical
problems more reasonably. However, because of the non-differentiable procedure
of module selection, NMN is hard to train end-to-end. To overcome this
problem, existing work either included ground-truth programs in the training data
or applied reinforcement learning to explore the program. However, both of
these methods still have weaknesses. In consideration of this, we propose a
new learning framework for NMN. Graph-based Heuristic Search is the algorithm
we proposed to discover the optimal program through a heuristic search on the
data structure named Program Graph. Our experiments on FigureQA and CLEVR
dataset show that our methods can realize the training of NMN without
ground-truth programs and achieve superior efficiency over existing
reinforcement learning methods in program exploration.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 15:55:44 GMT"
}
] | 1,606,694,400,000 | [
[
"Wu",
"Yuxuan",
""
],
[
"Nakayama",
"Hideki",
""
]
] |
2009.14795 | Shane Mueller | Robert R. Hoffman, William J. Clancey, and Shane T. Mueller | Explaining AI as an Exploratory Process: The Peircean Abduction Model | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current discussions of "Explainable AI" (XAI) do not much consider the role
of abduction in explanatory reasoning (see Mueller, et al., 2018). It might be
worthwhile to pursue this, to develop intelligent systems that allow for the
observation and analysis of abductive reasoning and the assessment of abductive
reasoning as a learnable skill. Abductive inference has been defined in many
ways. For example, it has been defined as the achievement of insight. Most
often abduction is taken as a single, punctuated act of syllogistic reasoning,
like making a deductive or inductive inference from given premises. In
contrast, the originator of the concept of abduction---the American
scientist/philosopher Charles Sanders Peirce---regarded abduction as an
exploratory activity. In this regard, Peirce's insights about reasoning align
with conclusions from modern psychological research. Since abduction is often
defined as "inferring the best explanation," the challenge of implementing
abductive reasoning and the challenge of automating the explanation process are
closely linked. We explore these linkages in this report. This analysis
provides a theoretical framework for understanding what the XAI researchers are
already doing, it explains why some XAI projects are succeeding (or might
succeed), and it leads to design advice.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 17:10:37 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Oct 2020 16:43:24 GMT"
}
] | 1,601,596,800,000 | [
[
"Hoffman",
"Robert R.",
""
],
[
"Clancey",
"William J.",
""
],
[
"Mueller",
"Shane T.",
""
]
] |
2009.14810 | Luiz Pessoa | Marwen Belkaid and Luiz Pessoa | Modeling emotion for human-like behavior in future intelligent robots | null | Intellectica, 79, (pp.109-128), 2023 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Over the past decades, research in cognitive and affective neuroscience has
emphasized that emotion is crucial for human intelligence and in fact
inseparable from cognition. Concurrently, there has been growing interest in
simulating and modeling emotion-related processes in robots and artificial
agents. In this opinion paper, our goal is to provide a snapshot of the present
landscape in emotion modeling and to show how neuroscience can help advance the
current state of the art. We start with an overview of the existing literature
on emotion modeling in three areas of research: affective computing, social
robotics, and neurorobotics. Briefly summarizing the current state of knowledge
on natural emotion, we then highlight how existing proposals in artificial
emotion do not make sufficient contact with neuroscientific evidence. We
conclude by providing a set of principles to help guide future research in
artificial emotion and intelligent machines more generally. Overall, we argue
that a stronger integration of emotion-related processes in robot models is
critical for the design of human-like behavior in future intelligent machines.
Such integration will not only contribute to the development of autonomous
social machines capable of tackling real-world problems but also advance our
understanding of human emotion.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 17:32:30 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 13:00:46 GMT"
}
] | 1,703,203,200,000 | [
[
"Belkaid",
"Marwen",
""
],
[
"Pessoa",
"Luiz",
""
]
] |
2009.14817 | Barbara K\"onig | Rebecca Bernemann and Benjamin Cabrera and Reiko Heckel and Barbara
K\"onig | Uncertainty Reasoning for Probabilistic Petri Nets via Bayesian Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper exploits extended Bayesian networks for uncertainty reasoning on
Petri nets, where firing of transitions is probabilistic. In particular,
Bayesian networks are used as symbolic representations of probability
distributions, modelling the observer's knowledge about the tokens in the net.
The observer can study the net by monitoring successful and failed steps.
An update mechanism for Bayesian nets is enabled by relaxing some of their
restrictions, leading to modular Bayesian nets that can conveniently be
represented and modified. As for every symbolic representation, the question is
how to derive information - in this case marginal probability distributions -
from a modular Bayesian net. We show how to do this by generalizing the known
method of variable elimination.
The approach is illustrated by examples about the spreading of diseases (SIR
model) and information diffusion in social networks. We have implemented our
approach and provide runtime results.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 17:40:54 GMT"
}
] | 1,601,510,400,000 | [
[
"Bernemann",
"Rebecca",
""
],
[
"Cabrera",
"Benjamin",
""
],
[
"Heckel",
"Reiko",
""
],
[
"König",
"Barbara",
""
]
] |
2010.00030 | Kevin Leahy | Kevin Leahy, Austin Jones, Cristian-Ioan Vasile | Fast Decomposition of Temporal Logic Specifications for Heterogeneous
Teams | null | null | 10.1109/LRA.2022.3143304 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we focus on decomposing large multi-agent path planning
problems with global temporal logic goals (common to all agents) into smaller
sub-problems that can be solved and executed independently. Crucially, the
sub-problems' solutions must jointly satisfy the common global mission
specification. The agents' missions are given as Capability Temporal Logic
(CaTL) formulas, a fragment of signal temporal logic, that can express
properties over tasks involving multiple agent capabilities (sensors, e.g.,
camera, IR, and effectors, e.g., wheeled, flying, manipulators) under strict
timing constraints. The approach we take is to decompose both the temporal
logic specification and the team of agents. We jointly reason about the
assignment of agents to subteams and the decomposition of formulas using a
satisfiability modulo theories (SMT) approach. The output of the SMT is then
distributed to subteams and leads to a significant speed up in planning time.
We include computational results to evaluate the efficiency of our solution, as
well as the trade-offs introduced by the conservative nature of the SMT
encoding.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 18:04:39 GMT"
}
] | 1,647,475,200,000 | [
[
"Leahy",
"Kevin",
""
],
[
"Jones",
"Austin",
""
],
[
"Vasile",
"Cristian-Ioan",
""
]
] |
2010.00048 | Maithilee Kunda | Maithilee Kunda and Irina Rabkina | Creative Captioning: An AI Grand Challenge Based on the Dixit Board Game | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new class of "grand challenge" AI problems that we call creative
captioning---generating clever, interesting, or abstract captions for images,
as well as understanding such captions. Creative captioning draws on core AI
research areas of vision, natural language processing, narrative reasoning, and
social reasoning, and across all these areas, it requires sophisticated uses of
common sense and cultural knowledge. In this paper, we analyze several specific
research problems that fall under creative captioning, using the popular board
game Dixit as both inspiration and proposed testing ground. We expect that
Dixit could serve as an engaging and motivating benchmark for creative
captioning across numerous AI research communities for the coming 1-2 decades.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 18:28:01 GMT"
}
] | 1,601,596,800,000 | [
[
"Kunda",
"Maithilee",
""
],
[
"Rabkina",
"Irina",
""
]
] |
2010.00055 | Florian Mirus | Florian Mirus, Terrence C. Stewart, Jorg Conradt | Analyzing the Capacity of Distributed Vector Representations to Encode
Spatial Information | null | null | 10.1109/IJCNN48605.2020.9207137 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector Symbolic Architectures belong to a family of related cognitive
modeling approaches that encode symbols and structures in high-dimensional
vectors. Similar to human subjects, whose capacity to process and store
information or concepts in short-term memory is subject to numerical
restrictions, the capacity of information that can be encoded in such vector
representations is limited and offers one way of modeling the numerical
restrictions of cognition. In this paper, we analyze these limits regarding the information
capacity of distributed representations. We focus our analysis on simple
superposition and more complex, structured representations involving
convolutive powers to encode spatial information. In two experiments, we find
upper bounds for the number of concepts that can effectively be stored in a
single vector.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 18:49:29 GMT"
}
] | 1,601,596,800,000 | [
[
"Mirus",
"Florian",
""
],
[
"Stewart",
"Terrence C.",
""
],
[
"Conradt",
"Jorg",
""
]
] |
2010.00074 | Kirk Roberts | Nicholas Greenspan and Yuqi Si and Kirk Roberts | Extracting Concepts for Precision Oncology from the Biomedical
Literature | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes an initial dataset and automatic natural language
processing (NLP) method for extracting concepts related to precision oncology
from biomedical research articles. We extract five concept types: Cancer,
Mutation, Population, Treatment, Outcome. A corpus of 250 biomedical abstracts
were annotated with these concepts following standard double-annotation
procedures. We then experiment with BERT-based models for concept extraction.
The best-performing model achieved a precision of 63.8%, a recall of 71.9%, and
an F1 of 67.1. Finally, we propose additional directions for research for
improving extraction performance and utilizing the NLP system in downstream
precision oncology applications.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2020 19:31:04 GMT"
}
] | 1,601,596,800,000 | [
[
"Greenspan",
"Nicholas",
""
],
[
"Si",
"Yuqi",
""
],
[
"Roberts",
"Kirk",
""
]
] |
2010.00238 | Zhiqiang Zhong | Zhiqiang Zhong, Cheng-Te Li and Jun Pang | Multi-grained Semantics-aware Graph Neural Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) are powerful techniques in representation
learning for graphs and have been increasingly deployed in a multitude of
different applications that involve node- and graph-wise tasks. Most existing
studies solve either the node-wise task or the graph-wise task independently
although they are inherently correlated. This work proposes a unified model,
AdamGNN, to interactively learn node and graph representations in a
mutual-optimisation manner. Compared with existing GNN models and graph pooling
methods, AdamGNN enhances the node representation with the learned
multi-grained semantics and avoids losing node features and graph structure
information during pooling. Specifically, a differentiable pooling operator is
proposed to adaptively generate a multi-grained structure that involves meso-
and macro-level semantic information in the graph. We also devise the unpooling
operator and the flyback aggregator in AdamGNN to better leverage the
multi-grained semantics to enhance node representations. The updated node
representations can further adjust the graph representation in the next
iteration. Experiments on 14 real-world graph datasets show that AdamGNN can
significantly outperform 17 competing models on both node- and graph-wise
tasks. The ablation studies confirm the effectiveness of AdamGNN's components,
and the last empirical analysis further reveals the ingenious ability of
AdamGNN in capturing long-range interactions.
| [
{
"version": "v1",
"created": "Thu, 1 Oct 2020 07:52:06 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Oct 2020 17:26:29 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2022 17:21:25 GMT"
}
] | 1,647,820,800,000 | [
[
"Zhong",
"Zhiqiang",
""
],
[
"Li",
"Cheng-Te",
""
],
[
"Pang",
"Jun",
""
]
] |
2010.00370 | Suiyi Ling | Suiyi Ling, Jing Li, Anne Flore Perrin, Zhi Li, Luk\'a\v{s} Krasula,
Patrick Le Callet | Strategy for Boosting Pair Comparison and Improving Quality Assessment
Accuracy | 8 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of rigorous quality assessment model relies on the collection
of reliable subjective data, where the perceived quality of visual multimedia
is rated by the human observers. Different subjective assessment protocols can
be used according to the objectives, which determine the discriminability and
accuracy of the subjective data.
Single stimulus methodology, e.g., the Absolute Category Rating (ACR), has
been widely adopted due to its simplicity and efficiency. However, Pair
Comparison (PC) is of significant advantage over ACR in terms of
discriminability. In addition, PC avoids the influence of observers' bias
regarding their understanding of the quality scale. Nevertheless, full pair
comparison is much more time-consuming. In this study, we therefore 1) employ a
generic model to bridge the pair comparison data and ACR data, where the
variance term could be recovered and the obtained information is more complete;
2) propose a fusion strategy to boost pair comparisons by utilizing the ACR
results as initialization information; 3) develop a novel active batch sampling
strategy based on Minimum Spanning Tree (MST) for PC. In such a way, the
proposed methodology can achieve the same accuracy as full pair comparison but
with complexity as low as ACR. Extensive experimental results demonstrate
the efficiency and accuracy of the proposed approach, which outperforms
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Thu, 1 Oct 2020 13:05:09 GMT"
}
] | 1,601,596,800,000 | [
[
"Ling",
"Suiyi",
""
],
[
"Li",
"Jing",
""
],
[
"Perrin",
"Anne Flore",
""
],
[
"Li",
"Zhi",
""
],
[
"Krasula",
"Lukáš",
""
],
[
"Callet",
"Patrick Le",
""
]
] |
2010.00499 | Patrick Kenekayoro | Patrick Kenekayoro, Biralatei Fawei | Meta-Heuristic Solutions to a Student Grouping Optimization Problem
faced in Higher Education Institutions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Combinatorial problems which have been proven to be NP-hard are faced in
Higher Education Institutions, and researchers have extensively investigated some
of the well-known combinatorial problems such as the timetabling and student
project allocation problems. However, NP-hard problems faced in Higher
Education Institutions are not only confined to these categories of
combinatorial problems. The majority of NP-hard problems faced in institutions
involve grouping students and/or resources, albeit with each problem having its
own unique set of constraints. Thus, it can be argued that techniques to solve
NP-hard problems in Higher Education Institutions can be transferred across the
different problem categories. As no method is guaranteed to outperform all
others in all problems, it is necessary to investigate heuristic techniques for
solving lesser-known problems in order to guide stakeholders or software
developers to the most appropriate algorithm for each unique class of NP-hard
problems faced in Higher Education Institutions. To this end, this study
described an optimization problem faced in a real university that involved
grouping students for the presentation of semester results. Ordering-based
heuristics, a genetic algorithm, and the ant colony optimization algorithm
implemented in the Python programming language were used to find feasible solutions
to this problem, with the ant colony optimization algorithm performing better
or equal in 75% of the test instances and the genetic algorithm producing
better or equal results in 38% of the test instances.
| [
{
"version": "v1",
"created": "Thu, 1 Oct 2020 15:44:47 GMT"
}
] | 1,601,596,800,000 | [
[
"Kenekayoro",
"Patrick",
""
],
[
"Fawei",
"Biralatei",
""
]
] |
2010.01676 | Matthew Guzdial | Faraz Khadivpour and Matthew Guzdial | Explainability via Responsibility | 8 pages, 4 figures | Proceedings of the 2020 Experiment AI in Games Workshop | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural Content Generation via Machine Learning (PCGML) refers to a group
of methods for creating game content (e.g. platformer levels, game maps, etc.)
using machine learning models. PCGML approaches rely on black box models, which
can be difficult to understand and debug by human designers who do not have
expert knowledge about machine learning. This can be even more tricky in
co-creative systems where human designers must interact with AI agents to
generate game content. In this paper we present an approach to explainable
artificial intelligence in which certain training instances are offered to
human users as an explanation for the AI agent's actions during a co-creation
process. We evaluate this approach by approximating its ability to provide
human users with explanations of the AI agent's actions and helping them to
more efficiently cooperate with the AI agent.
| [
{
"version": "v1",
"created": "Sun, 4 Oct 2020 20:41:03 GMT"
}
] | 1,601,942,400,000 | [
[
"Khadivpour",
"Faraz",
""
],
[
"Guzdial",
"Matthew",
""
]
] |
2010.01685 | Matthew Guzdial | Nazanin Yousefzadeh Khameneh and Matthew Guzdial | Entity Embedding as Game Representation | 7 pages, 6 figures | Proceedings of the 2020 Experimental AI in Games Workshop | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural content generation via machine learning (PCGML) has shown success
at producing new video game content with machine learning. However, the
majority of the work has focused on the production of static game content,
including game levels and visual elements. There has been much less work on
dynamic game content, such as game mechanics. One reason for this is the lack
of a consistent representation for dynamic game content, which is key for a
number of statistical machine learning approaches. We present an autoencoder
for deriving what we call "entity embeddings", a consistent way to represent
different dynamic entities across multiple games in the same representation. In
this paper we introduce the learned representation, along with some evidence
towards its quality and future utility.
| [
{
"version": "v1",
"created": "Sun, 4 Oct 2020 21:16:45 GMT"
}
] | 1,601,942,400,000 | [
[
"Khameneh",
"Nazanin Yousefzadeh",
""
],
[
"Guzdial",
"Matthew",
""
]
] |
2010.01909 | Sunandita Patra | Sunandita Patra, James Mason, Malik Ghallab, Dana Nau, Paolo Traverso | Deliberative Acting, Online Planning and Learning with Hierarchical
Operational Models | Published in Artificial Intelligence (AIJ). Please cite as: Sunandita
Patra, James Mason, Malik Ghallab, Dana Nau, Paolo Traverso. Deliberative
Acting, Planning and Learning with Hierarchical Operational Models.
Artificial Intelligence, Elsevier, 2021, 299, pp.103523.
10.1016/j.artint.2021.103523. arXiv admin note: text overlap with
arXiv:2003.03932 | Artificial Intelligence, Elsevier, 2021, 299, pp.103523 | 10.1016/j.artint.2021.103523 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In AI research, synthesizing a plan of action has typically used descriptive
models of the actions that abstractly specify what might happen as a result of
an action, and are tailored for efficiently computing state transitions.
However, executing the planned actions has needed operational models, in which
rich computational control structures and closed-loop online decision-making
are used to specify how to perform an action in a nondeterministic execution
context, react to events and adapt to an unfolding situation. Deliberative
actors, which integrate acting and planning, have typically needed to use both
of these models together -- which causes problems when attempting to develop
the different models, verify their consistency, and smoothly interleave acting
and planning.
As an alternative, we define and implement an integrated acting and planning
system in which both planning and acting use the same operational models. These
rely on hierarchical task-oriented refinement methods offering rich control
structures. The acting component, called Reactive Acting Engine (RAE), is
inspired by the well-known PRS system. At each decision step, RAE can get
advice from a planner for a near-optimal choice with respect to a utility
function. The anytime planner uses a UCT-like Monte Carlo Tree Search
procedure, called UPOM, whose rollouts are simulations of the actor's
operational models. We also present learning strategies for use with RAE and
UPOM that acquire, from online acting experiences and/or simulated planning
results, a mapping from decision contexts to method instances as well as a
heuristic function to guide UPOM. We demonstrate the asymptotic convergence of
UPOM towards optimal methods in static domains, and show experimentally that
UPOM and the learning strategies significantly improve the acting efficiency
and robustness.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2020 14:50:05 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jan 2021 09:13:44 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Nov 2021 21:12:54 GMT"
}
] | 1,637,107,200,000 | [
[
"Patra",
"Sunandita",
""
],
[
"Mason",
"James",
""
],
[
"Ghallab",
"Malik",
""
],
[
"Nau",
"Dana",
""
],
[
"Traverso",
"Paolo",
""
]
] |
2010.01961 | Ihor Kendiukhov | Ihor Kendiukhov | A Finite-Time Technological Singularity Model With Artificial
Intelligence Self-Improvement | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in the development of artificial intelligence, technological
progress acceleration, long-term trends of macroeconomic dynamics increase the
relevance of technological singularity hypothesis. In this paper, we build a
model of finite-time technological singularity assuming that artificial
intelligence will replace human artificial intelligence engineers after
some point in time when it is developed enough. This model implies the
following: let A be the level of development of artificial intelligence. Then,
the moment of technological singularity n is defined as the point in time where
the artificial intelligence development function approaches infinity. Thus, it
happens in finite time. Although an infinite level of development of artificial
intelligence cannot be reached in practice, this approximation is useful for
several reasons, first because it allows modeling a phase transition or a
change of regime. In the model, the intelligence growth function appears to be
a hyperbolic function under relatively broad conditions, which we list and
compare. Subsequently, we also add a stochastic term (Brownian motion) to the
model and investigate the changes in its behavior. The results can be applied
for the modeling of dynamics of various processes characterized by
multiplicative growth.
| [
{
"version": "v1",
"created": "Mon, 31 Aug 2020 15:29:14 GMT"
}
] | 1,601,942,400,000 | [
[
"Kendiukhov",
"Ihor",
""
]
] |
2010.01985 | Christopher Pereyda | Christopher Pereyda, Lawrence Holder | Measuring the Complexity of Domains Used to Evaluate AI Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is currently a rapid increase in the number of challenge problem,
benchmarking datasets and algorithmic optimization tests for evaluating AI
systems. However, there does not currently exist an objective measure to
determine the complexity between these newly created domains. This lack of
cross-domain examination creates an obstacle to effective research on more
general AI systems. We propose a theory for measuring the complexity between
varied domains. This theory is then evaluated using approximations computed by a
population of neural-network-based AI systems. The approximations are compared
to other well-known standards and show that the measure meets intuitions of complexity. An
application of this measure is then demonstrated to show its effectiveness as a
tool in varied situations. The experimental results show this measure has
promise as an effective tool for aiding in the evaluation of AI systems. We
propose the future use of such a complexity metric in computing an AI
system's intelligence.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2020 21:53:07 GMT"
}
] | 1,601,942,400,000 | [
[
"Pereyda",
"Christopher",
""
],
[
"Holder",
"Lawrence",
""
]
] |
2010.02627 | Felipe Meneguzzi | Nir Oren and Felipe Meneguzzi | Norm Identification through Plan Recognition | Published as "In 15th International Workshop on Coordination,
Organisations, Institutions and Norms (COIN 2013) @AAMAS, Saint Paul, MN,
USA, 2013." | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Societal rules, as exemplified by norms, aim to provide a degree of
behavioural stability to multi-agent societies. Norms regulate a society using
the deontic concepts of permissions, obligations and prohibitions to specify
what can, must and must not occur in a society. Many implementations of
normative systems assume various combinations of the following assumptions:
that the set of norms is static and defined at design time; that agents joining
a society are instantly informed of the complete set of norms; that the set of
agents within a society does not change; and that all agents are aware of the
existing norms. When any one of these assumptions is dropped, agents need a
mechanism to identify the set of norms currently present within a society, or
risk unwittingly violating the norms. In this paper, we develop a norm
identification mechanism that uses a combination of parsing-based plan
recognition and Hierarchical Task Network (HTN) planning mechanisms, which
operates by analysing the actions performed by other agents. While our basic
mechanism cannot learn in situations where norm violations take place, we
describe an extension which is able to operate in the presence of violations.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2020 11:18:52 GMT"
}
] | 1,602,028,800,000 | [
[
"Oren",
"Nir",
""
],
[
"Meneguzzi",
"Felipe",
""
]
] |
2010.02911 | James Miller Dr | James D. Miller, Roman Yampolskiy, Olle Haggstrom, Stuart Armstrong | Chess as a Testing Grounds for the Oracle Approach to AI Safety | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To reduce the danger of powerful super-intelligent AIs, we might make the
first such AIs oracles that can only send and receive messages. This paper
proposes a possibly practical means of using machine learning to create two
classes of narrow AI oracles that would provide chess advice: those aligned
with the player's interest, and those that want the player to lose and give
deceptively bad advice. The player would be uncertain which type of oracle it
was interacting with. As the oracles would be vastly more intelligent than the
player in the domain of chess, experience with these oracles might help us
prepare for future artificial general intelligence oracles.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2020 17:47:53 GMT"
}
] | 1,602,028,800,000 | [
[
"Miller",
"James D.",
""
],
[
"Yampolskiy",
"Roman",
""
],
[
"Haggstrom",
"Olle",
""
],
[
"Armstrong",
"Stuart",
""
]
] |
2010.03597 | John Mern | John Mern, Anil Yildiz, Zachary Sunberg, Tapan Mukerji, Mykel J.
Kochenderfer | Bayesian Optimized Monte Carlo Planning | 8 pages | AAAI-21 Technical Tracks Vol. 35, No. 13, 2021, 11880-11887 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online solvers for partially observable Markov decision processes have
difficulty scaling to problems with large action spaces. Monte Carlo tree
search with progressive widening attempts to improve scaling by sampling from
the action space to construct a policy search tree. The performance of
progressive widening search is dependent upon the action sampling policy, often
requiring problem-specific samplers. In this work, we present a general method
for efficient action sampling based on Bayesian optimization. The proposed
method uses a Gaussian process to model a belief over the action-value function
and selects the action that will maximize the expected improvement in the
optimal action value. We implement the proposed approach in a new online tree
search algorithm called Bayesian Optimized Monte Carlo Planning (BOMCP).
Several experiments show that BOMCP is better able to scale to large action
space POMDPs than existing state-of-the-art tree search solvers.
| [
{
"version": "v1",
"created": "Wed, 7 Oct 2020 18:29:27 GMT"
}
] | 1,635,984,000,000 | [
[
"Mern",
"John",
""
],
[
"Yildiz",
"Anil",
""
],
[
"Sunberg",
"Zachary",
""
],
[
"Mukerji",
"Tapan",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2010.04282 | Patrick Rodler | Patrick Rodler | RBF-HS: Recursive Best-First Hitting Set Search | This is a technical report underlying the work "Patrick Rodler.
Memory-limited model-based diagnosis" published in the journal Artificial
Intelligence, volume 305, 2022. arXiv admin note: text overlap with
arXiv:2009.12190 | Artificial Intelligence 305, April 2022, 103681 | 10.1016/j.artint.2022.103681 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Various model-based diagnosis scenarios require the computation of most
preferred fault explanations. Existing algorithms that are sound (i.e., output
only actual fault explanations) and complete (i.e., can return all
explanations), however, require exponential space to achieve this task. As a
remedy, we propose two novel diagnostic search algorithms, called RBF-HS
(Recursive Best-First Hitting Set Search) and HBF-HS (Hybrid Best-First Hitting
Set Search), which build upon tried and tested techniques from the heuristic
search domain. RBF-HS can enumerate an arbitrary predefined finite number of
fault explanations in best-first order within linear space bounds, without
sacrificing the desirable soundness or completeness properties. The idea of
HBF-HS is to find a trade-off between runtime optimization and a restricted
space consumption that does not exceed the available memory.
In extensive experiments on real-world diagnosis cases we compared our
approaches to Reiter's HS-Tree, a state-of-the-art method that gives the same
theoretical guarantees and is as general(ly applicable) as the suggested
algorithms. For the computation of minimum-cardinality fault explanations, we
find that (1) RBF-HS reduces memory requirements substantially in most cases by
up to several orders of magnitude, (2) in more than a third of the cases, both
memory savings and runtime savings are achieved, and (3) given the runtime
overhead is significant, using HBF-HS instead of RBF-HS reduces the runtime to
values comparable with HS-Tree while keeping the used memory reasonably
bounded. When computing most probable fault explanations, we observe that
RBF-HS tends to trade memory savings more or less one-to-one for runtime
overheads. Again, HBF-HS proves to be a reasonable remedy to cut down the
runtime while complying with practicable memory bounds.
| [
{
"version": "v1",
"created": "Thu, 8 Oct 2020 22:09:39 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 14:29:08 GMT"
}
] | 1,645,488,000,000 | [
[
"Rodler",
"Patrick",
""
]
] |
2010.04550 | Maksim Tomic | Maksim Tomic | Quantum Computational Psychoanalysis -- Quantum logic approach to
Bi-logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we are dealing with the fundamental concepts of Bi-logic
proposed by Chilean psychoanalyst Ignacio Matte Blanco in the context of
quantum logic, founded by Garrett Birkhoff and John von Neumann. The main
purpose of this paper is to present how a quantum-logical model, represented by
the lattice of a closed subspace of Hilbert space, can be used as a
computational framework for concepts that are originally described by Sigmund
Freud as the fundamental properties of the unconscious psyche.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2020 11:40:14 GMT"
}
] | 1,605,052,800,000 | [
[
"Tomic",
"Maksim",
""
]
] |
2010.04687 | Andrea Ferrario | Andrea Ferrario, Michele Loi | A Series of Unfortunate Counterfactual Events: the Role of Time in
Counterfactual Explanations | 11 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counterfactual explanations are a prominent example of post-hoc
interpretability methods in the explainable Artificial Intelligence research
domain. They provide individuals with alternative scenarios and a set of
recommendations to achieve a sought-after machine learning model outcome.
Recently, the literature has identified desiderata of counterfactual
explanations, such as feasibility, actionability and sparsity that should
support their applicability in real-world contexts. However, we show that the
literature has neglected the problem of the time dependency of counterfactual
explanations. We argue that, due to their time dependency and because of the
provision of recommendations, even feasible, actionable and sparse
counterfactual explanations may not be appropriate in real-world applications.
This is due to the possible emergence of what we call "unfortunate
counterfactual events." These events may occur due to the retraining of machine
learning models whose outcomes have to be explained via counterfactual
explanation. Series of unfortunate counterfactual events frustrate the efforts
of those individuals who successfully implemented the recommendations of
counterfactual explanations. This negatively affects people's trust in the
ability of institutions to provide machine learning-supported decisions
consistently. We introduce an approach to address the problem of the emergence
of unfortunate counterfactual events that makes use of histories of
counterfactual explanations. In the final part of the paper we propose an
ethical analysis of two distinct strategies to cope with the challenge of
unfortunate counterfactual events. We show that they respond to an ethically
responsible imperative to preserve the trustworthiness of credit lending
organizations, the decision models they employ, and the social-economic
function of credit lending.
| [
{
"version": "v1",
"created": "Fri, 9 Oct 2020 17:16:29 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jan 2021 19:52:07 GMT"
}
] | 1,611,100,800,000 | [
[
"Ferrario",
"Andrea",
""
],
[
"Loi",
"Michele",
""
]
] |
2010.04949 | Mingxiang Chen | Mingxiang Chen, Zhecheng Wang | Image Generation With Neural Cellular Automatas | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel approach to generate images (or other
artworks) by using neural cellular automata (NCAs). Rather than training NCAs
based on single images one by one, we combined the idea with variational
autoencoders (VAEs), and hence explored some applications, such as image
restoration and style fusion. The code for model implementation is available
online.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2020 08:52:52 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Nov 2020 03:34:23 GMT"
}
] | 1,604,966,400,000 | [
[
"Chen",
"Mingxiang",
""
],
[
"Wang",
"Zhecheng",
""
]
] |
2010.04974 | Xiangming Gu | Xiangming Gu and Xiang Cheng | Distilling a Deep Neural Network into a Takagi-Sugeno-Kang Fuzzy
Inference System | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) demonstrate great success in classification
tasks. However, they act as black boxes and we don't know how they make
decisions in a particular classification task. To this end, we propose to
distill the knowledge from a DNN into a fuzzy inference system (FIS), which is
of Takagi-Sugeno-Kang (TSK) type in this paper. The model can
express the knowledge acquired by a DNN in terms of fuzzy rules, thus making a
particular decision much easier to explain. Knowledge distillation (KD) is applied to
create a TSK-type FIS that generalizes better than one built directly from the
training data, as verified through experiments in this paper. To
further improve performance, we modify the baseline KD method and
obtain good results.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2020 10:58:05 GMT"
}
] | 1,602,547,200,000 | [
[
"Gu",
"Xiangming",
""
],
[
"Cheng",
"Xiang",
""
]
] |
2010.04990 | Yassine Himeur | Christos Sardianos and Iraklis Varlamis and Christos Chronis and
George Dimitrakopoulos and Abdullah Alsalemi and Yassine Himeur and Faycal
Bensaali and Abbes Amira | The emergence of Explainability of Intelligent Systems: Delivering
Explainable and Personalised Recommendations for Energy Efficiency | 19 pages, 8 figures, 1 table | International Journal of Intelligent Systems, 2020 | 10.1002/int.22314 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent advances in artificial intelligence namely in machine learning and
deep learning, have boosted the performance of intelligent systems in several
ways. This gave rise to human expectations, but also created the need for a
deeper understanding of how intelligent systems think and decide. The concept
of explainability thus emerged, aiming to explain the internal system
mechanics in human terms. Recommendation systems are intelligent systems that
support human decision making, and as such, they have to be explainable in
order to increase user trust and improve the acceptance of recommendations. In
this work, we focus on a context-aware recommendation system for energy
efficiency and develop a mechanism for explainable and persuasive
recommendations, which are personalized to user preferences and habits. The
persuasive facts either emphasize the economic saving prospects (Econ) or
on a positive ecological impact (Eco) and explanations provide the reason for
recommending an energy saving action. Based on a study conducted using a
Telegram bot, different scenarios have been validated with actual data and
human feedback. Current results show a total increase of 19\% on the
recommendation acceptance ratio when both economical and ecological persuasive
facts are employed. This revolutionary approach to recommendation systems
demonstrates how intelligent recommendations can effectively encourage energy
saving behavior.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2020 13:11:43 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Oct 2020 11:25:18 GMT"
}
] | 1,603,756,800,000 | [
[
"Sardianos",
"Christos",
""
],
[
"Varlamis",
"Iraklis",
""
],
[
"Chronis",
"Christos",
""
],
[
"Dimitrakopoulos",
"George",
""
],
[
"Alsalemi",
"Abdullah",
""
],
[
"Himeur",
"Yassine",
""
],
[
"Bensaali",
"Faycal",
""
],
[
"Amira",
"Abbes",
""
]
] |
2010.05180 | Zhengxian Lin | Zhengxian Lin, Kim-Ho Lam and Alan Fern | Contrastive Explanations for Reinforcement Learning via Embedded Self
Predictions | Published (Oral) at ICLR 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate a deep reinforcement learning (RL) architecture that supports
explaining why a learned agent prefers one action over another. The key idea is
to learn action-values that are directly represented via human-understandable
properties of expected futures. This is realized via the embedded
self-prediction (ESP) model, which learns said properties in terms of
human-provided features. Action preferences can then be explained by contrasting the
future properties predicted for each action. To address cases where there are a
large number of features, we develop a novel method for computing minimal
sufficient explanations from an ESP. Our case studies in three domains,
including a complex strategy game, show that ESP models can be effectively
learned and support insightful explanations.
| [
{
"version": "v1",
"created": "Sun, 11 Oct 2020 07:02:20 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jan 2021 08:53:22 GMT"
}
] | 1,611,014,400,000 | [
[
"Lin",
"Zhengxian",
""
],
[
"Lam",
"Kim-Ho",
""
],
[
"Fern",
"Alan",
""
]
] |
2010.05394 | Fred Glover | Fred Glover | Exploiting Local Optimality in Metaheuristic Search | 60 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of strategies have been proposed for overcoming local optimality in
metaheuristic search. This paper examines characteristics of moves that can be
exploited to make good decisions about steps that lead away from a local
optimum and then lead toward a new local optimum. We introduce strategies to
identify and take advantage of useful features of solution history with an
adaptive memory metaheuristic, to provide rules for selecting moves that offer
promise for discovering improved local optima.
Our approach uses a new type of adaptive memory based on a construction
called exponential extrapolation. The memory operates by means of threshold
inequalities that ensure selected moves will not lead to a specified number of
most recently encountered local optima. Associated thresholds are embodied in
choice rule strategies that further exploit the exponential extrapolation
concept. Together these produce a threshold based Alternating Ascent (AA)
algorithm that opens a variety of research possibilities for exploration.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2020 01:51:09 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Oct 2020 13:59:58 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Oct 2020 13:45:42 GMT"
}
] | 1,603,324,800,000 | [
[
"Glover",
"Fred",
""
]
] |
2010.05418 | Stephen Casper | Stephen Casper | Achilles Heels for AGI/ASI via Decision Theoretic Adversaries | Contact info for author at stephencasper.com | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As progress in AI continues to advance, it is important to know how advanced
systems will make choices and in what ways they may fail. Machines can already
outsmart humans in some domains, and understanding how to safely build ones
which may have capabilities at or above the human level is of particular
concern. One might suspect that artificially generally intelligent (AGI) and
artificially superintelligent (ASI) systems will be ones that humans cannot reliably
outsmart. As a challenge to this assumption, this paper presents the Achilles
Heel hypothesis which states that even a potentially superintelligent system
may nonetheless have stable decision-theoretic delusions which cause it to
make irrational decisions in adversarial settings. In a survey of key dilemmas
and paradoxes from the decision theory literature, a number of these potential
Achilles Heels are discussed in context of this hypothesis. Several novel
contributions are made toward understanding the ways in which these weaknesses
might be implanted into a system.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2020 02:53:23 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Dec 2020 00:51:41 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jul 2021 01:39:09 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Mar 2022 21:50:47 GMT"
},
{
"version": "v5",
"created": "Mon, 18 Jul 2022 21:47:00 GMT"
},
{
"version": "v6",
"created": "Wed, 20 Jul 2022 21:17:11 GMT"
},
{
"version": "v7",
"created": "Sun, 18 Sep 2022 21:36:12 GMT"
},
{
"version": "v8",
"created": "Fri, 7 Oct 2022 17:19:34 GMT"
},
{
"version": "v9",
"created": "Sun, 2 Apr 2023 03:20:17 GMT"
}
] | 1,680,566,400,000 | [
[
"Casper",
"Stephen",
""
]
] |
2010.05453 | Son-Il Kwak | I.M. Son, S.I. Kwak, M.O. Choe | Fuzzy Approximate Reasoning Method based on Least Common Multiple and
its Property Analysis | 18 pages, 0 figures, 14 tables. arXiv admin note: substantial text
overlap with arXiv:2003.13450 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows a novel fuzzy approximate reasoning method based on the
least common multiple (LCM). Its fundamental idea is to obtain a new fuzzy
reasoning result by the extended distance measure based on LCM between the
antecedent fuzzy set and the consequent one in discrete SISO fuzzy system. The
proposed method is called the LCM method. This paper then analyzes some of its
properties, i.e., the reductive property, the information loss incurred in the
reasoning process, and the convergence of fuzzy control. Theoretical and
experimental results highlight that the proposed method meaningfully
improves the reductive property, information loss, and controllability over
previous fuzzy reasoning methods.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2020 07:22:28 GMT"
}
] | 1,602,547,200,000 | [
[
"Son",
"I. M.",
""
],
[
"Kwak",
"S. I.",
""
],
[
"Choe",
"M. O.",
""
]
] |
2010.05480 | Ildar Rakhmatulin | Ildar Rakhmatulin | A review of the low-cost eye-tracking systems for 2010-2020 | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The manuscript presented an analysis of the work in the field of eye-tracking
over the past ten years in the low-cost field. We researched in detail the
methods, algorithms, and the hardware developed. To realize this task, we
considered commercial eye-tracking systems (hardware and software) and
free software. Additionally, the manuscript considered advances in the neural
network field for eye-tracking tasks and the problems which hold back the
development of low-cost eye-tracking systems. Special attention in the
manuscript is given to recommendations for further research in the field of
low-cost eye-tracking devices.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2020 06:54:27 GMT"
}
] | 1,602,547,200,000 | [
[
"Rakhmatulin",
"Ildar",
""
]
] |
2010.06002 | Andrea Loreggia | Grady Booch, Francesco Fabiano, Lior Horesh, Kiran Kate, Jon Lenchner,
Nick Linck, Andrea Loreggia, Keerthiram Murugesan, Nicholas Mattei, Francesca
Rossi, Biplav Srivastava | Thinking Fast and Slow in AI | null | Proceedings of the AAAI Conference on Artificial Intelligence
2021, 35(17), 15042-15046 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a research direction to advance AI which draws
inspiration from cognitive theories of human decision making. The premise is
that if we gain insights about the causes of some human capabilities that are
still lacking in AI (for instance, adaptability, generalizability, common
sense, and causal reasoning), we may obtain similar capabilities in an AI
system by embedding these causal components. We hope that the high-level
description of our vision included in this paper, as well as the several
research questions that we propose to consider, can stimulate the AI research
community to define, try and evaluate new methodologies, frameworks, and
evaluation metrics, in the spirit of achieving a better understanding of both
human and machine intelligence.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2020 20:10:05 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Dec 2020 21:12:08 GMT"
}
] | 1,622,505,600,000 | [
[
"Booch",
"Grady",
""
],
[
"Fabiano",
"Francesco",
""
],
[
"Horesh",
"Lior",
""
],
[
"Kate",
"Kiran",
""
],
[
"Lenchner",
"Jon",
""
],
[
"Linck",
"Nick",
""
],
[
"Loreggia",
"Andrea",
""
],
[
"Murugesan",
"Keerthiram",
""
],
[
"Mattei",
"Nicholas",
""
],
[
"Rossi",
"Francesca",
""
],
[
"Srivastava",
"Biplav",
""
]
] |
2010.06049 | Alejandro Flores Mr | A. Flores and G. Flores | Implementation of a neural network for non-linearities estimation in a
tail-sitter aircraft | 11 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The control of a tail-sitter aircraft is a challenging task, especially
during transition maneuver where the lift and drag forces are highly nonlinear.
In this work, we implement a Neural Network (NN) capable of estimate such
nonlinearities. Once they are estimated, one can propose a control scheme where
these forces can correctly feed-forwarded. Our implementation of the NN has
been programmed in C++ on the PX4 Autopilot an open-source autopilot for
drones. To ensure that this implementation does not considerably affect the
autopilot's performance, the coded NN must be of a light computational load.
With the aim to test our approach, we have carried out a series of realistic
simulations in the Software in The Loop (SITL) using the PX4 Autopilot. These
experiments demonstrate that the implemented NN can be used to estimate the
tail-sitter aerodynamic forces, and can be used to improve the control
algorithms during all the flight phases of the tail-sitter aircraft: hover,
cruise flight, and transition.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2020 21:46:16 GMT"
}
] | 1,602,633,600,000 | [
[
"Flores",
"A.",
""
],
[
"Flores",
"G.",
""
]
] |
2010.06059 | Sonia Baee | Sonia Baee, Mark Rucker, Anna Baglione, Mawulolo K. Ameko, Laura
Barnes | A Framework for Addressing the Risks and Opportunities In AI-Supported
Virtual Health Coaches | 4 pages | null | 10.1145/3421937.3421971 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual coaching has rapidly evolved into a foundational component of modern
clinical practice. At a time when healthcare professionals are in short supply
and the demand for low-cost treatments is ever-increasing, virtual health
coaches (VHCs) offer intervention-on-demand for those limited by finances or
geographic access to care. More recently, AI-powered virtual coaches have
become a viable complement to human coaches. However, the push for AI-powered
coaching systems raises several important issues for researchers, designers,
clinicians, and patients. In this paper, we present a novel framework to guide
the design and development of virtual coaching systems. This framework augments
a traditional data science pipeline with four key guiding goals: reliability,
fairness, engagement, and ethics.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2020 22:41:35 GMT"
}
] | 1,609,891,200,000 | [
[
"Baee",
"Sonia",
""
],
[
"Rucker",
"Mark",
""
],
[
"Baglione",
"Anna",
""
],
[
"Ameko",
"Mawulolo K.",
""
],
[
"Barnes",
"Laura",
""
]
] |
2010.06164 | Mauricio Gonzalez-Soto | Mauricio Gonzalez-Soto, Ivan R. Feliciano-Avelino, L. Enrique Sucar,
Hugo J. Escalante Balderas | Causal Structure Learning: a Bayesian approach based on random graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Random Graph is a random object which takes its values in the space of
graphs. We take advantage of the expressibility of graphs in order to model the
uncertainty about the existence of causal relationships within a given set of
variables. We adopt a Bayesian point of view in order to capture a causal
structure via interaction and learning with a causal environment. We test our
method over two different scenarios, and the experiments mainly confirm that
our technique can learn a causal structure. Furthermore, the experiments and
results presented for the first test scenario demonstrate the usefulness of our
method to learn a causal structure as well as the optimal action. On the other
hand, the second experiment shows that our proposal manages to learn the
underlying causal structure of several tasks with different sizes and different
causal structures.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2020 04:13:06 GMT"
}
] | 1,602,633,600,000 | [
[
"Gonzalez-Soto",
"Mauricio",
""
],
[
"Feliciano-Avelino",
"Ivan R.",
""
],
[
"Sucar",
"L. Enrique",
""
],
[
"Balderas",
"Hugo J. Escalante",
""
]
] |
2010.06425 | Esther Rodrigo Bonet | Esther Rodrigo Bonet, Duc Minh Nguyen and Nikos Deligiannis | Temporal Collaborative Filtering with Graph Convolutional Neural
Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal collaborative filtering (TCF) methods aim at modelling non-static
aspects behind recommender systems, such as the dynamics in users' preferences
and social trends around items. State-of-the-art TCF methods employ recurrent
neural networks (RNNs) to model such aspects. These methods deploy
matrix-factorization-based (MF-based) approaches to learn the user and item
representations. Recently, graph-neural-network-based (GNN-based) approaches
have shown improved performance in providing accurate recommendations over
traditional MF-based approaches in non-temporal CF settings. Motivated by this,
we propose a novel TCF method that leverages GNNs to learn user and item
representations, and RNNs to model their temporal dynamics. A challenge with
this method lies in the increased data sparsity, which negatively impacts
obtaining meaningful quality representations with GNNs. To overcome this
challenge, we train a GNN model at each time step using a set of observed
interactions accumulated time-wise. Comprehensive experiments on real-world
data show the improved performance obtained by our method over several
state-of-the-art temporal and non-temporal CF models.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2020 14:38:40 GMT"
}
] | 1,602,633,600,000 | [
[
"Bonet",
"Esther Rodrigo",
""
],
[
"Nguyen",
"Duc Minh",
""
],
[
"Deligiannis",
"Nikos",
""
]
] |
2010.06627 | Matthew Fontaine | Hejia Zhang, Matthew C. Fontaine, Amy K. Hoover, Julian Togelius,
Bistra Dilkina, Stefanos Nikolaidis | Video Game Level Repair via Mixed Integer Linear Programming | Accepted to AIIDE 2020 (oral) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in procedural content generation via machine learning
enable the generation of video-game levels that are aesthetically similar to
human-authored examples. However, the generated levels are often unplayable
without additional editing. We propose a generate-then-repair framework for
automatic generation of playable levels adhering to specific styles. The
framework constructs levels using a generative adversarial network (GAN)
trained with human-authored examples and repairs them using a mixed-integer
linear program (MIP) with playability constraints. A key component of the
framework is computing minimum cost edits between the GAN generated level and
the solution of the MIP solver, which we cast as a minimum cost network flow
problem. Results show that the proposed framework generates a diverse range of
playable levels that capture the spatial relationships between objects
exhibited in the human-authored levels.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2020 18:37:58 GMT"
}
] | 1,602,720,000,000 | [
[
"Zhang",
"Hejia",
""
],
[
"Fontaine",
"Matthew C.",
""
],
[
"Hoover",
"Amy K.",
""
],
[
"Togelius",
"Julian",
""
],
[
"Dilkina",
"Bistra",
""
],
[
"Nikolaidis",
"Stefanos",
""
]
] |
2010.07126 | Lav Varshney | Lav R. Varshney, Nazneen Fatema Rajani, and Richard Socher | Explaining Creative Artifacts | 2020 Workshop on Human Interpretability in Machine Learning (WHI), at
ICML 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human creativity is often described as the mental process of combining
associative elements into a new form, but emerging computational creativity
algorithms may not operate in this manner. Here we develop an inverse problem
formulation to deconstruct the products of combinatorial and compositional
creativity into associative chains as a form of post-hoc interpretation that
matches the human creative process. In particular, our formulation is
structured as solving a traveling salesman problem through a knowledge graph of
associative elements. We demonstrate our approach using an example in
explaining culinary computational creativity where there is an explicit
semantic structure, and two examples in language generation where we either
extract explicit concepts that map to a knowledge graph or we consider
distances in a word embedding space. We close by casting the length of an
optimal traveling salesman path as a measure of novelty in creativity.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2020 14:32:38 GMT"
}
] | 1,602,720,000,000 | [
[
"Varshney",
"Lav R.",
""
],
[
"Rajani",
"Nazneen Fatema",
""
],
[
"Socher",
"Richard",
""
]
] |
2010.07504 | Ayan Mukhopadhyay | Ayan Mukhopadhyay and Geoffrey Pettet and Mykel Kochenderfer and
Abhishek Dubey | Designing Emergency Response Pipelines : Lessons and Challenges | Accepted at the AI for Social Good Workshop, AAAI Fall Symposium
Series 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emergency response to incidents such as accidents, crimes, and fires is a
major problem faced by communities. Emergency response management comprises
several stages and sub-problems like forecasting, resource allocation, and
dispatch. The design of principled approaches to tackle each problem is
necessary to create efficient emergency response management (ERM) pipelines.
Over the last six years, we have worked with several first responder
organizations to design ERM pipelines. In this paper, we highlight some of the
challenges that we have identified and lessons that we have learned through our
experience in this domain. Such challenges are particularly relevant for
practitioners and researchers, and are important considerations even in the
design of response strategies to mitigate disasters like floods and
earthquakes.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 04:04:15 GMT"
}
] | 1,602,806,400,000 | [
[
"Mukhopadhyay",
"Ayan",
""
],
[
"Pettet",
"Geoffrey",
""
],
[
"Kochenderfer",
"Mykel",
""
],
[
"Dubey",
"Abhishek",
""
]
] |
2010.07533 | Liang Li | Bin-Bin Zhao and Liang Li and Hui-Dong Zhang | TDRE: A Tensor Decomposition Based Approach for Relation Extraction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting entity pairs along with relation types from unstructured texts is
a fundamental subtask of information extraction. Most existing joint models
rely on fine-grained labeling scheme or focus on shared embedding parameters.
These methods directly model the joint probability of multi-labeled triplets,
which suffer from extracting redundant triplets with all relation types.
However, each sentence may contain very few relation types. In this paper, we
first model the final triplet extraction result as a three-order tensor of
word-to-word pairs enriched with each relation type. To obtain the relations
contained in a sentence, we introduce an independently but jointly trained
relation classification module. The tensor decomposition strategy is finally
utilized to decompose the triplet tensor with predicted relational components
which omits the calculations for unpredicted relation types. According to
effective decomposition methods, we propose the Tensor Decomposition based
Relation Extraction (TDRE) approach which is able to extract overlapping
triplets and avoid detecting unnecessary entity pairs. Experiments on benchmark
datasets NYT, CoNLL04 and ADE datasets demonstrate that the proposed method
outperforms existing strong baselines.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 05:29:34 GMT"
}
] | 1,602,806,400,000 | [
[
"Zhao",
"Bin-Bin",
""
],
[
"Li",
"Liang",
""
],
[
"Zhang",
"Hui-Dong",
""
]
] |
2010.07647 | Shakshi Sharma | Shakshi Sharma and Rajesh Sharma | Identifying Possible Rumor Spreaders on Twitter: A Weak Supervised
Learning Approach | Published at The International Joint Conference on Neural Networks
2021 (IJCNN2021). Please cite the IJCNN version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online Social Media (OSM) platforms such as Twitter, Facebook are extensively
exploited by the users of these platforms for spreading the (mis)information to
a large audience effortlessly at a rapid pace. It has been observed that the
misinformation can cause panic, fear, and financial loss to society. Thus, it
is important to detect and control the misinformation in such platforms before
it spreads to the masses. In this work, we focus on rumors, which is one type
of misinformation (other types are fake news, hoaxes, etc). One way to control
the spread of the rumors is by identifying users who are possibly the rumor
spreaders, that is, users who are often involved in spreading the rumors. Due
to the lack of availability of rumor spreaders labeled dataset (which is an
expensive task), we use publicly available PHEME dataset, which contains rumor
and non-rumor tweets information, and then apply a weak supervised learning
approach to transform the PHEME dataset into rumor spreaders dataset. We
utilize three types of features, that is, user, text, and ego-network features,
before applying various supervised learning approaches. In particular, to
exploit the inherent network property in this dataset (user-user reply graph),
we explore Graph Convolutional Network (GCN), a type of Graph Neural Network
(GNN) technique. We compare GCN results with the other approaches: SVM, RF, and
LSTM. Extensive experiments performed on the rumor spreaders dataset, where we
achieve up to 0.864 value for F1-Score and 0.720 value for AUC-ROC, shows the
effectiveness of our methodology for identifying possible rumor spreaders using
the GCN technique.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 10:31:28 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jul 2021 09:16:25 GMT"
}
] | 1,625,616,000,000 | [
[
"Sharma",
"Shakshi",
""
],
[
"Sharma",
"Rajesh",
""
]
] |
2010.07710 | Mauro Vallati | Mauro Vallati and Lukas Chrpa and Thomas L. McCluskey and Frank Hutter | On the Importance of Domain Model Configuration for Automated Planning
Engines | Under consideration in Journal of Automated Reasoning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of domain-independent planners within the AI Planning
community is leading to "off-the-shelf" technology that can be used in a wide
range of applications. Moreover, it allows a modular approach --in which
planners and domain knowledge are modules of larger software applications--
that facilitates substitutions or improvements of individual modules without
changing the rest of the system. This approach also supports the use of
reformulation and configuration techniques, which transform how a model is
represented in order to improve the efficiency of plan generation.
In this article, we investigate how the performance of domain-independent
planners is affected by domain model configuration, i.e., the order in which
elements are ordered in the model, particularly in the light of planner
comparisons. We then introduce techniques for the online and offline
configuration of domain models, and we analyse the impact of domain model
configuration on other reformulation approaches, such as macros.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 12:40:02 GMT"
}
] | 1,602,806,400,000 | [
[
"Vallati",
"Mauro",
""
],
[
"Chrpa",
"Lukas",
""
],
[
"McCluskey",
"Thomas L.",
""
],
[
"Hutter",
"Frank",
""
]
] |
2010.07722 | Pengfei Yang | Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang,
Jun Sun, Bai Xue, Lijun Zhang | Improving Neural Network Verification through Spurious Region Guided
Refinement | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a spurious region guided refinement approach for robustness
verification of deep neural networks. Our method starts with applying the
DeepPoly abstract domain to analyze the network. If the robustness property
cannot be verified, the result is inconclusive. Due to the over-approximation,
the computed region in the abstraction may be spurious in the sense that it
does not contain any true counterexample. Our goal is to identify such spurious
regions and use them to guide the abstraction refinement. The core idea is to
make use of the obtained constraints of the abstraction to infer new bounds for
the neurons. This is achieved by linear programming techniques. With the new
bounds, we iteratively apply DeepPoly, aiming to eliminate spurious regions. We
have implemented our approach in a prototypical tool DeepSRGR. Experimental
results show that a large amount of regions can be identified as spurious, and
as a result, the precision of DeepPoly can be significantly improved. As a side
contribution, we show that our approach can be applied to verify quantitative
robustness properties.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 13:03:15 GMT"
}
] | 1,602,806,400,000 | [
[
"Yang",
"Pengfei",
""
],
[
"Li",
"Renjue",
""
],
[
"Li",
"Jianlin",
""
],
[
"Huang",
"Cheng-Chao",
""
],
[
"Wang",
"Jingyi",
""
],
[
"Sun",
"Jun",
""
],
[
"Xue",
"Bai",
""
],
[
"Zhang",
"Lijun",
""
]
] |
2010.07738 | Vinod Muthusamy | Vinod Muthusamy, Merve Unuvar, Hagen V\"olzer, Justin D. Weisz | Do's and Don'ts for Human and Digital Worker Integration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic process automation (RPA) and its next evolutionary stage, intelligent
process automation, promise to drive improvements in efficiencies and process
outcomes. However, how can business leaders evaluate how to integrate
intelligent automation into business processes? What is an appropriate division
of labor between humans and machines? How should combined human-AI teams be
evaluated? For RPA, often the human labor cost and the robotic labor cost are
directly compared to make an automation decision. In this position paper, we
argue for a broader view that incorporates the potential for multiple levels of
autonomy and human involvement, as well as a wider range of metrics beyond
productivity when integrating digital workers into a business process
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 13:30:23 GMT"
}
] | 1,602,806,400,000 | [
[
"Muthusamy",
"Vinod",
""
],
[
"Unuvar",
"Merve",
""
],
[
"Völzer",
"Hagen",
""
],
[
"Weisz",
"Justin D.",
""
]
] |
2010.07805 | Ziyao Xu | Yang Deng, Ziyao Xu, Li Zhou, Huanping Liu, Anqi Huang | Research on AI Composition Recognition Based on Music Rules | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of artificial intelligent composition has resulted in the
increasing popularity of machine-generated pieces, with frequent copyright
disputes consequently emerging. There is an insufficient amount of research on
the judgement of artificial and machine-generated works; the creation of a
method to identify and distinguish these works is of particular importance.
Starting from the essence of the music, the article constructs a
music-rule-identifying algorithm through extracting modes, which will identify
the stability of the mode of machine-generated music, to judge whether it is
artificially intelligent. The evaluation datasets used are provided by the
Conference on Sound and Music Technology (CSMT). Experimental results
demonstrate the algorithm to have a successful distinguishing ability between
datasets with different source distributions. The algorithm will also provide
some technological reference for the benign development of music copyright
and artificial intelligence music.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2020 14:51:24 GMT"
}
] | 1,602,806,400,000 | [
[
"Deng",
"Yang",
""
],
[
"Xu",
"Ziyao",
""
],
[
"Zhou",
"Li",
""
],
[
"Liu",
"Huanping",
""
],
[
"Huang",
"Anqi",
""
]
] |
2010.08101 | Yilin Shen | Yilin Shen, Wenhu Chen, Hongxia Jin | Modeling Token-level Uncertainty to Learn Unknown Concepts in SLU via
Calibrated Dirichlet Prior RNN | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One major task of spoken language understanding (SLU) in modern personal
assistants is to extract semantic concepts from an utterance, called slot
filling. Although existing slot filling models attempted to improve extracting
new concepts that are not seen in training data, the performance in practice is
still not satisfactory. Recent research collected question and answer annotated
data to learn what is unknown and should be asked, yet not practically scalable
due to the heavy data collection effort. In this paper, we incorporate
softmax-based slot filling neural architectures to model the sequence
uncertainty without question supervision. We design a Dirichlet Prior RNN to
model high-order uncertainty, degenerating into a softmax layer during RNN model
training. To further enhance the uncertainty modeling robustness, we propose a
novel multi-task training to calibrate the Dirichlet concentration parameters.
We collect unseen concepts to create two test datasets from SLU benchmark
datasets Snips and ATIS. On these two and another existing Concept Learning
benchmark datasets, we show that our approach significantly outperforms
state-of-the-art approaches by up to 8.18%. Our method is generic and can be
applied to any RNN or Transformer based slot filling models with a softmax
layer.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2020 02:12:30 GMT"
}
] | 1,603,065,600,000 | [
[
"Shen",
"Yilin",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Jin",
"Hongxia",
""
]
] |
2010.08140 | Farhana Faruqe | Farhana Faruqe, Ryan Watkins, and Larry Medsker | Monitoring Trust in Human-Machine Interactions for Public Sector
Applications | Presented at AAAI FSS-20: Artificial Intelligence in Government and
Public Sector, Washington, DC, USA | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The work reported here addresses the capacity of psychophysiological sensors
and measures using Electroencephalogram (EEG) and Galvanic Skin Response (GSR)
to detect levels of trust for humans using AI-supported Human-Machine
Interaction (HMI). Improvements to the analysis of EEG and GSR data may create
models that perform as well, or better than, traditional tools. A challenge to
analyzing the EEG and GSR data is the large amount of training data required
due to a large number of variables in the measurements. Researchers have
routinely used standard machine-learning classifiers like artificial neural
networks (ANN), support vector machines (SVM), and K-nearest neighbors (KNN).
Traditionally, these have provided few insights into which features of the EEG
and GSR data facilitate the more and least accurate predictions - thus making
it harder to improve the HMI and human-machine trust relationship. A key
ingredient to applying trust-sensor research results to practical situations
and monitoring trust in work environments is the understanding of which key
features are contributing to trust and then reducing the amount of data needed
for practical applications. We used the Local Interpretable Model-agnostic
Explanations (LIME) model as a process to reduce the volume of data required to
monitor and enhance trust in HMI systems - a technology that could be valuable
for governmental and public sector applications. Explainable AI can make HMI
systems transparent and promote trust. From customer service in government
agencies and community-level non-profit public service organizations to
national military and cybersecurity institutions, many public sector
organizations are increasingly concerned to have effective and ethical HMI with
services that are trustworthy, unbiased, and free of unintended negative
consequences.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2020 03:59:28 GMT"
}
] | 1,603,065,600,000 | [
[
"Faruqe",
"Farhana",
""
],
[
"Watkins",
"Ryan",
""
],
[
"Medsker",
"Larry",
""
]
] |
2010.08218 | Sunny Verma | Sunny Verma, Jiwei Wang, Zhefeng Ge, Rujia Shen, Fan Jin, Yang Wang,
Fang Chen, and Wei Liu | Deep-HOSeq: Deep Higher Order Sequence Fusion for Multimodal Sentiment
Analysis | Accepted at ICDM 2020 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodal sentiment analysis utilizes multiple heterogeneous modalities for
sentiment classification. The recent multimodal fusion schemes customize LSTMs
to discover intra-modal dynamics and design sophisticated attention mechanisms
to discover the inter-modal dynamics from multimodal sequences. Although
powerful, these schemes rely entirely on attention mechanisms, which is
problematic due to two major drawbacks: 1) deceptive attention masks, and 2)
training dynamics. Nevertheless, strenuous efforts are required to optimize
hyperparameters of these consolidated architectures, in particular their
custom-designed LSTMs constrained by attention schemes. In this research, we
first propose a common network to discover both intra-modal and inter-modal
dynamics by utilizing basic LSTMs and tensor based convolution networks. We
then propose unique networks to encapsulate temporal-granularity among the
modalities which is essential while extracting information within asynchronous
sequences. We then integrate these two kinds of information via a fusion layer
and call our novel multimodal fusion scheme Deep-HOSeq (Deep network with
higher order Common and Unique Sequence information). The proposed Deep-HOSeq
efficiently discovers all-important information from multimodal sequences and
the effectiveness of utilizing both types of information is empirically
demonstrated on CMU-MOSEI and CMU-MOSI benchmark datasets. The source code of
our proposed Deep-HOSeq is available at
https://github.com/sverma88/Deep-HOSeq--ICDM-2020.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2020 08:02:11 GMT"
}
] | 1,603,065,600,000 | [
[
"Verma",
"Sunny",
""
],
[
"Wang",
"Jiwei",
""
],
[
"Ge",
"Zhefeng",
""
],
[
"Shen",
"Rujia",
""
],
[
"Jin",
"Fan",
""
],
[
"Wang",
"Yang",
""
],
[
"Chen",
"Fang",
""
],
[
"Liu",
"Wei",
""
]
] |
2010.08660 | Manas Gaur | Manas Gaur, Keyur Faldu, Amit Sheth | Semantics of the Black-Box: Can knowledge graphs help make deep learning
systems more interpretable and explainable? | 6 pages + references, 4 figures, Accepted to IEEE internet computing
2020 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recent series of innovations in deep learning (DL) have shown enormous
potential to impact individuals and society, both positively and negatively.
The DL models utilizing massive computing power and enormous datasets have
significantly outperformed prior historical benchmarks on increasingly
difficult, well-defined research tasks across technology domains such as
computer vision, natural language processing, signal processing, and
human-computer interactions. However, the Black-Box nature of DL models and
their over-reliance on massive amounts of data condensed into labels and dense
representations pose challenges for the interpretability and explainability of the
system. Furthermore, DLs have not yet been proven in their ability to
effectively utilize relevant domain knowledge and experience critical to human
understanding. This aspect is missing in early data-focused approaches and
necessitated knowledge-infused learning and other strategies to incorporate
computational knowledge. This article demonstrates how knowledge, provided as a
knowledge graph, is incorporated into DL methods using knowledge-infused
learning, which is one of the strategies. We then discuss how this makes a
fundamental difference in the interpretability and explainability of current
approaches, and illustrate it with examples from natural language processing
for healthcare and education applications.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2020 22:55:23 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Nov 2020 02:28:43 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Nov 2020 15:52:55 GMT"
},
{
"version": "v4",
"created": "Fri, 11 Dec 2020 23:03:11 GMT"
}
] | 1,607,990,400,000 | [
[
"Gaur",
"Manas",
""
],
[
"Faldu",
"Keyur",
""
],
[
"Sheth",
"Amit",
""
]
] |
2010.08869 | Nishanth Kumar | Michael Fishman, Nishanth Kumar, Cameron Allen, Natasha Danas, Michael
Littman, Stefanie Tellex, George Konidaris | Task Scoping: Generating Task-Specific Abstractions for Planning in
Open-Scope Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A general-purpose planning agent requires an open-scope world model: one rich
enough to tackle any of the wide range of tasks it may be asked to solve over
its operational lifetime. This stands in contrast with typical planning
approaches, where the scope of a model is limited to a specific family of tasks
that share significant structure. Unfortunately, planning to solve any specific
task using an open-scope model is computationally intractable - even for
state-of-the-art methods - due to the many states and actions that are
necessarily present in the model but irrelevant to that problem. We propose
task scoping: a method that exploits knowledge of the initial state, goal
conditions, and transition system to automatically and efficiently remove
provably irrelevant variables and actions from a planning problem. Our approach
leverages causal link analysis and backwards reachability over state variables
(rather than states) along with operator merging (when effects on relevant
variables are identical). Using task scoping as a pre-planning step can shrink
the search space by orders of magnitude and dramatically decrease planning
time. We empirically demonstrate that these improvements occur across a variety
of open-scope domains, including Minecraft, where our approach leads to a 75x
reduction in search time with a state-of-the-art numeric planner, even after
including the time required for task scoping itself.
| [
{
"version": "v1",
"created": "Sat, 17 Oct 2020 21:19:25 GMT"
},
{
"version": "v2",
"created": "Tue, 11 May 2021 02:44:38 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Feb 2023 23:45:11 GMT"
}
] | 1,675,728,000,000 | [
[
"Fishman",
"Michael",
""
],
[
"Kumar",
"Nishanth",
""
],
[
"Allen",
"Cameron",
""
],
[
"Danas",
"Natasha",
""
],
[
"Littman",
"Michael",
""
],
[
"Tellex",
"Stefanie",
""
],
[
"Konidaris",
"George",
""
]
] |
2010.09101 | David Mumford | David Mumford | The Convergence of AI code and Cortical Functioning -- a Commentary | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural nets, one of the oldest architectures for AI programming, are loosely
based on biological neurons and their properties. Recent work on language
applications has made the AI code closer to biological reality in several ways.
This commentary examines this convergence and, in light of what is known of
neocortical structure, addresses the question of whether ``general AI'' looks
attainable with these tools.
| [
{
"version": "v1",
"created": "Sun, 18 Oct 2020 20:50:45 GMT"
}
] | 1,603,152,000,000 | [
[
"Mumford",
"David",
""
]
] |
2010.09387 | Davide Corsi | Davide Corsi, Enrico Marchesini, Alessandro Farinelli | Evaluating the Safety of Deep Reinforcement Learning Models using
Semi-Formal Verification | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Groundbreaking successes have been achieved by Deep Reinforcement Learning
(DRL) in solving practical decision-making problems. Robotics, in particular,
can involve high-cost hardware and human interactions. Hence, scrupulous
evaluations of trained models are required to avoid unsafe behaviours in the
operational environment. However, designing metrics to measure the safety of a
neural network is an open problem, since standard evaluation parameters (e.g.,
total reward) are not informative enough. In this paper, we present a
semi-formal verification approach for decision-making tasks, based on interval
analysis, that addresses the computational demands of previous verification
frameworks and designs metrics to measure the safety of the models. Our method
obtains comparable results over standard benchmarks with respect to formal
verifiers, while drastically reducing the computation time. Moreover, our
approach allows us to efficiently evaluate safety properties for decision-making
models in practical applications such as mapless navigation for mobile robots
and trajectory generation for manipulators.
| [
{
"version": "v1",
"created": "Mon, 19 Oct 2020 11:18:06 GMT"
}
] | 1,603,152,000,000 | [
[
"Corsi",
"Davide",
""
],
[
"Marchesini",
"Enrico",
""
],
[
"Farinelli",
"Alessandro",
""
]
] |
2010.11719 | An Nguyen | An Nguyen, Wenyu Zhang, Leo Schwinn, and Bjoern Eskofier | Conformance Checking for a Medical Training Process Using Petri net
Simulation and Sequence Alignment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process Mining has recently gained popularity in healthcare due to its
potential to provide a transparent, objective and data-based view on processes.
Conformance checking is a sub-discipline of process mining that has the
potential to answer how the actual process executions deviate from existing
guidelines. In this work, we analyze a medical training process for a surgical
procedure. Ten students were trained to install a Central Venous Catheter
(CVC) with ultrasound. Event log data was collected directly after instruction
by the supervisors during a first test run and additionally after a subsequent
individual training phase. In order to provide objective performance measures,
we formulate an optimal, global sequence alignment problem inspired by
approaches in bioinformatics. To this end, we use the Petri net model
representation of the medical process guideline to simulate a representative
set of guideline conform sequences. Next, we calculate the optimal, global
sequence alignment of the recorded and simulated event logs. Finally, the
output measures and visualization of aligned sequences are provided for
objective feedback.
| [
{
"version": "v1",
"created": "Wed, 21 Oct 2020 16:29:09 GMT"
}
] | 1,603,411,200,000 | [
[
"Nguyen",
"An",
""
],
[
"Zhang",
"Wenyu",
""
],
[
"Schwinn",
"Leo",
""
],
[
"Eskofier",
"Bjoern",
""
]
] |
2010.11720 | Betania Campello Ms. | Betania S. C. Campello, Leonardo T. Duarte, Jo\~ao M. T. Romano | A study of the Multicriteria decision analysis based on the time-series
features and a TOPSIS method proposal for a tensorial approach | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of Multiple Criteria Decision Analysis (MCDA) methods have been
developed to rank alternatives based on several decision criteria. Usually,
MCDA methods deal with the criteria value at the time the decision is made
without considering their evolution over time. However, it may be relevant to
consider the criteria's time series, since they provide essential information for
decision-making (e.g., an improvement of the criteria). To deal with this
issue, we propose a new approach to rank the alternatives based on the criteria
time-series features (tendency, variance, etc.). In this novel approach, the
data is structured in three dimensions, which requires a more complex data
structure, such as \textit{tensors}, instead of the classical matrix
representation used in MCDA. Consequently, we propose an extension for the
TOPSIS method to handle a tensor rather than a matrix. Computational results
reveal that it is possible to rank the alternatives from a new perspective by
considering meaningful decision-making information.
| [
{
"version": "v1",
"created": "Wed, 21 Oct 2020 14:37:02 GMT"
}
] | 1,603,411,200,000 | [
[
"Campello",
"Betania S. C.",
""
],
[
"Duarte",
"Leonardo T.",
""
],
[
"Romano",
"João M. T.",
""
]
] |
2010.12069 | Duncan McElfresh | Duncan C McElfresh, Michael Curry, Tuomas Sandholm, John P Dickerson | Improving Policy-Constrained Kidney Exchange via Pre-Screening | Appears at NeurIPS 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In barter exchanges, participants swap goods with one another without
exchanging money; exchanges are often facilitated by a central clearinghouse,
with the goal of maximizing the aggregate quality (or number) of swaps. Barter
exchanges are subject to many forms of uncertainty--in participant preferences,
the feasibility and quality of various swaps, and so on. Our work is motivated
by kidney exchange, a real-world barter market in which patients in need of a
kidney transplant swap their willing living donors, in order to find a better
match. Modern exchanges include 2- and 3-way swaps, making the kidney exchange
clearing problem NP-hard. Planned transplants often fail for a variety of
reasons--if the donor organ is refused by the recipient's medical team, or if
the donor and recipient are found to be medically incompatible. Due to 2- and
3-way swaps, failed transplants can "cascade" through an exchange; one US-based
exchange estimated that about 85% of planned transplants failed in 2019. Many
optimization-based approaches have been designed to avoid these failures;
however most exchanges cannot implement these methods due to legal and policy
constraints. Instead we consider a setting where exchanges can query the
preferences of certain donors and recipients--asking whether they would accept
a particular transplant. We characterize this as a two-stage decision problem,
in which the exchange program (a) queries a small number of transplants before
committing to a matching, and (b) constructs a matching according to fixed
policy. We show that selecting these edges is a challenging combinatorial
problem, which is non-monotonic and non-submodular, in addition to being
NP-hard. We propose both a greedy heuristic and a Monte Carlo tree search,
which outperforms previous approaches, using experiments on both synthetic data
and real kidney exchange data from the United Network for Organ Sharing.
| [
{
"version": "v1",
"created": "Thu, 22 Oct 2020 21:07:36 GMT"
}
] | 1,603,670,400,000 | [
[
"McElfresh",
"Duncan C",
""
],
[
"Curry",
"Michael",
""
],
[
"Sandholm",
"Tuomas",
""
],
[
"Dickerson",
"John P",
""
]
] |
2010.12290 | Hanshuang Tong | Zhen Wang, Ben Teng, Yun Zhou, Hanshuang Tong and Guangtong Liu | Exploring Common and Individual Characteristics of Students via Matrix
Recovering | 8 pages, 9 figures, Submitted to AAAI 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Balancing group teaching and individual mentoring is an important issue in
education area. The nature behind this issue is to explore common
characteristics shared by multiple students and individual characteristics for
each student. Biclustering methods have been proved successful for detecting
meaningful patterns with the goal of driving group instructions based on
students' characteristics. However, these methods ignore the individual
characteristics of students as they only focus on common characteristics of
students. In this article, we propose a framework to detect both group
characteristics and individual characteristics of students simultaneously. We
assume that the students' characteristics matrix is composed of two parts:
one is a low-rank matrix representing the common characteristics of students;
the other is a sparse matrix representing individual characteristics of
students. Thus, we treat the balancing issue as a matrix recovering problem.
The experiment results show the effectiveness of our method. Firstly, it can
detect meaningful biclusters that are comparable with the state-of-the-art
biclustering algorithms. Secondly, it can identify individual characteristics
for each student simultaneously. Both the source code of our algorithm and the
real datasets are available upon request.
| [
{
"version": "v1",
"created": "Fri, 23 Oct 2020 10:42:17 GMT"
}
] | 1,603,670,400,000 | [
[
"Wang",
"Zhen",
""
],
[
"Teng",
"Ben",
""
],
[
"Zhou",
"Yun",
""
],
[
"Tong",
"Hanshuang",
""
],
[
"Liu",
"Guangtong",
""
]
] |
2010.13033 | Tin Lai | Tin Lai, Philippe Morere | Robust Hierarchical Planning with Policy Delegation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel framework and algorithm for hierarchical planning based on
the principle of delegation. This framework, the Markov Intent Process,
features a collection of skills which are each specialised to perform a single
task well. Skills are aware of their intended effects and are able to analyse
planning goals to delegate planning to the best-suited skill. This principle
dynamically creates a hierarchy of plans, in which each skill plans for
sub-goals for which it is specialised. The proposed planning method features
on-demand execution---skill policies are only evaluated when needed. Plans are
only generated at the highest level, then expanded and optimised when the
latest state information is available. The high-level plan retains the initial
planning intent and previously computed skills, effectively reducing the
computation needed to adapt to environmental changes. We show this planning
approach is experimentally highly competitive with classic planning and
reinforcement learning techniques on a variety of domains, both in terms of
solution length and planning time.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2020 04:36:20 GMT"
}
] | 1,603,756,800,000 | [
[
"Lai",
"Tin",
""
],
[
"Morere",
"Philippe",
""
]
] |
2010.13121 | Arthur Bit-Monnot | Arthur Bit-Monnot, Malik Ghallab, F\'elix Ingrand and David E. Smith | FAPE: a Constraint-based Planner for Generative and Hierarchical
Temporal Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal planning offers numerous advantages when based on an expressive
representation. Timelines have been known to provide the required
expressiveness but at the cost of search efficiency. We propose here a temporal
planner, called FAPE, which supports many of the expressive temporal features
of the ANML modeling language without losing efficiency.
FAPE's representation coherently integrates flexible timelines with
hierarchical refinement methods that can provide efficient control knowledge. A
novel reachability analysis technique is proposed and used to develop causal
networks to constrain the search space. It is employed for the design of
informed heuristics, inference methods and efficient search strategies.
Experimental results on common benchmarks in the field allow us to assess the
components and search strategies of FAPE, and to compare it to IPC planners.
The results show the proposed approach to be competitive with less expressive
planners and often superior when hierarchical control knowledge is provided.
FAPE, a freely available system, provides other features, not covered here,
such as the integration of planning with acting, and the handling of sensing
actions in partially observable environments.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2020 13:46:34 GMT"
}
] | 1,603,756,800,000 | [
[
"Bit-Monnot",
"Arthur",
""
],
[
"Ghallab",
"Malik",
""
],
[
"Ingrand",
"Félix",
""
],
[
"Smith",
"David E.",
""
]
] |
2010.13130 | Jingsong Wang | Jingsong Wang, Tom Ko, Zhen Xu, Xiawei Guo, Souxiang Liu, Wei-Wei Tu,
Lei Xie | AutoSpeech 2020: The Second Automated Machine Learning Challenge for
Speech Classification | 5 pages, 2 figures, Details about AutoSpeech 2020 Challenge | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The AutoSpeech challenge calls for automated machine learning (AutoML)
solutions to automate the process of applying machine learning to speech
processing tasks. These tasks, which cover a large variety of domains, will be
shown to the automated system in a random order. Each time the tasks are
switched, the information of the new task will be hinted at with its
corresponding
routine which adapts the system to the new task. Compared to the first edition,
the 2020 edition includes advances of 1) more speech tasks, 2) noisier data in
each task, 3) a modified evaluation metric. This paper outlines the challenge
and describe the competition protocol, datasets, evaluation metric, starting
kit, and baseline systems.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2020 15:01:41 GMT"
}
] | 1,603,756,800,000 | [
[
"Wang",
"Jingsong",
""
],
[
"Ko",
"Tom",
""
],
[
"Xu",
"Zhen",
""
],
[
"Guo",
"Xiawei",
""
],
[
"Liu",
"Souxiang",
""
],
[
"Tu",
"Wei-Wei",
""
],
[
"Xie",
"Lei",
""
]
] |
2010.13266 | Ramya Srinivasan | Ramya Srinivasan, Kanji Uchino | Biases in Generative Art -- A Causal Look from the Lens of Art History | ACM FAccT March 3--10, 2021, Virtual Event, Canada | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With rapid progress in artificial intelligence (AI), popularity of generative
art has grown substantially. From creating paintings to generating novel art
styles, AI based generative art has showcased a variety of applications.
However, there has been little focus on the ethical impacts of AI-based
generative art. In this work, we investigate biases in the generative art AI
pipeline right from those that can originate due to improper problem
formulation to those related to algorithm design. Viewing from the lens of art
history, we discuss the socio-cultural impacts of these biases. Leveraging
causal models, we highlight how current methods fall short in modeling the
process of art creation and thus contribute to various types of biases. We
illustrate the same through case studies, in particular those related to style
transfer. To the best of our knowledge, this is the first extensive analysis
that investigates biases in the generative art AI pipeline from the perspective
of art history. We hope our work sparks interdisciplinary discussions related
to accountability of generative art.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2020 00:49:09 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2021 19:01:11 GMT"
}
] | 1,613,606,400,000 | [
[
"Srinivasan",
"Ramya",
""
],
[
"Uchino",
"Kanji",
""
]
] |