id (stringlengths 9-10) | submitter (stringlengths 5-47, ⌀) | authors (stringlengths 5-1.72k) | title (stringlengths 11-234) | comments (stringlengths 1-491, ⌀) | journal-ref (stringlengths 4-396, ⌀) | doi (stringlengths 13-97, ⌀) | report-no (stringlengths 4-138, ⌀) | categories (stringclasses, 1 value) | license (stringclasses, 9 values) | abstract (stringlengths 29-3.66k) | versions (listlengths 1-21) | update_date (int64, 1,180B-1,718B) | authors_parsed (sequencelengths 1-98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2105.01454 | Stefanie Rinderle-Ma | Florian Stertz and Juergen Mangler and Stefanie Rinderle-Ma | The Role of Time and Data: Online Conformance Checking in the
Manufacturing Domain | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process mining has matured as an analysis instrument for process-oriented data
in recent years. Manufacturing is a challenging domain that craves
process-oriented technologies to address digitalization challenges. We found
that process mining creates high expectations, but its implementation and usage
by manufacturing experts such as process supervisors and shopfloor workers
remain unclear to a certain extent. Reason (1) is that even though
manufacturing allows for well-structured processes, the actual workflow is
rarely captured in a process model. Even if a model is available, software
for orchestrating and logging the execution is often missing. Reason (2) refers
to the work reality in manufacturing: a process instance is started by a
shopfloor worker who then turns to work on other things. Hence continuous
monitoring of the process instances does not happen, i.e., process monitoring
is merely a secondary task, and the shopfloor worker can only react to
problems/errors that have already occurred. (1) and (2) motivate the goals of
this study that is driven by Technical Action Research (TAR). Based on the
experimental artifact TIDATE -- a lightweight process execution and mining
framework -- it is studied how the correct execution of process instances can
be ensured and how a data set suitable for process mining can be generated at
run time in a real-world setting. Secondly, it is investigated whether and how
process mining supports domain experts during process monitoring as a secondary
task. The findings emphasize the importance of online conformance checking in
manufacturing and show how appropriate data sets can be identified and
generated.
| [
{
"version": "v1",
"created": "Tue, 4 May 2021 12:23:35 GMT"
}
] | 1,620,172,800,000 | [
[
"Stertz",
"Florian",
""
],
[
"Mangler",
"Juergen",
""
],
[
"Rinderle-Ma",
"Stefanie",
""
]
] |
2105.01929 | Jo\v{z}e Ro\v{z}anec | Jo\v{z}e M. Ro\v{z}anec, Patrik Zajec, Klemen Kenda, Inna Novalija,
Bla\v{z} Fortuna, Dunja Mladeni\'c | XAI-KG: knowledge graph to support XAI and decision-making in
manufacturing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing adoption of artificial intelligence requires accurate
forecasts and means to understand the reasoning of artificial intelligence
models behind such a forecast. Explainable Artificial Intelligence (XAI) aims
to provide cues for why a model issued a certain prediction. Such cues are of
utmost importance to decision-making since they provide insights on the
features that most influenced certain forecasts and let the user decide if the
forecast can be trusted. Though many techniques were developed to explain
black-box models, little research was done on assessing the quality of those
explanations and their influence on decision-making. We propose an ontology and
knowledge graph to support collecting feedback regarding forecasts, forecast
explanations, recommended decision-making options, and user actions. This way,
we provide means to improve forecasting models, explanations, and
recommendations of decision-making options. We tailor the knowledge graph for
the domain of demand forecasting and validate it on real-world data.
| [
{
"version": "v1",
"created": "Wed, 5 May 2021 08:42:07 GMT"
},
{
"version": "v2",
"created": "Thu, 6 May 2021 03:41:32 GMT"
}
] | 1,620,345,600,000 | [
[
"Rožanec",
"Jože M.",
""
],
[
"Zajec",
"Patrik",
""
],
[
"Kenda",
"Klemen",
""
],
[
"Novalija",
"Inna",
""
],
[
"Fortuna",
"Blaž",
""
],
[
"Mladenić",
"Dunja",
""
]
] |
2105.02198 | Tyler Millhouse | Tyler Millhouse, Melanie Moses, Melanie Mitchell | Foundations of Intelligence in Natural and Artificial Systems: A
Workshop Report | 30 pages, 0 figures, workshop report | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In March of 2021, the Santa Fe Institute hosted a workshop as part of its
Foundations of Intelligence in Natural and Artificial Systems project. This
project seeks to advance the field of artificial intelligence by promoting
interdisciplinary research on the nature of intelligence. During the workshop,
speakers from diverse disciplines gathered to develop a taxonomy of
intelligence, articulating their own understanding of intelligence and how
their research has furthered that understanding. In this report, we summarize
the insights offered by each speaker and identify the themes that emerged
during the talks and subsequent discussions.
| [
{
"version": "v1",
"created": "Wed, 5 May 2021 17:11:58 GMT"
}
] | 1,620,259,200,000 | [
[
"Millhouse",
"Tyler",
""
],
[
"Moses",
"Melanie",
""
],
[
"Mitchell",
"Melanie",
""
]
] |
2105.02331 | Wei Guo | Wei Guo, Marc Brittain, Peng Wei | Safety Enhancement for Deep Reinforcement Learning in Autonomous
Separation Assurance | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The separation assurance task will be extremely challenging for air traffic
controllers in a complex and high density airspace environment. Deep
reinforcement learning (DRL) was used to develop an autonomous separation
assurance framework in our previous work where the learned model advised speed
maneuvers. In order to improve the safety of this model in unseen environments
with uncertainties, in this work we propose a safety module for DRL in
autonomous separation assurance applications. The proposed module directly
addresses both model uncertainty and state uncertainty to improve safety. Our
safety module consists of two sub-modules: (1) the state safety sub-module is
based on the execution-time data augmentation method to introduce state
disturbances in the model input state; (2) the model safety sub-module is a
Monte-Carlo dropout extension that learns the posterior distribution of the DRL
model policy. We demonstrate the effectiveness of the two sub-modules in an
open-source air traffic simulator with challenging environment settings.
Through extensive numerical experiments, our results show that the proposed
safety sub-modules help the DRL agent significantly improve its safety
performance in an autonomous separation assurance task.
| [
{
"version": "v1",
"created": "Wed, 5 May 2021 21:20:40 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jul 2021 19:42:45 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Feb 2022 00:17:47 GMT"
}
] | 1,645,488,000,000 | [
[
"Guo",
"Wei",
""
],
[
"Brittain",
"Marc",
""
],
[
"Wei",
"Peng",
""
]
] |
2105.02658 | Tatsuya Sakai | Tatsuya Sakai and Takayuki Nagai | Explainable Autonomous Robots: A Survey and Perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced communication protocols are critical to enable the coexistence of
autonomous robots with humans. Thus, the development of explanatory
capabilities is an urgent first step toward autonomous robots. This survey
provides an overview of the various types of "explainability" discussed in
machine learning research. Then, we discuss the definition of "explainability"
in the context of autonomous robots (i.e., explainable autonomous robots) by
exploring the question "what is an explanation?" We further conduct a research
survey based on this definition and present some relevant topics for future
research.
| [
{
"version": "v1",
"created": "Thu, 6 May 2021 13:38:02 GMT"
}
] | 1,620,345,600,000 | [
[
"Sakai",
"Tatsuya",
""
],
[
"Nagai",
"Takayuki",
""
]
] |
2105.02670 | Tatsuya Sakai | Tatsuya Sakai, Kazuki Miyazawa, Takato Horii and Takayuki Nagai | A Framework of Explanation Generation toward Reliable Autonomous Robots | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To realize autonomous collaborative robots, it is important to increase the
trust that users have in them. Toward this goal, this paper proposes an
algorithm which endows an autonomous agent with the ability to explain the
transition from the current state to the target state in a Markov decision
process (MDP). According to cognitive science, to generate an explanation that
is acceptable to humans, it is important to present the minimum information
necessary to sufficiently understand an event. To meet this requirement, this
study proposes a framework for identifying important elements in the
decision-making process using a prediction model for the world and generating
explanations based on these elements. To verify the ability of the proposed
method to generate explanations, we conducted an experiment using a grid
environment. It was inferred from the result of a simulation experiment that
the explanation generated using the proposed method was composed of the minimum
elements important for understanding the transition from the current state to
the target state. Furthermore, subject experiments showed that the generated
explanation was a good summary of the process of state transition, and that a
high evaluation was obtained for the explanation of the reason for an action.
| [
{
"version": "v1",
"created": "Thu, 6 May 2021 13:50:37 GMT"
}
] | 1,620,345,600,000 | [
[
"Sakai",
"Tatsuya",
""
],
[
"Miyazawa",
"Kazuki",
""
],
[
"Horii",
"Takato",
""
],
[
"Nagai",
"Takayuki",
""
]
] |
2105.02685 | Pierre Colombo | Pierre Colombo and Chloe Clavel and Pablo Piantanida | A Novel Estimator of Mutual Information for Learning to Disentangle
Textual Representations | null | ACL 2021 | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Learning disentangled representations of textual data is essential for many
natural language tasks such as fair classification, style transfer and sentence
generation, among others. The dominant existing approaches for text data either
rely on training an adversary (discriminator) that aims at making attribute
values difficult to infer from the latent code, or rely on minimising
variational bounds of the mutual information between the latent code and the
attribute value. However, the available methods cannot provide fine-grained
control of the degree (or force) of disentanglement. Adversarial methods are
remarkably simple, but although the adversary seems to perform perfectly well
during the training phase, once training is complete a fair amount of
information about the undesired attribute still remains. This paper introduces
a novel variational upper bound on the mutual information between an attribute
and the latent code of an encoder. Our bound aims at controlling the
approximation error via Renyi's divergence, leading to both better disentangled
representations and, in particular, more precise control of the desired degree
of disentanglement than state-of-the-art methods proposed for textual data.
Furthermore, it does not
suffer from the degeneracy of other losses in multi-class scenarios. We show
the superiority of this method on fair classification and on textual style
transfer tasks. Additionally, we provide new insights illustrating various
trade-offs in style transfer between learning disentangled representations
and the quality of the generated sentences.
| [
{
"version": "v1",
"created": "Thu, 6 May 2021 14:05:06 GMT"
}
] | 1,620,345,600,000 | [
[
"Colombo",
"Pierre",
""
],
[
"Clavel",
"Chloe",
""
],
[
"Piantanida",
"Pablo",
""
]
] |
2105.02741 | Zhiyuan Wu | Zizhen Zhang, Zhiyuan Wu, Hang Zhang, Jiahai Wang | Meta-Learning-Based Deep Reinforcement Learning for Multiobjective
Optimization Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep reinforcement learning (DRL) has recently shown its success in tackling
complex combinatorial optimization problems. When these problems are extended
to multiobjective ones, it becomes difficult for the existing DRL approaches to
flexibly and efficiently deal with multiple subproblems determined by weight
decomposition of objectives. This paper proposes a concise meta-learning-based
DRL approach. It first trains a meta-model by meta-learning. The meta-model is
fine-tuned with a few update steps to derive submodels for the corresponding
subproblems. The Pareto front is then built accordingly. Compared with other
learning-based methods, our method can greatly shorten the training time of
multiple submodels. Due to the rapid and excellent adaptability of the
meta-model, more submodels can be derived so as to increase the quality and
diversity of the found solutions. The computational experiments on
multiobjective traveling salesman problems and multiobjective vehicle routing
problem with time windows demonstrate the superiority of our method over most
learning-based and iteration-based approaches.
| [
{
"version": "v1",
"created": "Thu, 6 May 2021 15:09:35 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Feb 2022 09:36:50 GMT"
}
] | 1,644,883,200,000 | [
[
"Zhang",
"Zizhen",
""
],
[
"Wu",
"Zhiyuan",
""
],
[
"Zhang",
"Hang",
""
],
[
"Wang",
"Jiahai",
""
]
] |
2105.02851 | Colin Shea-Blymyer | Colin Shea-Blymyer and Houssam Abbas | Algorithmic Ethics: Formalization and Verification of Autonomous Vehicle
Obligations | To be published in ACM Transactions on Cyber-Physical Systems Special
Issue on Artificial Intelligence and Cyber-Physical Systems. arXiv admin
note: text overlap with arXiv:2009.00738 | null | 10.1145/3460975 | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We develop a formal framework for automatic reasoning about the obligations
of autonomous cyber-physical systems, including their social and ethical
obligations. Obligations, permissions and prohibitions are distinct from a
system's mission, and are a necessary part of specifying advanced, adaptive
AI-equipped systems. They need a dedicated deontic logic of obligations to
formalize them. Most existing deontic logics lack corresponding algorithms and
system models that permit automatic verification. We demonstrate how a
particular deontic logic, Dominance Act Utilitarianism (DAU), is a suitable
starting point for formalizing the obligations of autonomous systems like
self-driving cars. We demonstrate its usefulness by formalizing a subset of
Responsibility-Sensitive Safety (RSS) in DAU; RSS is an industrial proposal for
how self-driving cars should and should not behave in traffic. We show that
certain logical consequences of RSS are undesirable, indicating a need to
further refine the proposal. We also demonstrate how obligations can change
over time, which is necessary for long-term autonomy. We then demonstrate a
model-checking algorithm for DAU formulas on weighted transition systems, and
illustrate it by model-checking obligations of a self-driving car controller
from the literature.
| [
{
"version": "v1",
"created": "Thu, 6 May 2021 17:41:06 GMT"
}
] | 1,620,345,600,000 | [
[
"Shea-Blymyer",
"Colin",
""
],
[
"Abbas",
"Houssam",
""
]
] |
2105.03192 | Gauthier Chassang | Gauthier Chassang (INSERM,PFGS), Mogens Thomsen (INSERM), Pierre
Rumeau, Florence S\`edes (IRIT), Alejandra Delfin (INSERM) | An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a comprehensive analysis of existing concepts coming from
different disciplines tackling the notion of intelligence, namely psychology
and engineering, and from disciplines aiming to regulate AI innovations, namely
AI ethics and law. The aim is to identify shared notions or discrepancies to
consider for qualifying AI systems. Relevant concepts are integrated into a
matrix intended to help define more precisely when and how computing tools
(programs or devices) may be qualified as AI while highlighting critical
features to serve a specific technical, ethical and legal assessment of
challenges in AI development. Some adaptations of existing notions of AI
characteristics are proposed. The matrix is a risk-based conceptual model
designed to allow an empirical, flexible and scalable qualification of AI
technologies in the perspective of benefit-risk assessment practices,
technological monitoring and regulatory compliance: it offers a structured
reflection tool for stakeholders in AI development who are engaged in
responsible research and innovation. Pre-print version (completed in May 2020).
| [
{
"version": "v1",
"created": "Fri, 7 May 2021 12:01:31 GMT"
}
] | 1,620,604,800,000 | [
[
"Chassang",
"Gauthier",
"",
"INSERM,PFGS"
],
[
"Thomsen",
"Mogens",
"",
"INSERM"
],
[
"Rumeau",
"Pierre",
"",
"IRIT"
],
[
"Sèdes",
"Florence",
"",
"IRIT"
],
[
"Delfin",
"Alejandra",
"",
"INSERM"
]
] |
2105.03414 | Niranj Jyothish | Ajay Krishnan, Niranj Jyothish, Xun Jia | Using reinforcement learning to design an AI assistant for a satisfying
co-op experience | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this project, we designed an intelligent assistant player for the
single-player game Space Invaders with the aim to provide a satisfying co-op
experience. The agent behaviour was designed using reinforcement learning
techniques and evaluated based on several criteria. We validate the hypothesis
that an AI-driven computer player can provide a satisfying co-op experience.
| [
{
"version": "v1",
"created": "Fri, 7 May 2021 17:44:02 GMT"
}
] | 1,620,604,800,000 | [
[
"Krishnan",
"Ajay",
""
],
[
"Jyothish",
"Niranj",
""
],
[
"Jia",
"Xun",
""
]
] |
2105.03540 | Tianyu Liu | Lingyu Zhang and Tianyu Liu and Yunhai Wang | An Intelligent Model for Solving Manpower Scheduling Problems | none | BDAI 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The manpower scheduling problem is a critical research field in the resource
management area. Based on the existing studies on scheduling problem solutions,
this paper transforms the manpower scheduling problem into a combinatorial
optimization problem under multi-constraint conditions from a new perspective.
It also uses logical paradigms to build a mathematical model for problem
solution and an improved multi-dimensional evolution algorithm for solving the
model. Moreover, the constraints discussed in this paper basically cover all
the requirements of human resource coordination in modern society and are
supported by our experiment results. In the discussion part, we compare our
model with other heuristic algorithms or linear programming methods and prove
that the model proposed in this paper achieves up to a 25.7% increase in
efficiency and a 17% increase in accuracy. In addition to the numerical solution of
the manpower scheduling problem, this paper also studies the algorithm for
scheduling task list generation and the method of displaying scheduling
results. As a result, we not only provide various modifications for the basic
algorithm to solve problems under different conditions but also propose a new
algorithm that improves time efficiency by at least 28.91% compared with
different baseline models.
| [
{
"version": "v1",
"created": "Fri, 7 May 2021 23:51:12 GMT"
}
] | 1,620,777,600,000 | [
[
"Zhang",
"Lingyu",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Wang",
"Yunhai",
""
]
] |
2105.04088 | Zan Wang | Hanqing Wang, Zan Wang, Wei Liang, Lap-Fai Yu | PEARL: Parallelized Expert-Assisted Reinforcement Learning for Scene
Rearrangement Planning | 7 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene Rearrangement Planning (SRP) is an interior task proposed recently. The
previous work defines the action space of this task with handcrafted
coarse-grained actions that are inflexible for transforming scene
arrangements and intractable to deploy in practice. Additionally, this new
task lacks realistic indoor scene rearrangement data to feed popular
data-hungry learning approaches and meet the needs of quantitative evaluation.
To address these problems, we propose a fine-grained action definition for SRP
and introduce a large-scale scene rearrangement dataset. We also propose a
novel learning paradigm to efficiently train an agent through self-playing,
without any prior knowledge. The agent trained via our paradigm achieves
superior performance on the introduced dataset compared to the baseline agents.
We provide a detailed analysis of the design of our approach in our
experiments.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 03:27:16 GMT"
}
] | 1,620,691,200,000 | [
[
"Wang",
"Hanqing",
""
],
[
"Wang",
"Zan",
""
],
[
"Liang",
"Wei",
""
],
[
"Yu",
"Lap-Fai",
""
]
] |
2105.04120 | Pranshu Malviya | Yash Pratyush Sinha, Pranshu Malviya, Rupaj Kumar Nayak | Fast constraint satisfaction problem and learning-based algorithm for
solving Minesweeper | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minesweeper is a popular spatial-based decision-making game that works with
incomplete information. As an exemplary NP-complete problem, it is a major area
of research employing various artificial intelligence paradigms. The present
work models this game as Constraint Satisfaction Problem (CSP) and Markov
Decision Process (MDP). We propose a new method named dependents from the
independent set using deterministic solution search (DSScsp) for the faster
enumeration of all solutions of a CSP based Minesweeper game and improve the
results by introducing heuristics. Using MDP, we implement machine learning
methods on these heuristics. We train the classification model on sparse data
with results from CSP formulation. We also propose a new rewarding method for
applying a modified deep Q-learning for better accuracy and versatile learning
in the Minesweeper game. The overall results have been analyzed for different
kinds of Minesweeper games and their accuracies have been recorded. Results
from these experiments show that the proposed MDP-based classification model
and deep Q-learning are overall the best methods in terms of accuracy for games
with the given mine densities.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 05:27:15 GMT"
}
] | 1,620,691,200,000 | [
[
"Sinha",
"Yash Pratyush",
""
],
[
"Malviya",
"Pranshu",
""
],
[
"Nayak",
"Rupaj Kumar",
""
]
] |
2105.04158 | Alessandro Antonucci | Rafael Caba\~nas and Alessandro Antonucci | CREPO: An Open Repository to Benchmark Credal Network Algorithms | Isipta 2021 (Version with Supplementary Material) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Credal networks are a popular class of imprecise probabilistic graphical
models obtained as a Bayesian network generalization based on, so-called
credal, sets of probability mass functions. A Java library called CREMA has
been recently released to model, process and query credal networks. Despite the
NP-hardness of the (exact) task, a number of algorithms are available to
approximate credal network inferences. In this paper we present CREPO, an open
repository of synthetic credal networks, provided together with the exact
results of inference tasks on these models. A Python tool is also delivered to
load these data and interact with CREMA, thus making it extremely easy to
evaluate and compare existing and novel inference algorithms. To demonstrate
such a benchmarking scheme, we propose an approximate heuristic to be used inside
variable elimination schemes to keep a bound on the maximum number of vertices
generated during the combination step. A CREPO-based validation against
approximate procedures based on linearization and exact techniques performed in
CREMA is finally discussed.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 07:31:59 GMT"
}
] | 1,620,691,200,000 | [
[
"Cabañas",
"Rafael",
""
],
[
"Antonucci",
"Alessandro",
""
]
] |
2105.04250 | Dominik Drexler | Dominik Drexler and Jendrik Seipp and Hector Geffner | Expressing and Exploiting the Common Subgoal Structure of Classical
Planning Domains Using Sketches: Extended Version | This work will appear in the Proceedings of the 18th International
Conference on Principles of Knowledge Representation and Reasoning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Width-based planning methods deal with conjunctive goals by decomposing
problems into subproblems of low width. Algorithms like SIW thus fail when the
goal is not easily serializable in this way or when some of the subproblems
have a high width. In this work, we address these limitations by using a simple
but powerful language for expressing finer problem decompositions introduced
recently by Bonet and Geffner, called policy sketches. A policy sketch over a
set of Boolean and numerical features is a set of sketch rules that express how
the values of these features are supposed to change. Like general policies,
policy sketches are domain general, but unlike policies, the changes captured
by sketch rules do not need to be achieved in a single step. We show that many
planning domains that cannot be solved by SIW are provably solvable in low
polynomial time with the SIW_R algorithm, the version of SIW that employs
user-provided policy sketches. Policy sketches are thus shown to be a powerful
language for expressing domain-specific knowledge in a simple and compact way
and a convenient alternative to languages such as HTNs or temporal logics.
Furthermore, they make it easy to express general problem decompositions and
prove key properties of them like their width and complexity.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 10:36:18 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 09:57:29 GMT"
}
] | 1,625,788,800,000 | [
[
"Drexler",
"Dominik",
""
],
[
"Seipp",
"Jendrik",
""
],
[
"Geffner",
"Hector",
""
]
] |
2105.04342 | Michael Green | Michael Cerny Green, Victoria Yen, Sam Earle, Dipika Rajesh, Maria
Edwards, L. B. Soros | Exploring open-ended gameplay features with Micro RollerCoaster Tycoon | 8 pages, 10 figures, submitted to Foundations of Digital Games
Conference 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces MicroRCT, a novel open source simulator inspired by the
theme park sandbox game RollerCoaster Tycoon. The goal in MicroRCT is to place
rides and shops in an amusement park to maximize profit earned from park
guests. Thus, the challenges for game AI include both selecting high-earning
attractions and placing them in locations that are convenient to guests. In
this paper, the MAP-Elites algorithm is used to generate a diversity of park
layouts, exploring two theoretical questions about evolutionary algorithms and
game design: 1) Is there a benefit to starting from a minimal starting point
for evolution and complexifying incrementally? and 2) What are the effects of
resource limitations on creativity and optimization? Results indicate that
building from scratch with no costs results in the widest diversity of
high-performing designs.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 13:19:17 GMT"
}
] | 1,620,691,200,000 | [
[
"Green",
"Michael Cerny",
""
],
[
"Yen",
"Victoria",
""
],
[
"Earle",
"Sam",
""
],
[
"Rajesh",
"Dipika",
""
],
[
"Edwards",
"Maria",
""
],
[
"Soros",
"L. B.",
""
]
] |
2105.04595 | Md Solimul Chowdhury | Md Solimul Chowdhury, Martin M\"uller, Jia You | A Deep Dive into Conflict Generating Decisions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Boolean Satisfiability (SAT) is a well-known NP-complete problem. Despite
this theoretical hardness, SAT solvers based on Conflict Driven Clause Learning
(CDCL) can solve large SAT instances from many important domains. CDCL learns
clauses from conflicts, a technique that allows a solver to prune its search
space. The selection heuristics in CDCL prioritize variables that are involved
in recent conflicts. While only a fraction of decisions generate any conflicts,
many generate multiple conflicts.
In this paper, we study conflict-generating decisions in CDCL in detail. We
investigate the impact of single conflict (sc) decisions, which generate only
one conflict, and multi-conflict (mc) decisions which generate two or more. We
empirically characterize these two types of decisions based on the quality of
the learned clauses produced by each type of decision. We also show an
important connection between consecutive clauses learned within the same mc
decision, where one learned clause triggers the learning of the next one
forming a chain of clauses. This leads to the consideration of similarity
between conflicts, for which we formulate the notion of conflicts proximity as a
similarity measure. We show that conflicts in mc decisions are more closely
related than consecutive conflicts generated from sc decisions. Finally, we
develop Common Reason Variable Reduction (CRVR) as a new decision strategy that
reduces the selection priority of some variables from the learned clauses of mc
decisions. Our empirical evaluation of CRVR implemented in three leading
solvers demonstrates performance gains in benchmarks from the main track of SAT
Competition-2020.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 18:17:52 GMT"
}
] | 1,620,777,600,000 | [
[
"Chowdhury",
"Md Solimul",
""
],
[
"Müller",
"Martin",
""
],
[
"You",
"Jia",
""
]
] |
2105.04620 | Steven Schockaert | Steven Schockaert, Yazm\'in Ib\'a\~nez-Garc\'ia, V\'ictor
Guti\'errez-Basulto | A Description Logic for Analogical Reasoning | Accepted for IJCAI 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontologies formalise how the concepts from a given domain are interrelated.
Despite their clear potential as a backbone for explainable AI, existing
ontologies tend to be highly incomplete, which acts as a significant barrier to
their more widespread adoption. To mitigate this issue, we present a mechanism
to infer plausible missing knowledge, which relies on reasoning by analogy. To
the best of our knowledge, this is the first paper that studies analogical
reasoning within the setting of description logic ontologies. After showing
that the standard formalisation of analogical proportion has important
limitations in this setting, we introduce an alternative semantics based on
bijective mappings between sets of features. We then analyse the properties of
analogies under the proposed semantics, and show among others how it enables
two plausible inference patterns: rule translation and rule extrapolation.
| [
{
"version": "v1",
"created": "Mon, 10 May 2021 19:06:07 GMT"
}
] | 1,620,777,600,000 | [
[
"Schockaert",
"Steven",
""
],
[
"Ibáñez-García",
"Yazmín",
""
],
[
"Gutiérrez-Basulto",
"Víctor",
""
]
] |
2105.05395 | Abhishek Ray | Marios Papamichalis, Abhishek Ray, Ilias Bilionis, Karthik Kannan,
Rajiv Krishnamurthy | Bayesian Model Averaging for Data Driven Decision Making when Causality
is Partially Known | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Probabilistic machine learning models are often insufficient to help with
decisions on interventions because those models find correlations - not causal
relationships. If only observational data is available and experimentation is
infeasible, the correct approach to study the impact of an intervention is to
invoke Pearl's causality framework. Even that framework assumes that the
underlying causal graph is known, which is seldom the case in practice. When
the causal structure is not known, one may use out-of-the-box algorithms to
find causal dependencies from observational data. However, there exists no
method that also accounts for the decision-maker's prior knowledge when
developing the causal structure. The objective of this paper is to
develop rational approaches for making decisions from observational data in the
presence of causal graph uncertainty and prior knowledge from the
decision-maker. We use ensemble methods like Bayesian Model Averaging (BMA) to
infer a set of causal graphs that can represent the data generation process. We
provide decisions by computing the expected value and risk of potential
interventions explicitly. We demonstrate our approach by applying it in
different example contexts.
| [
{
"version": "v1",
"created": "Wed, 12 May 2021 01:55:45 GMT"
}
] | 1,620,864,000,000 | [
[
"Papamichalis",
"Marios",
""
],
[
"Ray",
"Abhishek",
""
],
[
"Bilionis",
"Ilias",
""
],
[
"Kannan",
"Karthik",
""
],
[
"Krishnamurthy",
"Rajiv",
""
]
] |
2105.06268 | Michael Cohen | Michael K. Cohen, Badri Vellambi, Marcus Hutter | Intelligence and Unambitiousness Using Algorithmic Information Theory | 13 pages, 6 figures, 5-page appendix. arXiv admin note: text overlap
with arXiv:1905.12186 | Journal of Selected Areas in Information Theory 2 (2021) | 10.1109/JSAIT.2021.3073844 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Algorithmic Information Theory has inspired intractable constructions of
general intelligence (AGI), and undiscovered tractable approximations are
likely feasible. Reinforcement Learning (RL), the dominant paradigm by which an
agent might learn to solve arbitrary solvable problems, gives an agent a
dangerous incentive: to gain arbitrary "power" in order to intervene in the
provision of their own reward. We review the arguments that generally
intelligent algorithmic-information-theoretic reinforcement learners such as
Hutter's (2005) AIXI would seek arbitrary power, including over us. Then, using
an information-theoretic exploration schedule, and a setup inspired by causal
influence theory, we present a variant of AIXI which learns to not seek
arbitrary power; we call it "unambitious". We show that our agent learns to
accrue reward at least as well as a human mentor, while relying on that mentor
with diminishing probability. And given a formal assumption that we probe
empirically, we show that eventually, the agent's world-model incorporates the
following true fact: intervening in the "outside world" will have no effect on
reward acquisition; hence, it has no incentive to shape the outside world.
| [
{
"version": "v1",
"created": "Thu, 13 May 2021 13:10:28 GMT"
}
] | 1,620,950,400,000 | [
[
"Cohen",
"Michael K.",
""
],
[
"Vellambi",
"Badri",
""
],
[
"Hutter",
"Marcus",
""
]
] |
2105.06564 | Yingbo Li | Yingbo Li, Yucong Duan, Anamaria-Beatrice Spulber, Haoyang Che,
Zakaria Maamar, Zhao Li, Chen Yang, Yu lei | Physical Artificial Intelligence: The Concept Expansion of
Next-Generation Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence has been a growth catalyst to our society and is
considered across all industries as a fundamental technology. However, its
development has been limited to the signal processing domain, which relies on
data generated and collected from other sensors. In recent research, the
concepts of Digital Artificial Intelligence and Physical Artificial Intelligence
have emerged, and this can be considered a big step in the theoretical
development of Artificial Intelligence. In this paper we explore the concept of
Physical Artificial Intelligence and propose two subdomains: Integrated Physical
Artificial Intelligence and Distributed Physical Artificial Intelligence. The
paper will also examine the trend and governance of Physical Artificial
Intelligence.
| [
{
"version": "v1",
"created": "Thu, 13 May 2021 21:46:46 GMT"
},
{
"version": "v2",
"created": "Mon, 17 May 2021 00:38:03 GMT"
}
] | 1,621,296,000,000 | [
[
"Li",
"Yingbo",
""
],
[
"Duan",
"Yucong",
""
],
[
"Spulber",
"Anamaria-Beatrice",
""
],
[
"Che",
"Haoyang",
""
],
[
"Maamar",
"Zakaria",
""
],
[
"Li",
"Zhao",
""
],
[
"Yang",
"Chen",
""
],
[
"lei",
"Yu",
""
]
] |
2105.06706 | Paola Ardon Miss | Paola Ard\'on, \`Eric Pairet, Katrin S. Lohan, Subramanian
Ramamoorthy, Ronald P. A. Petrick | Building Affordance Relations for Robotic Agents - A Review | Accepted for IJCAI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Affordances describe the possibilities for an agent to perform actions with
an object. While the significance of the affordance concept has been previously
studied from varied perspectives, such as psychology and cognitive science,
these approaches are not always sufficient to enable direct transfer, in the
sense of implementations, to artificial intelligence (AI)-based systems and
robotics. However, many efforts have been made to pragmatically employ the
concept of affordances, as it represents great potential for AI agents to
effectively bridge perception to action. In this survey, we review and find
common ground amongst different strategies that use the concept of affordances
within robotic tasks, and build on these methods to provide guidance for
including affordances as a mechanism to improve autonomy. To this end, we
outline common design choices for building representations of affordance
relations, and their implications on the generalisation capabilities of an
agent when facing previously unseen scenarios. Finally, we identify and discuss
a range of interesting research directions involving affordances that have the
potential to improve the capabilities of an AI agent.
| [
{
"version": "v1",
"created": "Fri, 14 May 2021 08:35:18 GMT"
}
] | 1,621,209,600,000 | [
[
"Ardón",
"Paola",
""
],
[
"Pairet",
"Èric",
""
],
[
"Lohan",
"Katrin S.",
""
],
[
"Ramamoorthy",
"Subramanian",
""
],
[
"Petrick",
"Ronald P. A.",
""
]
] |
2105.06948 | Mark Ho | Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan
D. Cohen, Thomas L. Griffiths | People construct simplified mental representations to plan | 56 pages, 5 main figures, 10 extended data figures, supplementary
information is included in ancillary files | Nature, 606(7912), 129-136 (2022) | 10.1038/s41586-022-04743-9 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most striking features of human cognition is the capacity to plan.
Two aspects of human planning stand out: its efficiency and flexibility.
Efficiency is especially impressive because plans must often be made in complex
environments, and yet people successfully plan solutions to myriad everyday
problems despite having limited cognitive resources. Standard accounts in
psychology, economics, and artificial intelligence have suggested human
planning succeeds because people have a complete representation of a task and
then use heuristics to plan future actions in that representation. However,
this approach generally assumes that task representations are fixed. Here, we
propose that task representations can be controlled and that such control
provides opportunities to quickly simplify problems and more easily reason
about them. We propose a computational account of this simplification process
and, in a series of pre-registered behavioral experiments, show that it is
subject to online cognitive control and that people optimally balance the
complexity of a task representation and its utility for planning and acting.
These results demonstrate how strategically perceiving and conceiving problems
facilitates the effective use of limited cognitive resources.
| [
{
"version": "v1",
"created": "Fri, 14 May 2021 16:39:31 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2022 21:08:15 GMT"
}
] | 1,669,680,000,000 | [
[
"Ho",
"Mark K.",
""
],
[
"Abel",
"David",
""
],
[
"Correa",
"Carlos G.",
""
],
[
"Littman",
"Michael L.",
""
],
[
"Cohen",
"Jonathan D.",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] |
2105.07224 | Mohammad Arif Ul Alam | Vaishali Mahipal and Mohammad Arif Ul Alam | Estimating Heterogeneous Causal Effect of Polysubstance Usage on Drug
Overdose from Large-Scale Electronic Health Record | Accepted in 44th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (IEEE EMBC). arXiv admin note:
text overlap with arXiv:2010.14774, arXiv:1905.03297 by other authors | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug overdose has become a public health crisis in the United States with
devastating consequences. However, most drug overdose incidents are the
consequence of repetitive polysubstance usage over a defined period of time,
which can happen either through the intentional usage of a required drug with
other drugs or by accident. Thus, predicting the effects of polysubstance usage
is extremely important for clinicians to decide which combination of drugs
should be prescribed. Recent advancements in structural causal models can
provide ample insights into causal effects from observational data via
identifiable causal directed graphs. In this paper, we propose a system to
estimate heterogeneous concurrent drug usage effects on overdose estimation,
that consists of efficient co-variate selection, sub-group selection and
heterogeneous causal effect estimation. We apply our framework to answer a
critical question, can concurrent usage of benzodiazepines and opioids have
heterogeneous causal effects on the opioid overdose epidemic? Experiments using
Truven MarketScan claims data collected from 2001 to 2013 have shown significant
promise of our proposed framework's efficacy.
| [
{
"version": "v1",
"created": "Sat, 15 May 2021 13:52:20 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 08:26:28 GMT"
}
] | 1,649,808,000,000 | [
[
"Mahipal",
"Vaishali",
""
],
[
"Alam",
"Mohammad Arif Ul",
""
]
] |
2105.07382 | Tianxiang Zhan | Tianxiang Zhan, Yuanpeng He, Hanwen Li, Fuyuan Xiao | Uncertainty Measurement of Basic Probability Assignment Integrity Based
on Approximate Entropy in Evidence Theory | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evidence theory, as an extension of probability, can better deal with
unknowns and inaccurate information. Uncertainty measurement plays a vital role
in both evidence theory and probability theory. Approximate Entropy (ApEn) is
proposed by Pincus to describe the irregularities of complex systems. The more
irregular the time series, the greater the approximate entropy. The ApEn of the
network represents the ability of a network to generate new nodes, or the
possibility of undiscovered nodes. Through the association of network
characteristics and basic probability assignment (BPA), a measure of the
uncertainty of BPA regarding completeness can be obtained. The main
contribution of the paper is to define the integrity of the basic probability
assignment; the approximate entropy of the BPA is then proposed to measure the
uncertainty of the integrity of the BPA. The proposed method is based on the
logical network structure to calculate the uncertainty of BPA in evidence
theory. The uncertainty based on the proposed method represents the uncertainty
of integrity of BPA and contributes to the identification of the credibility of
BPA.
| [
{
"version": "v1",
"created": "Sun, 16 May 2021 08:41:38 GMT"
},
{
"version": "v2",
"created": "Tue, 18 May 2021 01:01:59 GMT"
}
] | 1,621,382,400,000 | [
[
"Zhan",
"Tianxiang",
""
],
[
"He",
"Yuanpeng",
""
],
[
"Li",
"Hanwen",
""
],
[
"Xiao",
"Fuyuan",
""
]
] |
2105.07426 | Romi Banerjee | Tejas Gaikwad, Romi Banerjee | Curiosity-driven Intuitive Physics Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Biological infants are naturally curious and try to comprehend their physical
surroundings by interacting, in myriad multisensory ways, with different
objects - primarily macroscopic solid objects - around them. Through their
various interactions, they build hypotheses and predictions, and eventually
learn, infer and understand the nature of the physical characteristics and
behavior of these objects. Inspired thus, we propose a model for
curiosity-driven learning and inference for real-world AI agents. This model is
based on the arousal of curiosity, deriving from observations along
discontinuities in the fundamental macroscopic solid-body physics parameters,
i.e., shape constancy, spatial-temporal continuity, and object permanence. We
use the term body-budget to represent the perceived fundamental properties of
solid objects. The model aims to support the emulation of learning from scratch
followed by substantiation through experience, irrespective of domain, in
real-world AI agents.
| [
{
"version": "v1",
"created": "Sun, 16 May 2021 12:58:05 GMT"
}
] | 1,621,296,000,000 | [
[
"Gaikwad",
"Tejas",
""
],
[
"Banerjee",
"Romi",
""
]
] |
2105.07508 | Scott Cheng-Hsin Yang | Scott Cheng-Hsin Yang, Tomas Folke, and Patrick Shafto | Abstraction, Validation, and Generalization for Explainable Artificial
Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neural network architectures are achieving superhuman performance on an
expanding range of tasks. To effectively and safely deploy these systems, their
decision-making must be understandable to a wide range of stakeholders. Methods
to explain AI have been proposed to answer this challenge, but a lack of theory
impedes the development of systematic abstractions which are necessary for
cumulative knowledge gains. We propose Bayesian Teaching as a framework for
unifying explainable AI (XAI) by integrating machine learning and human
learning. Bayesian Teaching formalizes explanation as a communication act of an
explainer to shift the beliefs of an explainee. This formalization decomposes
any XAI method into four components: (1) the inference to be explained, (2) the
explanatory medium, (3) the explainee model, and (4) the explainer model. The
abstraction afforded by Bayesian Teaching to decompose any XAI method
elucidates the invariances among them. The decomposition of XAI systems enables
modular validation, as each of the first three components listed can be tested
semi-independently. This decomposition also promotes generalization through
recombination of components from different XAI systems, which facilitates the
generation of novel variants. These new variants need not be evaluated one by
one provided that each component has been validated, leading to an exponential
decrease in development time. Finally, by making the goal of explanation
explicit, Bayesian Teaching helps developers to assess how suitable an XAI
system is for its intended real-world use case. Thus, Bayesian Teaching
provides a theoretical framework that encourages systematic, scientific
investigation of XAI.
| [
{
"version": "v1",
"created": "Sun, 16 May 2021 20:40:23 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Oct 2021 18:32:56 GMT"
}
] | 1,634,169,600,000 | [
[
"Yang",
"Scott Cheng-Hsin",
""
],
[
"Folke",
"Tomas",
""
],
[
"Shafto",
"Patrick",
""
]
] |
2105.07691 | Anubhav Singh | Anubhav Singh, Nir Lipovetzky, Miquel Ramirez, Javier Segovia-Aguas | Approximate Novelty Search | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Width-based search algorithms seek plans by prioritizing states according to
a suitably defined measure of novelty, that maps states into a set of novelty
categories. Space and time complexity to evaluate state novelty is known to be
exponential in the cardinality of the set. We present novel methods to obtain
polynomial approximations of novelty and width-based search. First, we
approximate novelty computation via random sampling and Bloom filters, reducing
the runtime and memory footprint. Second, we approximate the best-first search
using an adaptive policy that decides whether to forgo the expansion of nodes
in the open list. These two techniques are integrated into existing width-based
algorithms, resulting in new planners that perform significantly better than
other state-of-the-art planners over benchmarks from the International Planning
Competitions.
| [
{
"version": "v1",
"created": "Mon, 17 May 2021 09:21:48 GMT"
}
] | 1,621,296,000,000 | [
[
"Singh",
"Anubhav",
""
],
[
"Lipovetzky",
"Nir",
""
],
[
"Ramirez",
"Miquel",
""
],
[
"Segovia-Aguas",
"Javier",
""
]
] |
2105.07889 | Jiayi Chen | Jiayi Chen, Aidong Zhang | HetMAML: Task-Heterogeneous Model-Agnostic Meta-Learning for Few-Shot
Learning Across Modalities | Accepted by CIKM 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing gradient-based meta-learning approaches to few-shot learning assume
that all tasks have the same input feature space. However, in real-world
scenarios, there are many cases in which the input structures of tasks can be
different, that is, different tasks may vary in the number of input modalities
or data types. Existing meta-learners cannot handle the heterogeneous task
distribution (HTD) as there is not only global meta-knowledge shared across
tasks but also type-specific knowledge that distinguishes each type of tasks.
To deal with task heterogeneity and promote fast within-task adaptions for each
type of tasks, in this paper, we propose HetMAML, a task-heterogeneous
model-agnostic meta-learning framework, which can capture both the
type-specific and globally shared knowledge and can achieve the balance between
knowledge customization and generalization. Specifically, we design a
multi-channel backbone module that encodes the input of each type of tasks into
the same length sequence of modality-specific embeddings. Then, we propose a
task-aware iterative feature aggregation network which can automatically take
into account the context of task-specific input structures and adaptively
project the heterogeneous input spaces to the same lower-dimensional embedding
space of concepts. Our experiments on six task-heterogeneous datasets
demonstrate that HetMAML successfully leverages type-specific and globally
shared meta-parameters for heterogeneous tasks and achieves fast within-task
adaptions for each type of tasks.
| [
{
"version": "v1",
"created": "Mon, 17 May 2021 14:22:58 GMT"
},
{
"version": "v2",
"created": "Fri, 28 May 2021 18:45:33 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Sep 2021 16:04:41 GMT"
}
] | 1,632,873,600,000 | [
[
"Chen",
"Jiayi",
""
],
[
"Zhang",
"Aidong",
""
]
] |
2105.07952 | Yuanpeng He | Yuanpeng He | MMGET: A Markov model for generalized evidence theory | 20 pages,24 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real life, lots of information merges from time to time. To appropriately
describe the actual situations, lots of theories have been proposed. Among
them, Dempster-Shafer evidence theory is a very useful tool in managing
uncertain information. To better adapt to complex situations of open world, a
generalized evidence theory is designed. However, everything occurs in sequence
and owns some underlying relationships with each other. In order to further
embody the details of information and better conform to situations of the real
world, a Markov model is introduced into the generalized evidence theory, which
helps extract the complete information volume from the evidence provided.
Besides, some numerical examples are offered to verify the correctness and
rationality of the
proposed method.
| [
{
"version": "v1",
"created": "Wed, 12 May 2021 12:41:57 GMT"
}
] | 1,621,296,000,000 | [
[
"He",
"Yuanpeng",
""
]
] |
2105.07996 | Fatema Hasan | Fatema Hasan, Kevin S. Xu, James R. Foulds, Shimei Pan | Learning User Embeddings from Temporal Social Media Data: A Survey | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User-generated data on social media contain rich information about who we
are, what we like and how we make decisions. In this paper, we survey
representative work on learning a concise latent user representation (a.k.a.
user embedding) that can capture the main characteristics of a social media
user. The learned user embeddings can later be used to support different
downstream user analysis tasks such as personality modeling, suicidal risk
assessment and purchase decision prediction. The temporal nature of
user-generated data on social media has largely been overlooked in much of the
existing user embedding literature. In this survey, we focus on research that
bridges the gap by incorporating temporal/sequential information in user
representation learning. We categorize relevant papers along several key
dimensions, identify limitations in the current work and suggest future
research directions.
| [
{
"version": "v1",
"created": "Mon, 17 May 2021 16:22:43 GMT"
}
] | 1,621,296,000,000 | [
[
"Hasan",
"Fatema",
""
],
[
"Xu",
"Kevin S.",
""
],
[
"Foulds",
"James R.",
""
],
[
"Pan",
"Shimei",
""
]
] |
2105.08244 | DiJia Su | Andy Su, Difei Su, John M. Mulvey, H. Vincent Poor | PoBRL: Optimizing Multi-Document Summarization by Blending Reinforcement
Learning Policies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel reinforcement learning based framework PoBRL for solving
multi-document summarization. PoBRL jointly optimizes over the following three
objectives necessary for a high-quality summary: importance, relevance, and
length. Our strategy decouples this multi-objective optimization into different
subproblems that can be solved individually by reinforcement learning.
Utilizing PoBRL, we then blend the learned policies together to produce a
summary that is a concise and complete representation of the original input.
Our empirical analysis shows state-of-the-art performance on several
multi-document datasets. Human evaluation also shows that our method produces
high-quality output.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 02:55:42 GMT"
}
] | 1,621,382,400,000 | [
[
"Su",
"Andy",
""
],
[
"Su",
"Difei",
""
],
[
"Mulvey",
"John M.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
2105.08313 | Junhao Hua | Junhao Hua, Ling Yan, Huan Xu, Cheng Yang | Markdowns in E-Commerce Fresh Retail: A Counterfactual Prediction and
Multi-Period Optimization Approach | 10 pages, 7 figures, accepted to KDD'21 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, by leveraging abundant observational transaction data, we
propose a novel data-driven and interpretable pricing approach for markdowns,
consisting of counterfactual prediction and multi-period price optimization.
Firstly, we build a semi-parametric structural model to learn individual price
elasticity and predict counterfactual demand. This semi-parametric model takes
advantage of both the predictability of nonparametric machine learning model
and the interpretability of economic model. Secondly, we propose a multi-period
dynamic pricing algorithm to maximize the overall profit of a perishable
product over its finite selling horizon. Different from the traditional
approaches that use deterministic demand, we model the uncertainty of
counterfactual demand since it inevitably has randomness in the prediction
process. Based on the stochastic model, we derive a sequential pricing strategy
by Markov decision process, and design a two-stage algorithm to solve it. The
proposed algorithm is very efficient. It reduces the time complexity from
exponential to polynomial. Experimental results show the advantages of our
pricing algorithm, and the proposed framework has been successfully deployed to
the well-known e-commerce fresh retail scenario - Freshippo.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 07:01:37 GMT"
},
{
"version": "v2",
"created": "Wed, 19 May 2021 11:48:10 GMT"
}
] | 1,621,468,800,000 | [
[
"Hua",
"Junhao",
""
],
[
"Yan",
"Ling",
""
],
[
"Xu",
"Huan",
""
],
[
"Yang",
"Cheng",
""
]
] |
2105.08326 | Maurice Funk | Maurice Funk, Jean Christoph Jung, Carsten Lutz | Actively Learning Concepts and Conjunctive Queries under ELr-Ontologies | 7+18 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of learning a concept or a query in the presence of an
ontology formulated in the description logic ELr, in Angluin's framework of
active learning that allows the learning algorithm to interactively query an
oracle (such as a domain expert). We show that the following can be learned in
polynomial time: (1) EL-concepts, (2) symmetry-free ELI-concepts, and (3)
conjunctive queries (CQs) that are chordal, symmetry-free, and of bounded
arity. In all cases, the learner can pose to the oracle membership queries
based on ABoxes and equivalence queries that ask whether a given concept/query
from the considered class is equivalent to the target. The restriction to
bounded arity in (3) can be removed when we admit unrestricted CQs in
equivalence queries. We also show that EL-concepts are not polynomial query
learnable in the presence of ELI-ontologies.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 07:45:37 GMT"
},
{
"version": "v2",
"created": "Wed, 19 May 2021 11:36:06 GMT"
}
] | 1,621,468,800,000 | [
[
"Funk",
"Maurice",
""
],
[
"Jung",
"Jean Christoph",
""
],
[
"Lutz",
"Carsten",
""
]
] |
2105.08398 | Oliver Niggemann | Kaja Balzereit and Oliver Niggemann | Reconfiguring Hybrid Systems Using SAT | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconfiguration aims at recovering a system from a fault by automatically
adapting the system configuration, such that the system goal can be reached
again. Classical approaches typically use a set of pre-defined faults for which
corresponding recovery actions are defined manually. This is not possible for
modern hybrid systems which are characterized by frequent changes. Instead,
AI-based approaches are needed which leverage a model of the non-faulty
system and which search for a set of reconfiguration operations which will
establish a valid behavior again.
This work presents a novel algorithm which solves three main challenges: (i)
Only a model of the non-faulty system is needed, i.e. the faulty behavior does
not need to be modeled. (ii) It discretizes and reduces the search space which
originally is too large -- mainly due to the high number of continuous system
variables and control signals. (iii) It uses a SAT solver for propositional
logic for two purposes: First, it defines the binary concept of validity.
Second, it implements the search itself -- sacrificing the optimal solution for
a quick identification of an arbitrary solution. It is shown that the approach
is able to reconfigure faults on simulated process engineering systems.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 09:50:47 GMT"
}
] | 1,621,382,400,000 | [
[
"Balzereit",
"Kaja",
""
],
[
"Niggemann",
"Oliver",
""
]
] |
2105.08440 | Shuxin Li | Shuxin Li, Youzhi Zhang, Xinrun Wang, Wanqi Xue, Bo An | CFR-MIX: Solving Imperfect Information Extensive-Form Games with
Combinatorial Action Space | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real-world scenarios, a team of agents coordinate with each other to
compete against an opponent. The challenge of solving this type of game is that
the team's joint action space grows exponentially with the number of agents,
which results in the inefficiency of the existing algorithms, e.g.,
Counterfactual Regret Minimization (CFR). To address this problem, we propose a
new framework of CFR: CFR-MIX. Firstly, we propose a new strategy
representation that represents a joint action strategy using individual
strategies of all agents and a consistency relationship to maintain the
cooperation between agents. To compute the equilibrium with individual
strategies under the CFR framework, we transform the consistency relationship
between strategies to the consistency relationship between the cumulative
regret values. Furthermore, we propose a novel decomposition method over
cumulative regret values to guarantee the consistency relationship between the
cumulative regret values. Finally, we introduce our new algorithm CFR-MIX which
employs a mixing layer to estimate cumulative regret values of joint actions as
a non-linear combination of cumulative regret values of individual actions.
Experimental results show that CFR-MIX outperforms existing algorithms on
various games significantly.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 11:19:37 GMT"
}
] | 1,621,382,400,000 | [
[
"Li",
"Shuxin",
""
],
[
"Zhang",
"Youzhi",
""
],
[
"Wang",
"Xinrun",
""
],
[
"Xue",
"Wanqi",
""
],
[
"An",
"Bo",
""
]
] |
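The CFR-MIX abstract of 2105.08440 above builds on counterfactual regret minimization. The sketch below shows only the basic regret-matching update that CFR-style methods apply per information set, run on rock-paper-scissors self-play; it is a generic illustration under invented settings, not the CFR-MIX algorithm or its mixing layer.

# Minimal sketch of regret matching, the per-infoset update that CFR-style
# methods (including team variants such as CFR-MIX) build on. The game here is
# plain rock-paper-scissors self-play, purely for illustration.
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's utility

def strategy_from_regrets(regrets):
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / ACTIONS] * ACTIONS

def sample(strategy):
    return random.choices(range(ACTIONS), weights=strategy, k=1)[0]

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS

for _ in range(20000):
    strategy = strategy_from_regrets(regrets)
    strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
    my_action = sample(strategy)
    opp_action = sample(strategy)           # self-play against the same strategy
    # counterfactual regret: what I would have gained by switching actions
    for a in range(ACTIONS):
        regrets[a] += PAYOFF[a][opp_action] - PAYOFF[my_action][opp_action]

avg = [s / sum(strategy_sum) for s in strategy_sum]
print("average strategy (should approach uniform):", [round(x, 3) for x in avg])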
2105.08476 | Quan Wang | Quan Wang, Haifeng Wang, Yajuan Lyu, Yong Zhu | Link Prediction on N-ary Relational Facts: A Graph-based Approach | Accepted to Findings of ACL 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Link prediction on knowledge graphs (KGs) is a key research topic. Previous
work mainly focused on binary relations, paying less attention to higher-arity
relations although they are ubiquitous in real-world KGs. This paper considers
link prediction upon n-ary relational facts and proposes a graph-based approach
to this task. The key to our approach is to represent the n-ary structure of a
fact as a small heterogeneous graph, and model this graph with edge-biased
fully-connected attention. The fully-connected attention captures universal
inter-vertex interactions, while edge-aware attentive biases
particularly encode the graph structure and its heterogeneity. In this fashion,
our approach fully models global and local dependencies in each n-ary fact, and
hence can more effectively capture associations therein. Extensive evaluation
verifies the effectiveness and superiority of our approach. It performs
substantially and consistently better than current state-of-the-art across a
variety of n-ary relational benchmarks. Our code is publicly available.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 12:40:35 GMT"
}
] | 1,621,382,400,000 | [
[
"Wang",
"Quan",
""
],
[
"Wang",
"Haifeng",
""
],
[
"Lyu",
"Yajuan",
""
],
[
"Zhu",
"Yong",
""
]
] |
2105.08541 | Theresa Eimer | Theresa Eimer, Andr\'e Biedenkapp, Maximilian Reimer, Steven
Adriaensen, Frank Hutter, Marius Lindauer | DACBench: A Benchmark Library for Dynamic Algorithm Configuration | Accepted at IJCAI 2021 | 30th International Joint Conference on Artificial Intelligence
(IJCAI 2021) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Algorithm Configuration (DAC) aims to dynamically control a target
algorithm's hyperparameters in order to improve its performance. Several
theoretical and empirical results have demonstrated the benefits of dynamically
controlling hyperparameters in domains like evolutionary computation, AI
Planning or deep learning. Replicating these results, as well as studying new
methods for DAC, however, is difficult since existing benchmarks are often
specialized and do not share compatible interfaces. To facilitate
benchmarking and thus research on DAC, we propose DACBench, a benchmark library
that seeks to collect and standardize existing DAC benchmarks from different AI
domains, as well as provide a template for new ones. For the design of
DACBench, we focused on important desiderata, such as (i) flexibility, (ii)
reproducibility, (iii) extensibility and (iv) automatic documentation and
visualization. To show the potential, broad applicability and challenges of
DAC, we explore how a set of six initial benchmarks compare in several
dimensions of difficulty.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 14:16:51 GMT"
}
] | 1,638,835,200,000 | [
[
"Eimer",
"Theresa",
""
],
[
"Biedenkapp",
"André",
""
],
[
"Reimer",
"Maximilian",
""
],
[
"Adriaensen",
"Steven",
""
],
[
"Hutter",
"Frank",
""
],
[
"Lindauer",
"Marius",
""
]
] |
2105.08683 | Luca Costabello | Sumit Pai, Luca Costabello | Learning Embeddings from Knowledge Graphs With Numeric Edge Attributes | IJCAI 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Numeric values associated to edges of a knowledge graph have been used to
represent uncertainty, edge importance, and even out-of-band knowledge in a
growing number of scenarios, ranging from genetic data to social networks.
Nevertheless, traditional knowledge graph embedding models are not designed to
capture such information, to the detriment of predictive power. We propose a
novel method that injects numeric edge attributes into the scoring layer of a
traditional knowledge graph embedding architecture. Experiments with publicly
available numeric-enriched knowledge graphs show that our method outperforms
traditional numeric-unaware baselines as well as the recent UKGE model.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 17:15:01 GMT"
}
] | 1,621,382,400,000 | [
[
"Pai",
"Sumit",
""
],
[
"Costabello",
"Luca",
""
]
] |
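The abstract of 2105.08683 above describes injecting numeric edge attributes into the scoring layer of a knowledge graph embedding model. The sketch below is a generic, hedged illustration of that idea using a TransE-style score scaled by an edge value; the entities, relation, and scaling rule are invented and do not reproduce the paper's exact formulation.

# Minimal sketch of modulating a translation-style triple score by a numeric
# edge attribute (e.g. a confidence value in [0, 1]). Illustration only, not
# the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {"geneA": rng.normal(size=dim), "diseaseB": rng.normal(size=dim)}
relations = {"associated_with": rng.normal(size=dim)}

def transe_score(h, r, t):
    # higher is better: negative L2 distance of h + r from t
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

def numeric_aware_score(h, r, t, edge_value, alpha=1.0):
    # blend the edge value with a neutral weight of 0.5 so that
    # high-confidence edges contribute more strongly to the score
    weight = alpha * edge_value + (1 - alpha) * 0.5
    return weight * transe_score(h, r, t)

print(round(numeric_aware_score("geneA", "associated_with", "diseaseB", edge_value=0.9), 3))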
2105.08692 | Bo Liu | Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu and Animashree
Anandkumar | Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team
Composition | International Conference on Machine Learning | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In real-world multi-agent systems, agents with different capabilities may
join or leave without altering the team's overarching goals. Coordinating teams
with such dynamic composition is challenging: the optimal team strategy varies
with the composition. We propose COPA, a coach-player framework to tackle this
problem. We assume the coach has a global view of the environment and
coordinates the players, who only have partial views, by distributing
individual strategies. Specifically, we 1) adopt the attention mechanism for
both the coach and the players; 2) propose a variational objective to
regularize learning; and 3) design an adaptive communication method to let the
coach decide when to communicate with the players. We validate our methods on a
resource collection task, a rescue game, and the StarCraft micromanagement
tasks. We demonstrate zero-shot generalization to new team compositions. Our
method achieves comparable or better performance than the setting where all
players have a full view of the environment. Moreover, we see that the
performance remains high even when the coach communicates as little as 13% of
the time using the adaptive communication strategy.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 17:27:37 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jun 2021 16:03:59 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Sep 2021 20:17:06 GMT"
}
] | 1,630,972,800,000 | [
[
"Liu",
"Bo",
""
],
[
"Liu",
"Qiang",
""
],
[
"Stone",
"Peter",
""
],
[
"Garg",
"Animesh",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Anandkumar",
"Animashree",
""
]
] |
2105.08781 | Yuanpeng He | Yuanpeng He | Fortified quantum mass function utilizing ordinal pictorial check based
on time interval analysis and expertise | 33 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information management has entered a completely new era, the quantum era. However,
there is a lack of sufficient theory for extracting truly useful quantum
information and transforming it into a form that is intuitive and straightforward
for decision making. Therefore, based on the quantum model of mass function, a
fortified dual check system is proposed to ensure that the judgment generated
retains sufficiently high accuracy. Moreover, considering situations in real
life, everything takes place in an observable time interval, so the concept
of time interval is introduced into the frame of the check system. The model
proposed in this paper is very helpful for handling uncertain quantum information.
Some applications are provided to verify the rationality and correctness of
the proposed method.
| [
{
"version": "v1",
"created": "Fri, 14 May 2021 05:30:16 GMT"
}
] | 1,621,468,800,000 | [
[
"He",
"Yuanpeng",
""
]
] |
2105.08867 | Xiwei Xu | Liming Zhu, Xiwei Xu, Qinghua Lu, Guido Governatori, Jon Whittle | AI and Ethics -- Operationalising Responsible AI | null | Humanity Driven AI: Productivity, Wellbeing, Sustainability and
Partnership, 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the last few years, AI has continued to demonstrate its positive impact on
society, although sometimes with ethically questionable consequences. Building and
maintaining public trust in AI has been identified as the key to successful and
sustainable innovation. This chapter discusses the challenges related to
operationalizing ethical AI principles and presents an integrated view that
covers high-level ethical AI principles, the general notion of
trust/trustworthiness, and product/process support in the context of
responsible AI, which helps improve both trust and trustworthiness of AI for a
wider set of stakeholders.
| [
{
"version": "v1",
"created": "Wed, 19 May 2021 00:55:40 GMT"
}
] | 1,621,900,800,000 | [
[
"Zhu",
"Liming",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Lu",
"Qinghua",
""
],
[
"Governatori",
"Guido",
""
],
[
"Whittle",
"Jon",
""
]
] |
2105.08877 | Erick Delage | Abderrahim Fathan and Erick Delage | Deep Reinforcement Learning for Optimal Stopping with Application in
Financial Engineering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal stopping is the problem of deciding the right time at which to take a
particular action in a stochastic system, in order to maximize an expected
reward. It has many applications in areas such as finance, healthcare, and
statistics. In this paper, we employ deep Reinforcement Learning (RL) to learn
optimal stopping policies in two financial engineering applications: namely
option pricing, and optimal option exercise. We present for the first time a
comprehensive empirical evaluation of the quality of optimal stopping policies
identified by three state-of-the-art deep RL algorithms: double deep Q-learning
(DDQN), categorical distributional RL (C51), and Implicit Quantile Networks
(IQN). In the case of option pricing, our findings indicate that in a
theoretical Black-Scholes environment, IQN successfully identifies nearly
optimal prices. On the other hand, it is slightly outperformed by C51 when
confronted with real stock data movements in a put option exercise problem that
involves assets from the S&P500 index. More importantly, the C51 algorithm is
able to identify an optimal stopping policy that achieves 8% more out-of-sample
returns than the best of four natural benchmark policies. We conclude with a
discussion of our findings which should pave the way for relevant future
research.
| [
{
"version": "v1",
"created": "Wed, 19 May 2021 01:52:04 GMT"
}
] | 1,621,468,800,000 | [
[
"Fathan",
"Abderrahim",
""
],
[
"Delage",
"Erick",
""
]
] |
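The abstract of 2105.08877 above studies optimal stopping for option exercise with deep RL. The sketch below illustrates only the underlying stop-or-continue structure via a classic binomial-tree benchmark for an American put; it is not the paper's DDQN/C51/IQN agents, and all parameters are illustrative.

# Minimal sketch of the optimal stopping structure behind American option
# exercise: at each node, compare the immediate exercise payoff with the
# continuation value. Classic Cox-Ross-Rubinstein binomial tree, illustrative
# parameters only.
import math

S0, K = 100.0, 100.0         # spot and strike
r, sigma, T, steps = 0.05, 0.2, 1.0, 200

dt = T / steps
u = math.exp(sigma * math.sqrt(dt))
d = 1.0 / u
disc = math.exp(-r * dt)
p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability

# terminal payoffs of an American put
values = [max(K - S0 * (u ** j) * (d ** (steps - j)), 0.0) for j in range(steps + 1)]

for i in range(steps - 1, -1, -1):
    for j in range(i + 1):
        spot = S0 * (u ** j) * (d ** (i - j))
        cont = disc * (p * values[j + 1] + (1 - p) * values[j])
        exercise = max(K - spot, 0.0)
        values[j] = max(cont, exercise)   # stop (exercise) or continue

print("American put value (binomial benchmark):", round(values[0], 4))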
2105.09484 | Hoang D. Nguyen | Minh-Duc Hoang, Linh Le, Anh-Tuan Nguyen, Trang Le and Hoang D. Nguyen | Federated Artificial Intelligence for Unified Credit Assessment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid adoption of Internet technologies, digital footprints have
become ubiquitous and versatile, with the potential to revolutionise the financial industry through
digital transformation. This paper takes the initiative to investigate a new
paradigm of unified credit assessment with the use of federated artificial
intelligence. We conceptualised a digital human representation which consists of
social, contextual, financial and technological dimensions to assess the
commercial creditworthiness and social reputation of both banked and unbanked
individuals. A federated artificial intelligence platform is proposed with a
comprehensive set of system design for efficient and effective credit scoring.
The study considerably contributes to the cumulative development of financial
intelligence and social computing. It also provides a number of implications
for academic bodies, practitioners, and developers of financial technologies.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 03:05:42 GMT"
}
] | 1,621,555,200,000 | [
[
"Hoang",
"Minh-Duc",
""
],
[
"Le",
"Linh",
""
],
[
"Nguyen",
"Anh-Tuan",
""
],
[
"Le",
"Trang",
""
],
[
"Nguyen",
"Hoang D.",
""
]
] |
2105.09489 | Hoang D. Nguyen | Ethan Lim Ding Feng, Zhi-Wei Neo, Aaron William De Silva, Kellie Sim,
Hong-Ray Tan, Thi-Thanh Nguyen, Karen Wei Ling Koh, Wenru Wang and Hoang D.
Nguyen | Social Behaviour Understanding using Deep Neural Networks: Development
of Social Intelligence Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development in artificial intelligence, social computing has
evolved beyond social informatics toward the birth of social intelligence
systems. This paper, therefore, takes the initiative to propose a social behaviour
understanding framework with the use of deep neural networks for social and
behavioural analysis. The integration of information fusion, person and object
detection, social signal understanding, behaviour understanding, and context
understanding plays a harmonious role in eliciting social behaviours. Three
systems, including depression detection, activity recognition and cognitive
impairment screening, are developed to demonstrate the importance of
social intelligence. The study considerably contributes to the cumulative
development of social computing and health informatics. It also provides a
number of implications for academic bodies, healthcare practitioners, and
developers of socially intelligent agents.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 03:19:55 GMT"
}
] | 1,621,555,200,000 | [
[
"Feng",
"Ethan Lim Ding",
""
],
[
"Neo",
"Zhi-Wei",
""
],
[
"De Silva",
"Aaron William",
""
],
[
"Sim",
"Kellie",
""
],
[
"Tan",
"Hong-Ray",
""
],
[
"Nguyen",
"Thi-Thanh",
""
],
[
"Koh",
"Karen Wei Ling",
""
],
[
"Wang",
"Wenru",
""
],
[
"Nguyen",
"Hoang D.",
""
]
] |
2105.09560 | Hanyang Liu | Guanjie Zheng, Hanyang Liu, Kai Xu, Zhenhui Li | Objective-aware Traffic Simulation via Inverse Reinforcement Learning | Accepted for publication by IJCAI 2021 | IJCAI 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Traffic simulators act as an essential component in the operating and
planning of transportation systems. Conventional traffic simulators usually
employ a calibrated physical car-following model to describe vehicles'
behaviors and their interactions with traffic environment. However, there is no
universal physical model that can accurately predict the pattern of a vehicle's
behaviors in different situations. A fixed physical model tends to be less
effective in a complicated environment given the non-stationary nature of
traffic dynamics. In this paper, we formulate traffic simulation as an inverse
reinforcement learning problem, and propose a parameter sharing adversarial
inverse reinforcement learning model for dynamics-robust simulation learning.
Our proposed model is able to imitate a vehicle's trajectories in the real
world while simultaneously recovering the reward function that reveals the
vehicle's true objective which is invariant to different dynamics. Extensive
experiments on synthetic and real-world datasets show the superior performance
of our approach compared to state-of-the-art methods and its robustness to
variant dynamics of traffic.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 07:26:34 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Aug 2021 19:20:25 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jul 2022 21:57:48 GMT"
}
] | 1,657,584,000,000 | [
[
"Zheng",
"Guanjie",
""
],
[
"Liu",
"Hanyang",
""
],
[
"Xu",
"Kai",
""
],
[
"Li",
"Zhenhui",
""
]
] |
2105.09574 | Dawid Wisniewski | Dawid Wi\'sniewski and J\k{e}drzej Potoniec and Agnieszka
{\L}awrynowicz | BigCQ: A large-scale synthetic dataset of competency question patterns
formalized into SPARQL-OWL query templates | 16 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Competency Questions (CQs) are used in many ontology engineering
methodologies to collect requirements and track the completeness and
correctness of an ontology being constructed. Although they are frequently
suggested by ontology engineering methodologies, the publicly available
datasets of CQs and their formalizations in ontology query languages are very
scarce. Since the first efforts to automate processes utilizing CQs are being made,
it is of high importance to provide large and diverse datasets to fuel these
solutions. In this paper, we present BigCQ, the biggest dataset of CQ templates
with their formalizations into SPARQL-OWL query templates. BigCQ is created
automatically from a dataset of frequently used axiom shapes. These pairs of CQ
templates and query templates can be then materialized as actual CQs and
SPARQL-OWL queries if filled with resource labels and IRIs from a given
ontology. We describe the dataset in detail, provide a description of the
process leading to the creation of the dataset and analyze how well the dataset
covers real-world examples. We also publish the dataset as well as scripts
transforming axiom shapes into pairs of CQ patterns and SPARQL-OWL templates,
to enable engineers to adapt the process to their particular needs.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 07:59:59 GMT"
}
] | 1,621,555,200,000 | [
[
"Wiśniewski",
"Dawid",
""
],
[
"Potoniec",
"Jędrzej",
""
],
[
"Ławrynowicz",
"Agnieszka",
""
]
] |
2105.09740 | Siyuan Liu | Orcun Yalcin, Xiuyi Fan, Siyuan Liu | Evaluating the Correctness of Explainable AI Algorithms for
Classification | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable AI has attracted much research attention in recent years with
feature attribution algorithms, which compute "feature importance" in
predictions, becoming increasingly popular. However, there is little analysis
of the validity of these algorithms as there is no "ground truth" in the
existing datasets to validate their correctness. In this work, we develop a
method to quantitatively evaluate the correctness of XAI algorithms by creating
datasets with known explanation ground truth. To this end, we focus on the
binary classification problems. String datasets are constructed using a formal
language derived from a grammar. A string is positive if and only if a certain
property is fulfilled. Symbols serving as explanation ground truth in a
positive string are part of an explanation if and only if they contribute to
fulfilling the property. Two popular feature attribution explainers, Local
Interpretable Model-agnostic Explanations (LIME) and SHapley Additive
exPlanations (SHAP), are used in our experiments. We show that: (1)
classification accuracy is positively correlated with explanation accuracy; (2)
SHAP provides more accurate explanations than LIME; (3) explanation accuracy is
negatively correlated with dataset complexity.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 13:36:41 GMT"
}
] | 1,621,555,200,000 | [
[
"Yalcin",
"Orcun",
""
],
[
"Fan",
"Xiuyi",
""
],
[
"Liu",
"Siyuan",
""
]
] |
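The abstract of 2105.09740 above builds string datasets whose explanation ground truth is known by construction. The sketch below illustrates that idea with an invented property ("contains at least two 'a' symbols"), which is not one of the paper's grammars; it only shows how labels and ground-truth explanation positions can be generated together.

# Minimal sketch of a string dataset with known explanation ground truth:
# a string is positive iff it satisfies a fixed property, and the symbols that
# make the property hold are the ground-truth explanation. Illustrative
# property, not taken from the paper.
import random

ALPHABET = "abc"

def label_and_ground_truth(s):
    a_positions = [i for i, ch in enumerate(s) if ch == "a"]
    positive = len(a_positions) >= 2
    # for positive strings, the 'a' symbols are the explanation ground truth
    return positive, (a_positions if positive else [])

def make_dataset(n, length=8, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        s = "".join(rng.choice(ALPHABET) for _ in range(length))
        y, gt = label_and_ground_truth(s)
        data.append({"string": s, "label": int(y), "explanation_positions": gt})
    return data

for row in make_dataset(5):
    print(row)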
2105.09914 | Bo-Hsiang (Andy) Tseng | Bo-Hsiang Tseng, Shruti Bhargava, Jiarui Lu, Joel Ruben Antony Moniz,
Dhivya Piraviperumal, Lin Li, Hong Yu | CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues | Accepted as a long paper in the main conference by NAACL 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anaphora and ellipses are two common phenomena in dialogues. Without
resolving referring expressions and information omission, dialogue systems may
fail to generate consistent and coherent responses. Traditionally, anaphora is
resolved by coreference resolution and ellipses by query rewrite. In this work,
we propose a novel joint learning framework of modeling coreference resolution
and query rewriting for complex, multi-turn dialogue understanding. Given an
ongoing dialogue between a user and a dialogue assistant, for the user query,
our joint learning model first predicts coreference links between the query and
the dialogue context, and then generates a self-contained rewritten user query.
To evaluate our model, we annotate a dialogue based coreference resolution
dataset, MuDoCo, with rewritten queries. Results show that the performance of
query rewrite can be substantially boosted (+2.3% F1) with the aid of
coreference modeling. Furthermore, our joint model outperforms the
state-of-the-art coreference resolution model (+2% F1) on this dataset.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 17:17:26 GMT"
}
] | 1,621,555,200,000 | [
[
"Tseng",
"Bo-Hsiang",
""
],
[
"Bhargava",
"Shruti",
""
],
[
"Lu",
"Jiarui",
""
],
[
"Moniz",
"Joel Ruben Antony",
""
],
[
"Piraviperumal",
"Dhivya",
""
],
[
"Li",
"Lin",
""
],
[
"Yu",
"Hong",
""
]
] |
2105.10058 | Kanvaly Fadiga | Kanvaly Fadiga, Etienne Houz\'e, Ada Diaconescu and Jean-Louis
Dessalles | To do or not to do: finding causal relations in smart homes | 10 pages, 13 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in Cognitive Science suggests that humans understand and represent
knowledge of the world through causal relationships. In addition to
observations, they can rely on experimenting and counterfactual reasoning --
i.e. referring to an alternative course of events -- to identify causal
relations and explain atypical situations. Different instances of control
systems, such as smart homes, would benefit from having a similar causal model,
as it would help the user understand the logic of the system and better react
when needed. However, while data-driven methods achieve high levels of
correlation detection, they mainly fall short of finding causal relations,
notably because they are limited to observations only. In particular, they struggle to identify
the cause from the effect when detecting a correlation between two variables.
This paper introduces a new way to learn causal models from a mixture of
experiments on the environment and observational data. The core of our method
is the use of selected interventions; in particular, unlike other approaches,
our learning takes into account the variables on which it is impossible to
intervene. The causal model we obtain is then used to generate Causal Bayesian
Networks, which can be later used to perform diagnostic and predictive
inference. We use our method on a smart home simulation, a use case where
knowing causal relations paves the way towards explainable systems. Our
algorithm succeeds in generating a Causal Bayesian Network close to the
simulation's ground truth causal interactions, showing encouraging prospects
for application in real-life systems.
| [
{
"version": "v1",
"created": "Thu, 20 May 2021 22:36:04 GMT"
}
] | 1,621,814,400,000 | [
[
"Fadiga",
"Kanvaly",
""
],
[
"Houzé",
"Etienne",
""
],
[
"Diaconescu",
"Ada",
""
],
[
"Dessalles",
"Jean-Louis",
""
]
] |
2105.10095 | Rui Wang | Rui Wang, Deyu Zhou, Yuxuan Xiong, Haiping Huang | Variational Gaussian Topic Model with Invertible Neural Projections | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural topic models have triggered a surge of interest in extracting topics
from text automatically since they avoid the sophisticated derivations in
conventional topic models. However, few neural topic models incorporate the
word relatedness information captured in word embedding into the modeling
process. To address this issue, we propose a novel topic modeling approach,
called Variational Gaussian Topic Model (VaGTM). Based on the variational
auto-encoder, the proposed VaGTM models each topic with a multivariate Gaussian
in the decoder to incorporate word relatedness. Furthermore, to address the
limitation that pre-trained word embeddings of topic-associated words do not
follow a multivariate Gaussian, Variational Gaussian Topic Model with
Invertible neural Projections (VaGTM-IP) is extended from VaGTM. Three
benchmark text corpora are used in experiments to verify the effectiveness of
VaGTM and VaGTM-IP. The experimental results show that VaGTM and VaGTM-IP
outperform several competitive baselines and obtain more coherent topics.
| [
{
"version": "v1",
"created": "Fri, 21 May 2021 02:23:02 GMT"
}
] | 1,621,814,400,000 | [
[
"Wang",
"Rui",
""
],
[
"Zhou",
"Deyu",
""
],
[
"Xiong",
"Yuxuan",
""
],
[
"Huang",
"Haiping",
""
]
] |
2105.10176 | Josef Bajada | Josef Bajada, Maria Fox and Derek Long | Efficient Temporal Piecewise-Linear Numeric Planning with Lazy
Consistency Checking | Accepted version to be published in IEEE Transactions on Artificial
Intelligence | null | 10.1109/TAI.2022.3146797 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal planning often involves numeric effects that are directly
proportional to their action's duration. These include continuous effects,
where a numeric variable is subjected to a rate of change while the action is
being executed, and discrete duration-dependent effects, where the variable is
updated instantaneously but the magnitude of such change is computed from the
action's duration. When these effects are linear, state-of-the-art temporal
planners often make use of Linear Programming to ensure that these numeric
updates are consistent with the chosen start times and durations of the plan's
actions. This is typically done for each evaluated state as part of the search
process. This exhaustive approach is not scalable to solve real-world problems
that require long plans, because the linear program's size becomes larger and
slower to solve. In this work we propose techniques that minimise this overhead
by computing these checks more selectively and formulating linear programs that
have a smaller footprint. The effectiveness of these techniques is demonstrated
on domains that use a mix of discrete and continuous effects, which is typical
of real-world planning problems. The resultant planner also outperforms most
state-of-the-art temporal-numeric and hybrid planners, in terms of both
coverage and scalability.
| [
{
"version": "v1",
"created": "Fri, 21 May 2021 07:36:54 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 12:30:21 GMT"
}
] | 1,643,673,600,000 | [
[
"Bajada",
"Josef",
""
],
[
"Fox",
"Maria",
""
],
[
"Long",
"Derek",
""
]
] |
2105.10211 | Won Joon Yun | Won Joon Yun, Sungwon Yi, and Joongheon Kim | Multi-Agent Deep Reinforcement Learning using Attentive Graph Neural
Architectures for Real-Time Strategy Games | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In real-time strategy (RTS) game artificial intelligence research, various
multi-agent deep reinforcement learning (MADRL) algorithms are widely and
actively used nowadays. Most of the research is based on the StarCraft II
environment because it is one of the most well-known RTS games worldwide. Our
proposed MADRL-based algorithm fundamentally builds on a distributed MADRL method
called QMIX. In addition to QMIX-based distributed computation, we consider
state categorization which can reduce computational complexity significantly.
Furthermore, self-attention mechanisms are used for identifying the
relationship among agents in the form of graphs. Based on these approaches, we
propose a categorized state graph attention policy (CSGA-policy). As observed
in the performance evaluation of our proposed CSGA-policy with the most
well-known StarCraft II simulation environment, our proposed algorithm works
well in various settings, as expected.
| [
{
"version": "v1",
"created": "Fri, 21 May 2021 09:05:25 GMT"
}
] | 1,621,814,400,000 | [
[
"Yun",
"Won Joon",
""
],
[
"Yi",
"Sungwon",
""
],
[
"Kim",
"Joongheon",
""
]
] |
2105.10830 | Blai Bonet | Ivan D. Rodriguez, Blai Bonet, Javier Romero, Hector Geffner | Learning First-Order Representations for Planning from Black-Box States:
New Results | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently Bonet and Geffner have shown that first-order representations for
planning domains can be learned from the structure of the state space without
any prior knowledge about the action schemas or domain predicates. For this,
the learning problem is formulated as the search for a simplest first-order
domain description D that along with information about instances I_i (number of
objects and initial state) determine state space graphs G(P_i) that match the
observed state graphs G_i where P_i = (D, I_i). The search is cast and solved
approximately by means of a SAT solver that is called over a large family of
propositional theories that differ just in the parameters encoding the possible
number of action schemas and domain predicates, their arities, and the number
of objects. In this work, we push the limits of these learners by moving to an
answer set programming (ASP) encoding using the CLINGO system. The new
encodings are more transparent and concise, extending the range of possible
models while facilitating their exploration. We show that the domains
introduced by Bonet and Geffner can be solved more efficiently in the new
approach, often optimally, and furthermore, that the approach can be easily
extended to handle partial information about the state graphs as well as noise
that prevents some states from being distinguished.
| [
{
"version": "v1",
"created": "Sun, 23 May 2021 00:08:42 GMT"
}
] | 1,621,900,800,000 | [
[
"Rodriguez",
"Ivan D.",
""
],
[
"Bonet",
"Blai",
""
],
[
"Romero",
"Javier",
""
],
[
"Geffner",
"Hector",
""
]
] |
2105.10950 | Konstantin Sidorov | Konstantin Sidorov, Alexander Morozov | A review of approaches to modeling applied vehicle routing problems | 16 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Due to the practical importance of vehicle routing problems (VRP), there
exists an ever-growing body of research in algorithms and (meta)heuristics for
solving such problems. However, the diversity of VRP domains creates the
separate problem of modeling such problems -- describing the domain entities
(and, in particular, the planning decisions), the set of valid planning
decisions, and the preferences between different plans. In this paper, we
review the approaches for modeling vehicle routing problems. To make the
comparison more straightforward, we formulate several criteria for evaluating
modeling methods reflecting the practical requirements of the development of
optimization algorithms for such problems. Finally, as a result of this
comparison, we discuss several future research avenues in the field of modeling
VRP domains.
| [
{
"version": "v1",
"created": "Sun, 23 May 2021 14:50:14 GMT"
}
] | 1,621,900,800,000 | [
[
"Sidorov",
"Konstantin",
""
],
[
"Morozov",
"Alexander",
""
]
] |
2105.11071 | Fangfang Liu | Fangfang Liu and Jia-huai You | Alternating Fixpoint Operator for Hybrid MKNF Knowledge Bases as an
Approximator of AFT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Approximation fixpoint theory (AFT) provides an algebraic framework for the
study of fixpoints of operators on bilattices and has found its applications in
characterizing semantics for various classes of logic programs and nonmonotonic
languages. In this paper, we show one more application of this kind: the
alternating fixpoint operator by Knorr et al. for the study of the well-founded
semantics for hybrid MKNF knowledge bases is in fact an approximator of AFT in
disguise, which, thanks to the power of abstraction of AFT, characterizes not
only the well-founded semantics but also two-valued as well as three-valued
semantics for hybrid MKNF knowledge bases. Furthermore, we show an improved
approximator for these knowledge bases, of which the least stable fixpoint is
information richer than the one formulated from Knorr et al.'s construction.
This leads to an improved computation for the well-founded semantics. This work
is built on an extension of AFT that supports consistent as well as
inconsistent pairs in the induced product bilattice, to deal with
inconsistencies that arise in the context of hybrid MKNF knowledge bases. This
part of the work can be considered a generalization of the original AFT from symmetric
approximators to arbitrary approximators.
| [
{
"version": "v1",
"created": "Mon, 24 May 2021 02:32:51 GMT"
},
{
"version": "v2",
"created": "Fri, 28 May 2021 11:46:41 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jul 2021 05:40:51 GMT"
}
] | 1,625,788,800,000 | [
[
"Liu",
"Fangfang",
""
],
[
"You",
"Jia-huai",
""
]
] |
2105.11132 | Durgesh Agrawal | Durgesh Agrawal and Yash Pote and Kuldeep S Meel | Partition Function Estimation: A Quantitative Study | 10 pages, 3 figures, 2 tables, to be published in IJCAI-21 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic graphical models have emerged as a powerful modeling tool for
several real-world scenarios where one needs to reason under uncertainty. A
graphical model's partition function is a central quantity of interest, and its
computation is key to several probabilistic reasoning tasks. Given the
#P-hardness of computing the partition function, several techniques have been
proposed over the years with varying guarantees on the quality of estimates and
their runtime behavior. This paper seeks to present a survey of 18 techniques
and a rigorous empirical study of their behavior across an extensive set of
benchmarks. Our empirical study draws up a surprising observation: exact
techniques are as efficient as the approximate ones, and therefore, we conclude
with an optimistic view of opportunities for the design of approximate
techniques with enhanced scalability. Motivated by the observation of an order
of magnitude difference between the Virtual Best Solver and the best performing
tool, we envision an exciting line of research focused on the development of
portfolio solvers.
| [
{
"version": "v1",
"created": "Mon, 24 May 2021 07:25:43 GMT"
}
] | 1,621,900,800,000 | [
[
"Agrawal",
"Durgesh",
""
],
[
"Pote",
"Yash",
""
],
[
"Meel",
"Kuldeep S",
""
]
] |
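The abstract of 2105.11132 above surveys estimators of a graphical model's partition function. The sketch below only shows what is being estimated, computing Z exactly by brute force for a tiny pairwise binary model with invented potentials; real benchmarks are far too large for this, which is exactly why approximate techniques are studied.

# Minimal sketch: the partition function Z is the sum over all assignments of
# the product of factors. Brute-force exact computation for a tiny model.
import itertools
import math

# pairwise log-potentials theta_ij * x_i * x_j, with x in {-1, +1}
edges = {(0, 1): 0.8, (1, 2): -0.5, (2, 3): 0.3, (0, 3): 0.6}
n_vars = 4

def unnormalized_weight(assignment):
    energy = sum(theta * assignment[i] * assignment[j] for (i, j), theta in edges.items())
    return math.exp(energy)

Z = sum(unnormalized_weight(x) for x in itertools.product([-1, 1], repeat=n_vars))
print("exact partition function Z =", round(Z, 4))
print("log Z =", round(math.log(Z), 4))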
2105.11266 | Kristijonas \v{C}yras | Kristijonas \v{C}yras, Antonio Rago, Emanuele Albini, Pietro Baroni,
Francesca Toni | Argumentative XAI: A Survey | IJCAI 2021 Survey Track preprint | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable AI (XAI) has been investigated for decades and, together with AI
itself, has witnessed unprecedented growth in recent years. Among various
approaches to XAI, argumentative models have been advocated in both the AI and
social science literature, as their dialectical nature appears to match some
basic desirable features of the explanation activity. In this survey we
overview XAI approaches built using methods from the field of computational
argumentation, leveraging its wide array of reasoning abstractions and
explanation delivery methods. We overview the literature focusing on different
types of explanation (intrinsic and post-hoc), different models with which
argumentation-based explanations are deployed, different forms of delivery, and
different argumentation frameworks they use. We also lay out a roadmap for
future work.
| [
{
"version": "v1",
"created": "Mon, 24 May 2021 13:32:59 GMT"
}
] | 1,621,900,800,000 | [
[
"Čyras",
"Kristijonas",
""
],
[
"Rago",
"Antonio",
""
],
[
"Albini",
"Emanuele",
""
],
[
"Baroni",
"Pietro",
""
],
[
"Toni",
"Francesca",
""
]
] |
2105.11308 | Henderik Alex Proper | H. A. Proper and Th. P. van der Weide | A General Theory for the Evolution of Application Models -- Full version | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this article we focus on evolving information systems. First a
delimitation of the concept of evolution is provided, resulting in a first
attempt at a general theory for such evolutions. The theory makes a distinction
between the underlying information structure at the conceptual level, its
evolution on the one hand, and the description and semantics of operations on
the information structure and its population on the other hand. Main issues
within this theory are object typing, type relatedness and identification of
objects. In terms of these concepts, we propose some axioms on the
well-formedness of evolution. In this general theory, the underlying data model
is a parameter, making the theory applicable for a wide range of modelling
techniques, including object-role modelling and object oriented techniques.
| [
{
"version": "v1",
"created": "Tue, 18 May 2021 17:24:35 GMT"
}
] | 1,621,900,800,000 | [
[
"Proper",
"H. A.",
""
],
[
"van der Weide",
"Th. P.",
""
]
] |
2105.11545 | Nasim Baharisangari | Nasim Baharisangari, Jean-Rapha\"el Gaglione, Daniel Neider, Ufuk
Topcu, Zhe Xu | Uncertainty-Aware Signal Temporal Logic Inference | 11 pages, 7 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal logic inference is the process of extracting formal descriptions of
system behaviors from data in the form of temporal logic formulas. The existing
temporal logic inference methods mostly neglect uncertainties in the data,
which results in limited applicability of such methods in real-world
deployments. In this paper, we first investigate the uncertainties associated
with trajectories of a system and represent such uncertainties in the form of
interval trajectories. We then propose two uncertainty-aware signal temporal
logic (STL) inference approaches to classify the undesired behaviors and
desired behaviors of a system. Instead of classifying finitely many
trajectories, we classify infinitely many trajectories within the interval
trajectories. In the first approach, we incorporate robust semantics of STL
formulas with respect to an interval trajectory to quantify the margin at which
an STL formula is satisfied or violated by the interval trajectory. The second
approach relies on the first learning algorithm and exploits the decision tree
to infer STL formulas to classify behaviors of a given system. The proposed
approaches also work for non-separable data by optimizing the worst-case
robustness in inferring an STL formula. Finally, we evaluate the performance of
the proposed algorithms in two case studies, where the proposed algorithms show
reductions in the computation time by up to four orders of magnitude in
comparison with the sampling-based baseline algorithms (for a dataset with 800
sampled trajectories in total).
| [
{
"version": "v1",
"created": "Mon, 24 May 2021 21:26:57 GMT"
},
{
"version": "v2",
"created": "Sun, 30 May 2021 17:02:42 GMT"
}
] | 1,622,505,600,000 | [
[
"Baharisangari",
"Nasim",
""
],
[
"Gaglione",
"Jean-Raphaël",
""
],
[
"Neider",
"Daniel",
""
],
[
"Topcu",
"Ufuk",
""
],
[
"Xu",
"Zhe",
""
]
] |
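The abstract of 2105.11545 above uses robust STL semantics over interval trajectories. The sketch below evaluates a single hand-picked formula, "globally (x < c)", on an invented interval trajectory to show how worst-case and best-case robustness arise from the bounds; it is not the paper's decision-tree inference algorithm.

# Minimal sketch of robust semantics for G(x < c) on an interval trajectory
# [lo_t, hi_t]: worst-case robustness uses the upper bounds, best-case uses the
# lower bounds. Numbers are invented.

def robustness_interval_globally_lt(interval_traj, c):
    """Return (worst_case, best_case) robustness of G(x < c)."""
    worst = min(c - hi for (_, hi) in interval_traj)   # every x_t at its largest
    best = min(c - lo for (lo, _) in interval_traj)    # every x_t at its smallest
    return worst, best

interval_traj = [(0.1, 0.4), (0.2, 0.7), (0.3, 0.5)]   # per-step [lower, upper] bounds
worst, best = robustness_interval_globally_lt(interval_traj, c=0.8)
print(f"worst-case robustness: {worst:.2f}, best-case robustness: {best:.2f}")
# worst-case > 0 means every trajectory inside the intervals satisfies the formula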
2105.11864 | Timo Bertram | Timo Bertram, Johannes F\"urnkranz, Martin M\"uller | Predicting Human Card Selection in Magic: The Gathering with Contextual
Preference Ranking | IEEE Conference on Games 2021 version | 3rd IEEE Conference on Games 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Drafting, i.e., the selection of a subset of items from a larger candidate
set, is a key element of many games and related problems. It encompasses team
formation in sports or e-sports, as well as deck selection in many modern card
games. The key difficulty of drafting is that it is typically not sufficient to
simply evaluate each item in a vacuum and to select the best items. The
evaluation of an item depends on the context of the set of items that were
already selected earlier, as the value of a set is not just the sum of the
values of its members - it must include a notion of how well items go together.
In this paper, we study drafting in the context of the card game Magic: The
Gathering. We propose the use of a contextual preference network, which learns
to compare two possible extensions of a given deck of cards. We demonstrate
that the resulting network is better able to evaluate card decks in this game
than previous attempts.
| [
{
"version": "v1",
"created": "Tue, 25 May 2021 12:07:27 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Jun 2021 10:15:47 GMT"
}
] | 1,629,417,600,000 | [
[
"Bertram",
"Timo",
""
],
[
"Fürnkranz",
"Johannes",
""
],
[
"Müller",
"Martin",
""
]
] |
2105.12037 | Arianna Casanova | Juerg Kohlas, Arianna Casanova, Marco Zaffalon | Information algebras of coherent sets of gambles in general possibility
spaces | Accepted at ISIPTA 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we show that coherent sets of gambles can be embedded into the
algebraic structure of information algebra. This leads firstly, to a new
perspective of the algebraic and logical structure of desirability and
secondly, it connects desirability, hence imprecise probabilities, to other
formalism in computer science sharing the same underlying structure. Both the
domain-free and the labeled view of the information algebra of coherent sets of
gambles are presented, considering general possibility spaces.
| [
{
"version": "v1",
"created": "Tue, 25 May 2021 16:18:39 GMT"
}
] | 1,621,987,200,000 | [
[
"Kohlas",
"Juerg",
""
],
[
"Casanova",
"Arianna",
""
],
[
"Zaffalon",
"Marco",
""
]
] |
2105.12205 | Alessandro Antonucci | Alessandro Antonucci and Francesca Mangili and Claudio Bonesana and
Giorgia Adorni | A New Score for Adaptive Tests in Bayesian and Credal Networks | null | Vejnarov\'a J., Wilson N. (eds) Symbolic and Quantitative
Approaches to Reasoning with Uncertainty. ECSQARU 2021. Lecture Notes in
Computer Science, vol 12897. Springer, Cham | 10.1007/978-3-030-86772-0_29 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A test is adaptive when its sequence and number of questions is dynamically
tuned on the basis of the estimated skills of the taker. Graphical models, such
as Bayesian networks, are used for adaptive tests as they allow one to model the
uncertainty about the questions and the skills in an explainable fashion,
especially when coping with multiple skills. A better elicitation of the
uncertainty in the question/skills relations can be achieved by interval
probabilities. This turns the model into a credal network, thus increasing
the inferential complexity of the queries required to select
questions. This is especially the case for the information theoretic quantities
used as scores to drive the adaptive mechanism. We present an alternative
family of scores, based on the mode of the posterior probabilities, and hence
easier to explain. This makes considerably simpler the evaluation in the credal
case, without significantly affecting the quality of the adaptive process.
Numerical tests on synthetic and real-world data are used to support this
claim.
| [
{
"version": "v1",
"created": "Tue, 25 May 2021 20:35:42 GMT"
}
] | 1,632,873,600,000 | [
[
"Antonucci",
"Alessandro",
""
],
[
"Mangili",
"Francesca",
""
],
[
"Bonesana",
"Claudio",
""
],
[
"Adorni",
"Giorgia",
""
]
] |
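The abstract of 2105.12205 above proposes scores based on the mode of the posterior probabilities for adaptive question selection. The sketch below is only a toy illustration of that idea for a single binary skill with invented likelihoods; the paper's models are Bayesian and credal networks over multiple skills.

# Minimal sketch of a mode-based question-selection score: for each candidate
# question, compute the expected (over the predicted answer) mode of the
# posterior over a binary skill, and ask the question with the highest score.

prior = {"skilled": 0.6, "unskilled": 0.4}
# P(correct answer | skill state) for each candidate question (invented)
questions = {
    "q_easy": {"skilled": 0.95, "unskilled": 0.70},
    "q_hard": {"skilled": 0.80, "unskilled": 0.20},
}

def posterior(pri, likelihood, correct):
    unnorm = {s: pri[s] * (likelihood[s] if correct else 1 - likelihood[s]) for s in pri}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

def expected_posterior_mode(pri, likelihood):
    score = 0.0
    for correct in (True, False):
        p_answer = sum(pri[s] * (likelihood[s] if correct else 1 - likelihood[s]) for s in pri)
        post = posterior(pri, likelihood, correct)
        score += p_answer * max(post.values())      # mode of the posterior
    return score

scores = {q: expected_posterior_mode(prior, lik) for q, lik in questions.items()}
print("scores:", {q: round(v, 3) for q, v in scores.items()})
print("next question:", max(scores, key=scores.get))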
2105.12328 | Huale Li | Huale Li, Xuan Wang, Zengyue Guo, Jiajia Zhang, Shuhan Qi | D2CFR: Minimize Counterfactual Regret with Deep Dueling Neural Network | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counterfactual Regret Minimization (CFR) is a popular method for finding
approximate Nash equilibrium in two-player zero-sum games with imperfect
information. CFR solves games by traversing the full game tree iteratively,
which limits its scalability in larger games. Previously, when applying CFR to
solve large-scale games, the large-scale game is first abstracted into a
small-scale game. Secondly, CFR is used to solve the abstract game.
And finally, the solution strategy is mapped back to the original large-scale
game. However, this process requires considerable expert knowledge, and the
accuracy of abstraction is closely related to expert knowledge. In addition,
the abstraction also loses certain information, which will eventually affect
the accuracy of the solution strategy. Towards this problem, a recent method,
\textit{Deep CFR} alleviates the need for abstraction and expert knowledge by
applying deep neural networks directly to CFR in full games. In this paper, we
introduce \textit{Neural Network Counterfactual Regret Minimization (NNCFR)},
an improved variant of \textit{Deep CFR} that has faster convergence by
constructing a dueling network as the value network. Moreover, an evaluation
module is designed by combining the value network and Monte Carlo, which
reduces the approximation error of the value network. In addition, a new loss
function is designed for training the policy network in the
proposed \textit{NNCFR}, which helps make the policy network more
stable. Extensive experimental tests are conducted to show that
\textit{NNCFR} converges faster and performs more stably than \textit{Deep
CFR}, and outperforms \textit{Deep CFR} with respect to exploitability and
head-to-head performance on test games.
| [
{
"version": "v1",
"created": "Wed, 26 May 2021 04:58:36 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jan 2022 08:43:22 GMT"
}
] | 1,641,254,400,000 | [
[
"Li",
"Huale",
""
],
[
"Wang",
"Xuan",
""
],
[
"Guo",
"Zengyue",
""
],
[
"Zhang",
"Jiajia",
""
],
[
"Qi",
"Shuhan",
""
]
] |
2105.12552 | Eduard Torres | Carlos Ans\'otegui, Felip Many\`a, Jesus Ojeda, Josep M. Salvia,
Eduard Torres | Incomplete MaxSAT Approaches for Combinatorial Testing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a Satisfiability (SAT)-based approach for building Mixed Covering
Arrays with Constraints of minimum length, referred to as the Covering Array
Number problem. This problem is central in Combinatorial Testing for the
detection of system failures. In particular, we show how to apply Maximum
Satisfiability (MaxSAT) technology by describing efficient encodings for
different classes of complete and incomplete MaxSAT solvers to compute optimal
and suboptimal solutions, respectively. Similarly, we show how to solve through
MaxSAT technology a closely related problem, the Tuple Number problem, which we
extend to incorporate constraints. For this problem, we additionally provide a
new MaxSAT-based incomplete algorithm. The extensive experimental evaluation we
carry out on the available Mixed Covering Arrays with Constraints benchmarks
and the comparison with state-of-the-art tools confirm the good performance of
our approaches.
| [
{
"version": "v1",
"created": "Wed, 26 May 2021 14:00:56 GMT"
}
] | 1,622,073,600,000 | [
[
"Ansótegui",
"Carlos",
""
],
[
"Manyà",
"Felip",
""
],
[
"Ojeda",
"Jesus",
""
],
[
"Salvia",
"Josep M.",
""
],
[
"Torres",
"Eduard",
""
]
] |
2105.12846 | Matthew Stephenson | Matthew Stephenson, Dennis J. N. J. Soemers, Eric Piette, Cameron
Browne | General Game Heuristic Prediction Based on Ludeme Descriptions | 4 pages, 1 figure, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the performance of different general-game-playing
heuristics for games in the Ludii general game system. Based on these results,
we train several regression learning models to predict the performance of these
heuristics based on each game's description file. We also provide a condensed
analysis of the games available in Ludii, and the different ludemes that define
them.
| [
{
"version": "v1",
"created": "Wed, 26 May 2021 21:17:47 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jul 2021 07:16:24 GMT"
}
] | 1,625,529,600,000 | [
[
"Stephenson",
"Matthew",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Piette",
"Eric",
""
],
[
"Browne",
"Cameron",
""
]
] |
2105.12899 | Xijun Li | Xijun Li, Weilin Luo, Mingxuan Yuan, Jun Wang, Jiawen Lu, Jie Wang,
Jinhu Lu and Jia Zeng | Learning to Optimize Industry-Scale Dynamic Pickup and Delivery Problems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Dynamic Pickup and Delivery Problem (DPDP) is aimed at dynamically
scheduling vehicles among multiple sites in order to minimize the cost when
delivery orders are not known a priori. Although DPDP plays an important role
in modern logistics and supply chain management, state-of-the-art DPDP
algorithms are still limited on their solution quality and efficiency. In
practice, they fail to provide a scalable solution as the numbers of vehicles
and sites become large. In this paper, we propose a data-driven approach,
Spatial-Temporal Aided Double Deep Graph Network (ST-DDGN), to solve
industry-scale DPDP. In our method, the delivery demands are first forecast
using a spatial-temporal prediction method, which guides the neural network to
perceive spatial-temporal distribution of delivery demand when dispatching
vehicles. Besides, the relationships of individuals such as vehicles are
modelled by establishing a graph-based value function. ST-DDGN incorporates
attention-based graph embedding with Double DQN (DDQN). As such, it can make
the inference across vehicles more efficiently compared with traditional
methods. Our method is entirely data driven and thus adaptive, i.e., the
relational representation of adjacent vehicles can be learned and corrected by
ST-DDGN from data periodically. We have conducted extensive experiments over
real-world data to evaluate our solution. The results show that ST-DDGN reduces
the number of used vehicles by 11.27% and decreases the total transportation
cost by 13.12% on average over the strong baselines, including the heuristic algorithm
deployed in our UAT (User Acceptance Test) environment and a variety of vanilla
DRL methods. We plan to fully deploy our solution into our online logistics
system, and it is estimated that millions of USD in logistics cost can be saved per
year.
| [
{
"version": "v1",
"created": "Thu, 27 May 2021 01:16:00 GMT"
}
] | 1,622,160,000,000 | [
[
"Li",
"Xijun",
""
],
[
"Luo",
"Weilin",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Wang",
"Jun",
""
],
[
"Lu",
"Jiawen",
""
],
[
"Wang",
"Jie",
""
],
[
"Lu",
"Jinhu",
""
],
[
"Zeng",
"Jia",
""
]
] |
2105.12908 | Masood Feyzbakhsh Rankooh | Masood Feyzbakhsh Rankooh, Jussi Rintanen | Propositional Encodings of Acyclicity and Reachability by using Vertex
Elimination | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce novel methods for encoding acyclicity and s-t-reachability
constraints for propositional formulas with underlying directed graphs. They
are based on vertex elimination graphs, which makes them suitable for cases
where the underlying graph is sparse. In contrast to solvers with ad hoc
constraint propagators for acyclicity and reachability constraints such as
GraphSAT, our methods encode these constraints as standard propositional
clauses, making them directly applicable with any SAT solver. An empirical
study demonstrates that our methods together with an efficient SAT solver can
outperform both earlier encodings of these constraints as well as GraphSAT,
particularly when underlying graphs are sparse.
| [
{
"version": "v1",
"created": "Thu, 27 May 2021 01:57:53 GMT"
}
] | 1,622,160,000,000 | [
[
"Rankooh",
"Masood Feyzbakhsh",
""
],
[
"Rintanen",
"Jussi",
""
]
] |
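The abstract of 2105.12908 above bases its encodings on vertex elimination graphs. The sketch below is a rough illustration of the underlying directed elimination step, in which fill-in arcs from remaining predecessors to remaining successors preserve reachability when a vertex is removed; the example graph and the simple min-degree order are invented, and this is not the paper's propositional encoding.

# Minimal sketch of directed vertex elimination with fill-in arcs.

def eliminate(vertices, arcs):
    """Eliminate all vertices (min-degree first) and return the fill-in arcs added."""
    arcs = set(arcs)
    remaining = set(vertices)
    fill_in = set()
    while remaining:
        # pick the remaining vertex with the fewest incident arcs (ties by name)
        v = min(remaining, key=lambda x: (sum(1 for (a, b) in arcs if x in (a, b)), x))
        preds = {a for (a, b) in arcs if b == v and a in remaining and a != v}
        succs = {b for (a, b) in arcs if a == v and b in remaining and b != v}
        for u in preds:
            for w in succs:
                if u != w and (u, w) not in arcs:
                    arcs.add((u, w))
                    fill_in.add((u, w))
        remaining.remove(v)
        arcs = {(a, b) for (a, b) in arcs if v not in (a, b)}
    return fill_in

vertices = ["a", "b", "c", "d"]
arcs = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "b")]
print("fill-in arcs:", eliminate(vertices, arcs))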
2105.13155 | Jan Niklas Adams | Jan Niklas Adams, Sebastiaan J. van Zelst, Lara Quack, Kathrin
Hausmann, Wil M.P. van der Aalst, and Thomas Rose | A Framework for Explainable Concept Drift Detection in Process Mining | null | null | 10.1007/978-3-030-85469-0_25 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Rapidly changing business environments expose companies to high levels of
uncertainty. This uncertainty manifests itself in significant changes that tend
to occur over the lifetime of a process and possibly affect its performance. It
is important to understand the root causes of such changes since this allows us
to react to change or anticipate future changes. Research in process mining has
so far only focused on detecting, locating and characterizing significant
changes in a process and not on finding root causes of such changes. In this
paper, we aim to close this gap. We propose a framework that adds an
explainability level onto concept drift detection in process mining and
provides insights into the cause-effect relationships behind significant
changes. We define different perspectives of a process, detect concept drifts
in these perspectives and plug the perspectives into a causality check that
determines whether these concept drifts can be causal to each other. We
showcase the effectiveness of our framework by evaluating it on both synthetic
and real event data. Our experiments show that our approach unravels
cause-effect relationships and provides novel insights into executed processes.
| [
{
"version": "v1",
"created": "Thu, 27 May 2021 14:03:19 GMT"
}
] | 1,631,577,600,000 | [
[
"Adams",
"Jan Niklas",
""
],
[
"van Zelst",
"Sebastiaan J.",
""
],
[
"Quack",
"Lara",
""
],
[
"Hausmann",
"Kathrin",
""
],
[
"van der Aalst",
"Wil M. P.",
""
],
[
"Rose",
"Thomas",
""
]
] |
2105.13700 | Mikolas Janota | Mikol\'a\v{s} Janota and Haniel Barbosa and Pascal Fontaine and Andrew
Reynolds | Fair and Adventurous Enumeration of Quantifier Instantiations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | SMT solvers generally tackle quantifiers by instantiating their variables
with tuples of terms from the ground part of the formula. Recent enumerative
approaches for quantifier instantiation consider tuples of terms in some
heuristic order. This paper studies different strategies to order such tuples
and their impact on performance. We decouple the ordering problem into two
parts. First is the order of the sequence of terms to consider for each
quantified variable, and second is the order of the instantiation tuples
themselves. While the most and least preferred tuples, i.e. those with all
variables assigned to the most or least preferred terms, are clear, the
combinations in between allow flexibility in an implementation. We look at
principled strategies of complete enumeration, where some strategies are more
fair, meaning they treat all the variables the same but some strategies may be
more adventurous, meaning that they may venture further down the preference
list. We further describe new techniques for discarding irrelevant
instantiations which are crucial for the performance of these strategies in
practice. These strategies are implemented in the SMT solver cvc5, where they
contribute to the diversification of the solver's configuration space, as shown
by our experimental results.
| [
{
"version": "v1",
"created": "Fri, 28 May 2021 09:51:47 GMT"
}
] | 1,622,419,200,000 | [
[
"Janota",
"Mikoláš",
""
],
[
"Barbosa",
"Haniel",
""
],
[
"Fontaine",
"Pascal",
""
],
[
"Reynolds",
"Andrew",
""
]
] |
2105.14149 | Nandini Ramanan | Charanraj Thimmisetty, Praveen Tiwari, Didac Gil de la Iglesia,
Nandini Ramanan, Marjorie Sayer, Viswesh Ananthakrishnan, and Claudionor
Nunes Coelho Jr | Log2NS: Enhancing Deep Learning Based Analysis of Logs With Formal to
Prevent Survivorship Bias | 10 pages, 5 tables, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Analysis of large observational data sets generated by a reactive system is a
common challenge in debugging system failures and determining their root cause.
One of the major problems is that these observational data suffer from
survivorship bias. Examples include analyzing traffic logs from networks, and
simulation logs from circuit design. In such applications, users want to detect
non-spurious correlations from observational data and obtain actionable
insights about them. In this paper, we introduce log to Neuro-symbolic
(Log2NS), a framework that combines probabilistic analysis from machine
learning (ML) techniques on observational data with certainties derived from
symbolic reasoning on an underlying formal model. We apply the proposed
framework to network traffic debugging by employing the following steps. To
detect patterns in network logs, we first generate global embedding vector
representations of entities such as IP addresses, ports, and applications.
Next, we represent large log flow entries as clusters that make it easier for
the user to visualize and detect interesting scenarios that will be further
analyzed. To generalize these patterns, Log2NS provides an ability to query
from static logs and correlation engines for positive instances, as well as
formal reasoning for negative and unseen instances. By combining the strengths
of deep learning and symbolic methods, Log2NS provides a very powerful
reasoning and debugging tool for log-based data. Empirical evaluations on a
real internal data set demonstrate the capabilities of Log2NS.
| [
{
"version": "v1",
"created": "Sat, 29 May 2021 00:01:08 GMT"
}
] | 1,622,505,600,000 | [
[
"Thimmisetty",
"Charanraj",
""
],
[
"Tiwari",
"Praveen",
""
],
[
"de la Iglesia",
"Didac Gil",
""
],
[
"Ramanan",
"Nandini",
""
],
[
"Sayer",
"Marjorie",
""
],
[
"Ananthakrishnan",
"Viswesh",
""
],
[
"Coelho",
"Claudionor Nunes",
"Jr"
]
] |
2105.14212 | Danny Arlen De Jes\'us G\'omez-Ram\'irez | Danny A. J. Gomez-Ramirez, Egil Nordqvist | Towards a General Many-Sorted Framework for Describing Certain Kinds of
Legal Statutes with a Potential Computational Realization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Examining a 20th-century Scandinavian legal theoretical tradition, we can
extract an ontological naturalistic, a logical empiristic, and a modern
idealistic rationale. We introduce the mathematical syntactic figure present in
the `logical empiricism' in a contemporary mathematical logic. A new formal
framework for describing explicit purchase statutes (Sweden) is gradually
developed and subsequently proposed. This new framework is based on a
many-sorted first-order logic (MFOL) approach, where the semantics are grounded
in concrete `physical' objects and situations with a legal relevance.
Specifically, we present a concrete formal syntactic translation of one of the
central statutes of Swedish legislation for the purchase of immovable property.
Additionally, we discuss the potential implications that a subsequent
development of such formalisations would have for constructing artificial
agents (e.g., software) that can be used as `co-creative' legal assistance for
solving highly complex legal issues concerning the transfer of property, among
others.
| [
{
"version": "v1",
"created": "Sat, 29 May 2021 05:01:06 GMT"
}
] | 1,622,505,600,000 | [
[
"Gomez-Ramirez",
"Danny A. J.",
""
],
[
"Nordqvist",
"Egil",
""
]
] |
2105.14371 | Bahare Salmani | Bahare Salmani and Joost-Pieter Katoen | Fine-Tuning the Odds in Bayesian Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper proposes various new analysis techniques for Bayes networks in
which conditional probability tables (CPTs) may contain symbolic variables. The
key idea is to exploit scalable and powerful techniques for synthesis problems
in parametric Markov chains. Our techniques are applicable to arbitrarily many,
possibly dependent parameters that may occur in various CPTs. This lifts the
severe restrictions on parameters, e.g., by restricting the number of
parametrized CPTs to one or two, or by avoiding parameter dependencies between
several CPTs, in existing works for parametric Bayes networks (pBNs). We
describe how our techniques can be used for various pBN synthesis problems
studied in the literature such as computing sensitivity functions (and values),
simple and difference parameter tuning, ratio parameter tuning, and minimal
change tuning. Experiments on several benchmarks show that our prototypical
tool built on top of the probabilistic model checker Storm can handle several
hundreds of parameters.
| [
{
"version": "v1",
"created": "Sat, 29 May 2021 20:41:56 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jul 2021 11:27:00 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Aug 2022 19:31:09 GMT"
}
] | 1,660,694,400,000 | [
[
"Salmani",
"Bahare",
""
],
[
"Katoen",
"Joost-Pieter",
""
]
] |
2105.14373 | S\'ergio Barreto Junior | S\'ergio Barreto, Ricardo Moura, Jonnathan Carvalho, Aline Paes,
Alexandre Plastino | Sentiment analysis in tweets: an assessment study from classical to
modern text representation models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | With the growth of social medias, such as Twitter, plenty of user-generated
data emerge daily. The short texts published on Twitter -- the tweets -- have
earned significant attention as a rich source of information to guide many
decision-making processes. However, their inherent characteristics, such as the
informal, and noisy linguistic style, remain challenging to many natural
language processing (NLP) tasks, including sentiment analysis. Sentiment
classification is tackled mainly by machine learning-based classifiers. The
literature has adopted word representations from distinct natures to transform
tweets to vector-based inputs to feed sentiment classifiers. The
representations come from simple count-based methods, such as bag-of-words, to
more sophisticated ones, such as BERTweet, built upon the trendy BERT
architecture. Nevertheless, most studies mainly focus on evaluating those
models using only a small number of datasets. Despite the progress made in
recent years in language modelling, there is still a gap regarding a robust
evaluation of induced embeddings applied to sentiment analysis on tweets.
Furthermore, while fine-tuning the model from downstream tasks is prominent
nowadays, less attention has been given to adjustments based on the specific
linguistic style of the data. In this context, this study fulfils an assessment
of existing language models in distinguishing the sentiment expressed in tweets
by using a rich collection of 22 datasets from distinct domains and five
classification algorithms. The evaluation includes static and contextualized
representations. Contexts are assembled from Transformer-based autoencoder
models that are also fine-tuned based on the masked language model task, using
a plethora of strategies.
| [
{
"version": "v1",
"created": "Sat, 29 May 2021 21:05:28 GMT"
}
] | 1,622,505,600,000 | [
[
"Barreto",
"Sérgio",
""
],
[
"Moura",
"Ricardo",
""
],
[
"Carvalho",
"Jonnathan",
""
],
[
"Paes",
"Aline",
""
],
[
"Plastino",
"Alexandre",
""
]
] |
2105.14517 | Jiaqi Chen | Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu,
Eric P. Xing, Liang Lin | GeoQA: A Geometric Question Answering Benchmark Towards Multimodal
Numerical Reasoning | Accepted to Findings of ACL 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic math problem solving has recently attracted increasing attention as
a long-standing AI benchmark. In this paper, we focus on solving geometric
problems, which requires a comprehensive understanding of textual descriptions,
visual diagrams, and theorem knowledge. However, the existing methods were
highly dependent on handcraft rules and were merely evaluated on small-scale
datasets. Therefore, we propose a Geometric Question Answering dataset GeoQA,
containing 4,998 geometric problems with corresponding annotated programs,
which illustrate the solving process of the given problems. Compared with
another publicly available dataset GeoS, GeoQA is 25 times larger, in which the
program annotations can provide a practical testbed for future research on
explicit and explainable numerical reasoning. Moreover, we introduce a Neural
Geometric Solver (NGS) to address geometric problems by comprehensively parsing
multimodal information and generating interpretable programs. We further add
multiple self-supervised auxiliary tasks on NGS to enhance cross-modal semantic
representation. Extensive experiments on GeoQA validate the effectiveness of
our proposed NGS and auxiliary tasks. However, the results are still
significantly lower than human performance, which leaves large room for future
research. Our benchmark and code are released at
https://github.com/chen-judge/GeoQA .
| [
{
"version": "v1",
"created": "Sun, 30 May 2021 12:34:17 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jun 2021 02:53:03 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jan 2022 03:50:31 GMT"
}
] | 1,641,945,600,000 | [
[
"Chen",
"Jiaqi",
""
],
[
"Tang",
"Jianheng",
""
],
[
"Qin",
"Jinghui",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Liu",
"Lingbo",
""
],
[
"Xing",
"Eric P.",
""
],
[
"Lin",
"Liang",
""
]
] |
2105.14796 | Binbin Xie | Binbin Xie, Jinsong Su, Yubin Ge, Xiang Li, Jianwei Cui, Junfeng Yao
and Bin Wang | Improving Tree-Structured Decoder Training for Code Generation via
Mutual Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code generation aims to automatically generate a piece of code given an input
natural language utterance. Currently, among dominant models, it is treated as
a sequence-to-tree task, where a decoder outputs a sequence of actions
corresponding to the pre-order traversal of an Abstract Syntax Tree. However,
such a decoder only exploits the preorder traversal based preceding actions,
which are insufficient to ensure correct action predictions. In this paper, we
first thoroughly analyze the context modeling difference between neural code
generation models with different traversals based decodings (preorder traversal
vs breadth-first traversal), and then propose to introduce a mutual learning
framework to jointly train these models. Under this framework, we continuously
enhance both two models via mutual distillation, which involves synchronous
executions of two one-to-one knowledge transfers at each training step. More
specifically, we alternately choose one model as the student and the other as
its teacher, and require the student to fit the training data and the action
prediction distributions of its teacher. By doing so, both models can fully
absorb the knowledge from each other and thus could be improved simultaneously.
Experimental results and in-depth analysis on several benchmark datasets
demonstrate the effectiveness of our approach. We release our code at
https://github.com/DeepLearnXMU/CGML.
| [
{
"version": "v1",
"created": "Mon, 31 May 2021 08:44:13 GMT"
}
] | 1,622,505,600,000 | [
[
"Xie",
"Binbin",
""
],
[
"Su",
"Jinsong",
""
],
[
"Ge",
"Yubin",
""
],
[
"Li",
"Xiang",
""
],
[
"Cui",
"Jianwei",
""
],
[
"Yao",
"Junfeng",
""
],
[
"Wang",
"Bin",
""
]
] |
2105.14923 | Bestoun Ahmed Dr. | Kamal Z. Zamli, Md. Abdul Kader, Saiful Azad, Bestoun S. Ahmed | Hybrid Henry Gas Solubility Optimization Algorithm with Dynamic
Cluster-to-Algorithm Mapping for Search-based Software Engineering Problems | 31 pages | Neural Computing and Applications 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper discusses a new variant of the Henry Gas Solubility Optimization
(HGSO) Algorithm, called Hybrid HGSO (HHGSO). Unlike its predecessor, HHGSO
allows multiple clusters serving different individual meta-heuristic algorithms
(i.e., with its own defined parameters and local best) to coexist within the
same population. Exploiting the dynamic cluster-to-algorithm mapping via
penalized and reward model with adaptive switching factor, HHGSO offers a novel
approach for meta-heuristic hybridization consisting of Jaya Algorithm, Sooty
Tern Optimization Algorithm, Butterfly Optimization Algorithm, and Owl Search
Algorithm, respectively. The acquired results from the selected two case
studies (i.e., involving team formation problem and combinatorial test suite
generation) indicate that the hybridization has notably improved the
performance of HGSO and gives superior performance against other competing
meta-heuristic and hyper-heuristic algorithms.
| [
{
"version": "v1",
"created": "Mon, 31 May 2021 12:42:15 GMT"
}
] | 1,622,505,600,000 | [
[
"Zamli",
"Kamal Z.",
""
],
[
"Kader",
"Md. Abdul",
""
],
[
"Azad",
"Saiful",
""
],
[
"Ahmed",
"Bestoun S.",
""
]
] |
2105.15135 | Sajib Mistry | Sajib Mistry and Athman Bouguettaya | Reputation Bootstrapping for Composite Services using CP-nets | 14 Pages, accepted and to appear in IEEE Transactions on Services
Computing | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a novel framework to bootstrap the reputation of on-demand service
compositions. On-demand compositions are usually context-aware and have little
or no direct consumer feedback. The reputation bootstrapping of single or
atomic services does not consider the topology of the composition and
relationships among reputation-related factors. We apply Conditional Preference
Networks (CP-nets) of reputation-related factors for component services in a
composition. The reputation of a composite service is bootstrapped by the
composition of CP-nets. We consider the history of invocation among component
services to determine reputation-interdependence in a composition. The
composition rules are constructed using the composition topology and four types
of reputation-influence among component services. A heuristic-based Q-learning
approach is proposed to select the optimal set of reputation-related CP-nets.
Experimental results prove the efficiency of the proposed approach.
| [
{
"version": "v1",
"created": "Thu, 27 May 2021 02:51:23 GMT"
}
] | 1,622,505,600,000 | [
[
"Mistry",
"Sajib",
""
],
[
"Bouguettaya",
"Athman",
""
]
] |
2106.00133 | Maayan Shvo | Maayan Shvo, Zhiming Hu, Rodrigo Toro Icarte, Iqbal Mohomed, Allan
Jepson, Sheila A. McIlraith | AppBuddy: Learning to Accomplish Tasks in Mobile Apps via Reinforcement
Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human beings, even small children, quickly become adept at figuring out how
to use applications on their mobile devices. Learning to use a new app is often
achieved via trial-and-error, accelerated by transfer of knowledge from past
experiences with like apps. The prospect of building a smarter smartphone - one
that can learn how to achieve tasks using mobile apps - is tantalizing. In this
paper we explore the use of Reinforcement Learning (RL) with the goal of
advancing this aspiration. We introduce an RL-based framework for learning to
accomplish tasks in mobile apps. RL agents are provided with states derived
from the underlying representation of on-screen elements, and rewards that are
based on progress made in the task. Agents can interact with screen elements by
tapping or typing. Our experimental results, over a number of mobile apps, show
that RL agents can learn to accomplish multi-step tasks, as well as achieve
modest generalization across different apps. More generally, we develop a
platform which addresses several engineering challenges to enable an effective
RL training environment. Our AppBuddy platform is compatible with OpenAI Gym
and includes a suite of mobile apps and benchmark tasks that supports a
diversity of RL research in the mobile app setting.
| [
{
"version": "v1",
"created": "Mon, 31 May 2021 23:02:38 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Jun 2021 17:56:58 GMT"
}
] | 1,623,110,400,000 | [
[
"Shvo",
"Maayan",
""
],
[
"Hu",
"Zhiming",
""
],
[
"Icarte",
"Rodrigo Toro",
""
],
[
"Mohomed",
"Iqbal",
""
],
[
"Jepson",
"Allan",
""
],
[
"McIlraith",
"Sheila A.",
""
]
] |
2106.00258 | Qianyu Feng | Qianyu Feng, Bang Zhang, Yi Yang | Divide and Rule: Recurrent Partitioned Network for Dynamic Processes | arXiv admin note: text overlap with arXiv:2007.15240,
arXiv:2007.00631 by other authors | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In general, many dynamic processes are involved with interacting variables,
from physical systems to sociological analysis. The interplay of components in
the system can give rise to confounding dynamic behavior. Many approaches model
temporal sequences holistically, ignoring the internal interactions, and are thus
impotent in capturing the protogenic actuation. In contrast, our goal is to
represent a system with a part-whole hierarchy and discover the implied
dependencies among intra-system variables: inferring the interactions that
possess causal effects on the sub-system behavior with REcurrent partItioned
Network (REIN). The proposed architecture consists of (i) a perceptive module
that extracts a hierarchical and temporally consistent representation of the
observation at multiple levels, (ii) a deductive module for determining the
relational connection between neurons at each level, and (iii) a statistical
module that can predict the future by conditioning on the temporal
distributional estimation. Our model is demonstrated to be effective in
identifying the componential interactions with limited observation and stable
in long-term future predictions experimented with diverse physical systems.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 06:45:56 GMT"
}
] | 1,622,592,000,000 | [
[
"Feng",
"Qianyu",
""
],
[
"Zhang",
"Bang",
""
],
[
"Yang",
"Yi",
""
]
] |
2106.00263 | Mengfan Liu | Mengfan Liu, Pengyang Shao, Kun Zhang | Graph-based Exercise- and Knowledge-Aware Learning Network for Student
Performance Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting student performance is a fundamental task in Intelligent Tutoring
Systems (ITSs), by which we can learn about students' knowledge level and
provide personalized teaching strategies for them. Researchers have made plenty
of efforts on this task. They either leverage educational psychology methods to
predict students' scores according to the learned knowledge proficiency, or
make full use of Collaborative Filtering (CF) models to represent latent
factors of students and exercises. However, most of these methods either
neglect the exercise-specific characteristics (e.g., exercise materials), or
cannot fully explore the high-order interactions between students, exercises,
as well as knowledge concepts. To this end, we propose a Graph-based Exercise-
and Knowledge-Aware Learning Network for accurate student score prediction.
Specifically, we learn students' mastery of exercises and knowledge concepts
respectively to model the two-fold effects of exercises and knowledge concepts.
Then, to model the high-order interactions, we apply graph convolution
techniques in the prediction process. Extensive experiments on two real-world
datasets prove the effectiveness of our proposed Graph-EKLN.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 06:53:17 GMT"
}
] | 1,622,592,000,000 | [
[
"Liu",
"Mengfan",
""
],
[
"Shao",
"Pengyang",
""
],
[
"Zhang",
"Kun",
""
]
] |
2106.00266 | Oriol Corcoll | Oriol Corcoll, Youssef Mohamed, Raul Vicente | Did I do that? Blame as a means to identify controlled effects in
reinforcement learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Identifying controllable aspects of the environment has proven to be an
extraordinary intrinsic motivator to reinforcement learning agents. Despite
repeatedly achieving State-of-the-Art results, this approach has only been
studied as a proxy to a reward-based task and has not yet been evaluated on its
own. Current methods are based on action-prediction. Humans, on the other hand,
assign blame to their actions to decide what they controlled. This work
proposes Controlled Effect Network (CEN), an unsupervised method based on
counterfactual measures of blame to identify effects on the environment
controlled by the agent. CEN is evaluated in a wide range of environments
showing that it can accurately identify controlled effects. Moreover, we
demonstrate CEN's capabilities as intrinsic motivator by integrating it in the
state-of-the-art exploration method, achieving substantially better performance
than action-prediction models.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 06:58:31 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Oct 2021 08:06:25 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Feb 2022 08:00:30 GMT"
}
] | 1,645,142,400,000 | [
[
"Corcoll",
"Oriol",
""
],
[
"Mohamed",
"Youssef",
""
],
[
"Vicente",
"Raul",
""
]
] |
2106.00306 | Vasiliki Voukelatou | Vasiliki Voukelatou, Ioanna Miliou, Fosca Giannotti, Luca Pappalardo | Understanding peacefulness through the world news | 40 pages, 23 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Peacefulness is a principal dimension of well-being and is the way out of
inequity and violence. Thus, its measurement has drawn the attention of
researchers, policymakers, and peacekeepers. During the last years, novel
digital data streams have drastically changed the research in this field. The
current study exploits information extracted from a new digital database called
Global Data on Events, Location, and Tone (GDELT) to capture peacefulness
through the Global Peace Index (GPI). Applying predictive machine learning
models, we demonstrate that news media attention from GDELT can be used as a
proxy for measuring GPI at a monthly level. Additionally, we use explainable AI
techniques to obtain the most important variables that drive the predictions.
This analysis highlights each country's profile and provides explanations for
the predictions, and particularly for the errors and the events that drive
these errors. We believe that digital data exploited by researchers,
policymakers, and peacekeepers, with data science tools as powerful as machine
learning, could contribute to maximizing the societal benefits and minimizing
the risks to peacefulness.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 08:24:57 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Jun 2021 14:17:03 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Sep 2021 18:33:45 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Oct 2021 07:12:18 GMT"
}
] | 1,635,292,800,000 | [
[
"Voukelatou",
"Vasiliki",
""
],
[
"Miliou",
"Ioanna",
""
],
[
"Giannotti",
"Fosca",
""
],
[
"Pappalardo",
"Luca",
""
]
] |
2106.00327 | Zixuan Li | Zixuan Li, Xiaolong Jin, Saiping Guan, Wei Li, Jiafeng Guo, Yuanzhuo
Wang and Xueqi Cheng | Search from History and Reason for Future: Two-stage Reasoning on
Temporal Knowledge Graphs | ACL 2021 long paper (main conference) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal Knowledge Graphs (TKGs) have been developed and used in many
different areas. Reasoning on TKGs that predicts potential facts (events) in
the future brings great challenges to existing models. When facing a prediction
task, human beings usually search useful historical information (i.e., clues)
in their memories and then reason for future meticulously. Inspired by this
mechanism, we propose CluSTeR to predict future facts in a two-stage manner,
Clue Searching and Temporal Reasoning, accordingly. Specifically, at the clue
searching stage, CluSTeR learns a beam search policy via reinforcement learning
(RL) to induce multiple clues from historical facts. At the temporal reasoning
stage, it adopts a graph convolution network based sequence method to deduce
answers from clues. Experiments on four datasets demonstrate the substantial
advantages of CluSTeR compared with the state-of-the-art methods. Moreover, the
clues found by CluSTeR further provide interpretability for the results.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 09:01:22 GMT"
}
] | 1,622,592,000,000 | [
[
"Li",
"Zixuan",
""
],
[
"Jin",
"Xiaolong",
""
],
[
"Guan",
"Saiping",
""
],
[
"Li",
"Wei",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Wang",
"Yuanzhuo",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
2106.00390 | Laura Giordano | Laura Giordano | On the KLM properties of a fuzzy DL with Typicality | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper investigates the properties of a fuzzy logic of typicality. The
extension of fuzzy logic with a typicality operator was proposed in recent work
to define a fuzzy multipreference semantics for Multilayer Perceptrons, by
regarding the deep neural network as a conditional knowledge base. In this
paper, we study its properties. First, a monotonic extension of a fuzzy ALC
with typicality is considered (called ALC^FT) and a reformulation of the KLM
properties of a preferential consequence relation for this logic is devised.
Most of the properties are satisfied, depending on the reformulation and on the
fuzzy combination functions considered. We then strengthen ALC^FT with a
closure construction by introducing a notion of faithful model of a weighted
knowledge base, which generalizes the notion of coherent model of a conditional
knowledge base previously introduced, and we study its properties.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 10:57:46 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jul 2021 12:31:46 GMT"
}
] | 1,626,307,200,000 | [
[
"Giordano",
"Laura",
""
]
] |
2106.00517 | Zhou Tianze | Tianze Zhou, Fubiao Zhang, Kun Shao, Kai Li, Wenhan Huang, Jun Luo,
Weixun Wang, Yaodong Yang, Hangyu Mao, Bin Wang, Dong Li, Wulong Liu, Jianye
Hao | Cooperative Multi-Agent Transfer Learning with Level-Adaptive Credit
Assignment | 12 pages, 9 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Extending transfer learning to cooperative multi-agent reinforcement learning
(MARL) has recently received much attention. In contrast to the single-agent
setting, the coordination indispensable in cooperative MARL constrains each
agent's policy. However, existing transfer methods focus exclusively on agent
policy and ignores coordination knowledge. We propose a new architecture that
realizes robust coordination knowledge transfer through appropriate
decomposition of the overall coordination into several coordination patterns.
We use a novel mixing network named level-adaptive QTransformer
(LA-QTransformer) to realize agent coordination that considers credit
assignment, with appropriate coordination patterns for different agents
realized by a novel level-adaptive Transformer (LA-Transformer) dedicated to
the transfer of coordination knowledge. In addition, we use a novel agent
network named Population Invariant agent with Transformer (PIT) to realize the
coordination transfer in more varieties of scenarios. Extensive experiments in
StarCraft II micro-management show that LA-QTransformer together with PIT
achieves superior performance compared with state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 14:22:57 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Jun 2021 06:16:03 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Jun 2021 09:30:05 GMT"
}
] | 1,622,764,800,000 | [
[
"Zhou",
"Tianze",
""
],
[
"Zhang",
"Fubiao",
""
],
[
"Shao",
"Kun",
""
],
[
"Li",
"Kai",
""
],
[
"Huang",
"Wenhan",
""
],
[
"Luo",
"Jun",
""
],
[
"Wang",
"Weixun",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Mao",
"Hangyu",
""
],
[
"Wang",
"Bin",
""
],
[
"Li",
"Dong",
""
],
[
"Liu",
"Wulong",
""
],
[
"Hao",
"Jianye",
""
]
] |
2106.00538 | Laurens Arp | Laurens Arp | A Markov Reward Process-Based Approach to Spatial Interpolation | This is a Master Thesis for the Computer Science MSc programme at
Leiden University | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The interpolation of spatial data can be of tremendous value in various
applications, such as forecasting weather from only a few measurements of
meteorological or remote sensing data. Existing methods for spatial
interpolation, such as variants of kriging and spatial autoregressive models,
tend to suffer from at least one of the following limitations: (a) the
assumption of stationarity, (b) the assumption of isotropy, and (c) the
trade-off between modelling local or global spatial interaction. Addressing
these issues in this work, we propose the use of Markov reward processes (MRPs)
as a spatial interpolation method, and we introduce three variants thereof: (i)
a basic static discount MRP (SD-MRP), (ii) an accurate but mostly theoretical
optimised MRP (O-MRP), and (iii) a transferable weight prediction MRP (WP-MRP).
All variants of MRP interpolation operate locally, while also implicitly
accounting for global spatial relationships in the entire system through
recursion. Additionally, O-MRP and WP-MRP no longer assume stationarity and are
robust to anisotropy. We evaluated our proposed methods by comparing the mean
absolute errors of their interpolated grid cells to those of 7 common
baselines, selected from models based on spatial autocorrelation, (spatial)
regression, and deep learning.
We performed detailed evaluations on two publicly available datasets (local
GDP values, and COVID-19 patient trajectory data). The results from these
experiments clearly show the competitive advantage of MRP interpolation, which
achieved significantly lower errors than the existing methods in 23 out of 40
experimental conditions, or 35 out of 40 when including O-MRP.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 14:52:54 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jun 2021 11:13:33 GMT"
}
] | 1,623,801,600,000 | [
[
"Arp",
"Laurens",
""
]
] |
2106.00905 | Rakhmatulin Ildar | R. Ildar and E. Pomazov | Low-cost Stereovision system (disparity map) for few dollars | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | The paper presents an analysis of the latest developments in the field of
stereo vision in the low-cost segment, both for prototypes and for industrial
designs. We described the theory of stereo vision and presented information
about cameras and data transfer protocols and their compatibility with various
devices. The theory in the field of image processing for stereo vision
processes is considered and the calibration process is described in detail.
Ultimately, we presented the developed stereo vision system and provided the
main points that need to be considered when developing such systems. Finally,
we presented software for adjusting stereo vision parameters in real time in
the Python language on the Windows operating system.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 02:55:03 GMT"
}
] | 1,622,678,400,000 | [
[
"Ildar",
"R.",
""
],
[
"Pomazov",
"E.",
""
]
] |
2106.00978 | Tuan-Anh Nguyen Dang | Tuan-Anh D. Nguyen, Hieu M. Vu, Nguyen Hong Son, Minh-Tien Nguyen | A Span Extraction Approach for Information Extraction on Visually-Rich
Documents | Accepted to Document Images and Language Workshop at ICDAR 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Information extraction (IE) for visually-rich documents (VRDs) has achieved
SOTA performance recently thanks to the adaptation of Transformer-based
language models, which shows the great potential of pre-training methods. In
this paper, we present a new approach to improve the capability of language
model pre-training on VRDs. Firstly, we introduce a new query-based IE model
that employs span extraction instead of using the common sequence labeling
approach. Secondly, to further extend the span extraction formulation, we
propose a new training task that focuses on modelling the relationships among
semantic entities within a document. This task enables target spans to be
extracted recursively and can be used to pre-train the model or as an IE
downstream task. Evaluation on three datasets of popular business documents
(invoices, receipts) shows that our proposed method achieves significant
improvements compared to existing models. The method also provides a mechanism
for knowledge accumulation from multiple downstream IE tasks.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 06:50:04 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jul 2021 08:05:17 GMT"
}
] | 1,625,616,000,000 | [
[
"Nguyen",
"Tuan-Anh D.",
""
],
[
"Vu",
"Hieu M.",
""
],
[
"Son",
"Nguyen Hong",
""
],
[
"Nguyen",
"Minh-Tien",
""
]
] |
2106.00980 | Tuan-Anh Nguyen Dang | Tuan-Anh Nguyen Dang, Duc-Thanh Hoang, Quang-Bach Tran, Chih-Wei Pan,
Thanh-Dat Nguyen | End-to-End Hierarchical Relation Extraction for Generic Form
Understanding | Accepted to ICPR 2020 | 2020 25th International Conference on Pattern Recognition (ICPR) | 10.1109/ICPR48806.2021.9412778 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Form understanding is a challenging problem which aims to recognize semantic
entities from the input document and their hierarchical relations. Previous
approaches face significant difficulty dealing with the complexity of the task,
thus treat these objectives separately. To this end, we present a novel deep
neural network to jointly perform both entity detection and link prediction in
an end-to-end fashion. Our model extends the Multi-stage Attentional U-Net
architecture with the Part-Intensity Fields and Part-Association Fields for
link prediction, enriching the spatial information flow with the additional
supervision from entity linking. We demonstrate the effectiveness of the model
on the Form Understanding in Noisy Scanned Documents (FUNSD) dataset, where our
method substantially outperforms the original model and state-of-the-art
baselines in both Entity Labeling and Entity Linking task.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 06:51:35 GMT"
}
] | 1,622,678,400,000 | [
[
"Dang",
"Tuan-Anh Nguyen",
""
],
[
"Hoang",
"Duc-Thanh",
""
],
[
"Tran",
"Quang-Bach",
""
],
[
"Pan",
"Chih-Wei",
""
],
[
"Nguyen",
"Thanh-Dat",
""
]
] |
2106.00990 | Shih-Hung Tsai | Shih-hung Tsai, Chao-Chun Liang, Hsin-Min Wang, Keh-Yih Su | Sequence to General Tree: Knowledge-Guided Geometry Word Problem Solving | ACL2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the recent advancements in deep learning, neural solvers have gained
promising results in solving math word problems. However, these SOTA solvers
only generate binary expression trees that contain basic arithmetic operators
and do not explicitly use the math formulas. As a result, the expression trees
they produce are lengthy and uninterpretable because they need to use multiple
operators and constants to represent one single formula. In this paper, we
propose sequence-to-general tree (S2G) that learns to generate interpretable
and executable operation trees where the nodes can be formulas with an
arbitrary number of arguments. With nodes now allowed to be formulas, S2G can
learn to incorporate mathematical domain knowledge into problem-solving, making
the results more interpretable. Experiments show that S2G can achieve a better
performance against strong baselines on problems that require domain knowledge.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 07:15:06 GMT"
}
] | 1,622,678,400,000 | [
[
"Tsai",
"Shih-hung",
""
],
[
"Liang",
"Chao-Chun",
""
],
[
"Wang",
"Hsin-Min",
""
],
[
"Su",
"Keh-Yih",
""
]
] |
2106.01134 | Wei Liao | Wei Liao and Xiaohui Wei and Jizhou Lai | Smooth Q-learning: Accelerate Convergence of Q-learning Using Similarity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An improvement of Q-learning is proposed in this paper. It is different from
classic Q-learning in that the similarity between different states and actions
is considered in the proposed method. During the training, a new updating
mechanism is used, in which the Q values of similar state-action pairs are
updated synchronously. The proposed method can be used in combination with both
the tabular Q-learning function and deep Q-learning. The results of numerical
examples illustrate that, compared to classic Q-learning, the proposed
method has significantly better performance.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 13:05:24 GMT"
}
] | 1,622,678,400,000 | [
[
"Liao",
"Wei",
""
],
[
"Wei",
"Xiaohui",
""
],
[
"Lai",
"Jizhou",
""
]
] |
2106.01410 | Prasanna Sattigeri | Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri
Navratil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang | Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and
Communicating the Uncertainty of AI | Added references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe an open source Python toolkit named Uncertainty
Quantification 360 (UQ360) for the uncertainty quantification of AI models. The
goal of this toolkit is twofold: first, to provide a broad range of
capabilities to streamline as well as foster the common practices of
quantifying, evaluating, improving, and communicating uncertainty in the AI
application development lifecycle; second, to encourage further exploration of
UQ's connections to other pillars of trustworthy AI such as fairness and
transparency through the dissemination of latest research and education
materials. Beyond the Python package (\url{https://github.com/IBM/UQ360}), we
have developed an interactive experience (\url{http://uq360.mybluemix.net}) and
guidance materials as educational tools to aid researchers and developers in
producing and communicating high-quality uncertainties in an effective manner.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 18:29:04 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Jun 2021 01:08:35 GMT"
}
] | 1,623,024,000,000 | [
[
"Ghosh",
"Soumya",
""
],
[
"Liao",
"Q. Vera",
""
],
[
"Ramamurthy",
"Karthikeyan Natesan",
""
],
[
"Navratil",
"Jiri",
""
],
[
"Sattigeri",
"Prasanna",
""
],
[
"Varshney",
"Kush R.",
""
],
[
"Zhang",
"Yunfeng",
""
]
] |
2106.01416 | Absalom Ezugwu | Olaide N. Oyelade and Absalom E. Ezugwu | Ebola Optimization Search Algorithm (EOSA): A new metaheuristic
algorithm based on the propagation model of Ebola virus disease | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Ebola virus and the disease in effect tend to randomly move individuals
in the population around susceptible, infected, quarantined, hospitalized,
recovered, and dead sub-populations. Motivated by the effectiveness in
propagating the disease through the virus, a new bio-inspired and
population-based optimization algorithm is proposed. This paper presents a
novel metaheuristic algorithm named Ebola optimization algorithm (EOSA). To
correctly achieve this, this study models the propagation mechanism of the
Ebola virus disease, emphasising all consistent states of the propagation. The
model was further represented using a mathematical model based on first-order
differential equations. After that, the combined propagation and mathematical
models were adapted for developing the new metaheuristic algorithm. To evaluate
the proposed method's performance and capability compared with other
optimization methods, the underlying propagation and mathematical models were
first investigated to determine how they successfully simulate the EVD.
Furthermore, two sets of benchmark functions consisting of forty-seven (47)
classical and over thirty (30) constrained IEEE CEC-2017 benchmark functions
are investigated numerically. The results indicate that the performance of the
proposed algorithm is competitive with other state-of-the-art optimization
methods based on scalability analysis, convergence analysis, and sensitivity
analysis. Extensive simulation results indicate that the EOSA outperforms other
state-of-the-art popular metaheuristic optimization algorithms such as the
Particle Swarm Optimization Algorithm (PSO), Genetic Algorithm (GA), and
Artificial Bee Colony Algorithm (ABC) on some shifted, high dimensional and
large search range problems.
| [
{
"version": "v1",
"created": "Wed, 2 Jun 2021 18:41:56 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Jun 2021 21:02:53 GMT"
}
] | 1,624,320,000,000 | [
[
"Oyelade",
"Olaide N.",
""
],
[
"Ezugwu",
"Absalom E.",
""
]
] |
2106.01639 | Chathura Gamage | Chathura Gamage, Matthew Stephenson, Vimukthini Pinto, Jochen Renz | Deceptive Level Generation for Angry Birds | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Angry Birds AI competition has been held over many years to encourage the
development of AI agents that can play Angry Birds game levels better than
human players. Many different agents with various approaches have been employed
over the competition's lifetime to solve this task. Even though the performance
of these agents has increased significantly over the past few years, they still
show major drawbacks in playing deceptive levels. This is because most of the
current agents try to identify the best next shot rather than planning an
effective sequence of shots. In order to encourage advancements in such agents,
we present an automated methodology to generate deceptive game levels for Angry
Birds. Even though there are many existing content generators for Angry Birds,
they do not focus on generating deceptive levels. In this paper, we propose a
procedure to generate deceptive levels for six deception categories that can
fool the state-of-the-art Angry Birds playing AI agents. Our results show that
generated deceptive levels exhibit similar characteristics of human-created
deceptive levels. Additionally, we define metrics to measure the stability,
solvability, and degree of deception of the generated levels.
| [
{
"version": "v1",
"created": "Thu, 3 Jun 2021 07:20:30 GMT"
}
] | 1,622,764,800,000 | [
[
"Gamage",
"Chathura",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Pinto",
"Vimukthini",
""
],
[
"Renz",
"Jochen",
""
]
] |
2106.01786 | Charbel Merhej | Charbel Merhej, Ryan Beal, Sarvapali Ramchurn (University of
Southampton), Tim Matthews (Sentient Sports) | What Happened Next? Using Deep Learning to Value Defensive Actions in
Football Event-Data | 10 pages, 7 figures, Proceedings of the 27th ACM SIGKDD Conference on
Knowledge Discovery and Data Mining (KDD '21), August 14--18, 2021, Virtual
Event, Singapore | null | 10.1145/3447548.3467090 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Objectively quantifying the value of player actions in football (soccer) is a
challenging problem. To date, studies in football analytics have mainly focused
on the attacking side of the game, while there has been less work on
event-driven metrics for valuing defensive actions (e.g., tackles and
interceptions). Therefore in this paper, we use deep learning techniques to
define a novel metric that values such defensive actions by studying the threat
of passages of play that preceded them. By doing so, we are able to value
defensive actions based on what they prevented from happening in the game. Our
Defensive Action Expected Threat (DAxT) model has been validated using
real-world event-data from the 2017/2018 and 2018/2019 English Premier League
seasons, and we combine our model outputs with additional features to derive an
overall rating of defensive ability for players. Overall, we find that our
model is able to predict the impact of defensive actions allowing us to better
value defenders using event-data.
| [
{
"version": "v1",
"created": "Thu, 3 Jun 2021 12:18:26 GMT"
}
] | 1,622,764,800,000 | [
[
"Merhej",
"Charbel",
"",
"University of\n Southampton"
],
[
"Beal",
"Ryan",
"",
"University of\n Southampton"
],
[
"Ramchurn",
"Sarvapali",
"",
"University of\n Southampton"
],
[
"Matthews",
"Tim",
"",
"Sentient Sports"
]
] |
2106.01977 | Alexandros Nikou PhD | Alexandros Nikou, Anusha Mujumdar, Vaishnavi Sundararajan, Marin
Orlic, Aneta Vulgarakis Feljan | Safe RAN control: A Symbolic Reinforcement Learning Approach | To appear in International Conference of Control and Automation
(ICCA) 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a Symbolic Reinforcement Learning (SRL) based
architecture for safety control of Radio Access Network (RAN) applications. In
particular, we provide a purely automated procedure in which a user can specify
high-level logical safety specifications for a given cellular network topology
in order for the latter to execute optimal safe performance which is measured
through certain Key Performance Indicators (KPIs). The network consists of a
set of fixed Base Stations (BS) which are equipped with antennas, which one can
control by adjusting their vertical tilt angle. The aforementioned process is
called Remote Electrical Tilt (RET) optimization. Recent research has focused
on performing this RET optimization by employing Reinforcement Learning (RL)
strategies due to the fact that they have self-learning capabilities to adapt
in uncertain environments. The term safety refers to particular constraints
bounds of the network KPIs in order to guarantee that when the algorithms are
deployed in a live network, the performance is maintained. In our proposed
architecture the safety is ensured through model-checking techniques over
combined discrete system models (automata) that are abstracted through the
learning process. We introduce a user interface (UI) developed to help a user
set intent specifications to the system, and inspect the difference in agent
proposed actions, and those that are allowed and blocked according to the
safety specification.
| [
{
"version": "v1",
"created": "Thu, 3 Jun 2021 16:45:40 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 09:29:18 GMT"
}
] | 1,650,931,200,000 | [
[
"Nikou",
"Alexandros",
""
],
[
"Mujumdar",
"Anusha",
""
],
[
"Sundararajan",
"Vaishnavi",
""
],
[
"Orlic",
"Marin",
""
],
[
"Feljan",
"Aneta Vulgarakis",
""
]
] |
2106.02003 | Kaiwen Jiang | Kaiwen Jiang, Stephanie Stacy, Chuyu Wei, Adelpha Chan, Federico
Rossano, Yixin Zhu, Tao Gao | Individual vs. Joint Perception: a Pragmatic Model of Pointing as
Communicative Smithian Helping | 7 pages, 3 figures. Accepted to CogSci 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The simple gesture of pointing can greatly augment one's ability to comprehend
states of the world based on observations. It triggers additional inferences
relevant to one's task at hand. We model an agent's update to its belief of the
world based on individual observations using a partially observable Markov
decision process (POMDP), a mainstream artificial intelligence (AI) model of
how to act rationally according to beliefs formed through observation. On top
of that, we model pointing as a communicative act between agents who have a
mutual understanding that the pointed observation must be relevant and
interpretable. Our model measures relevance by defining a Smithian Value of
Information (SVI) as the utility improvement of the POMDP agent before and
after receiving the pointing. We model that agents calculate SVI by using the
cognitive theory of Smithian helping as a principle of coordinating separate
beliefs for action prediction and action evaluation. We then import SVI into
rational speech act (RSA) as the utility function of an utterance. These lead
us to a pragmatic model of pointing allowing for contextually flexible
interpretations. We demonstrate the power of our Smithian pointing model by
extending the Wumpus world, a classic AI task where a hunter hunts a monster
with only partial observability of the world. We add another agent as a guide
who can only help by marking an observation already perceived by the hunter
with a pointing or not, without providing new observations or offering any
instrumental help. Our results show that this severely limited and overloaded
communication nevertheless significantly improves the hunter's performance. The
advantage of pointing is indeed due to a computation of relevance based on
Smithian helping, as it disappears completely when the task is too difficult or
too easy for the guide to help.
| [
{
"version": "v1",
"created": "Thu, 3 Jun 2021 17:21:23 GMT"
}
] | 1,622,764,800,000 | [
[
"Jiang",
"Kaiwen",
""
],
[
"Stacy",
"Stephanie",
""
],
[
"Wei",
"Chuyu",
""
],
[
"Chan",
"Adelpha",
""
],
[
"Rossano",
"Federico",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Gao",
"Tao",
""
]
] |
2106.02164 | Stephanie Stacy | Stephanie Stacy, Chenfei Li, Minglu Zhao, Yiling Yun, Qingyi Zhao, Max
Kleiman-Weiner and Tao Gao | Modeling Communication to Coordinate Perspectives in Cooperation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Communication is highly overloaded. Despite this, even young children are
good at leveraging context to understand ambiguous signals. We propose a
computational account of overloaded signaling from a shared agency perspective
which we call the Imagined We for Communication. Under this framework,
communication helps cooperators coordinate their perspectives, allowing them to
act together to achieve shared goals. We assume agents are rational
cooperators, which puts constraints on how signals can be sent and interpreted.
We implement this model in a set of simulations demonstrating this model's
success under increasing ambiguity as well as increasing layers of reasoning.
Our model is capable of improving performance with deeper recursive reasoning;
however, it outperforms comparison baselines at even the shallowest level,
highlighting how shared knowledge and cooperative logic can do much of the
heavy-lifting in language.
| [
{
"version": "v1",
"created": "Thu, 3 Jun 2021 22:37:20 GMT"
}
] | 1,623,024,000,000 | [
[
"Stacy",
"Stephanie",
""
],
[
"Li",
"Chenfei",
""
],
[
"Zhao",
"Minglu",
""
],
[
"Yun",
"Yiling",
""
],
[
"Zhao",
"Qingyi",
""
],
[
"Kleiman-Weiner",
"Max",
""
],
[
"Gao",
"Tao",
""
]
] |