id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.12200 | Basem Suleiman PhD | Xiao Liu, Bonan Gao, Basem Suleiman, Han You, Zisu Ma, Yu Liu, and Ali
Anaissi | Privacy-Preserving Personalized Fitness Recommender System (P3FitRec): A
Multi-level Deep Learning Approach | 30 pages, 16 figures, 36 references | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recommender systems have been successfully used in many domains with the help
of machine learning algorithms. However, such applications tend to use
multi-dimensional user data, which has raised widespread concerns about the
breach of users' privacy. Meanwhile, wearable technologies have enabled users to
collect fitness-related data through embedded sensors to monitor their
conditions or achieve personalized fitness goals. In this paper, we propose a
novel privacy-aware personalized fitness recommender system. We introduce a
multi-level deep learning framework that learns important features from a
large-scale real fitness dataset that is collected from wearable IoT devices to
derive intelligent fitness recommendations. Unlike most existing approaches,
our approach achieves personalization by inferring the fitness characteristics
of users from sensory data and thus minimizing the need for explicitly
collecting user identity or biometric information, such as name, age, height,
and weight. In particular, our proposed models and algorithms predict (a)
personalized exercise distance recommendations to help users achieve target
calories, (b) personalized speed sequence recommendations to adjust exercise
speed given the nature of the exercise and the chosen route, and (c)
personalized heart rate sequence recommendations to inform the user of the potential health
status for future exercises. Our experimental evaluation on a real-world Fitbit
dataset demonstrated high accuracy in predicting exercise distance, speed
sequence, and heart rate sequence compared to similar studies. Furthermore, our
approach is novel compared to existing studies as it does not require
collecting and using users' sensitive information, and thus it preserves the
users' privacy.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2022 05:27:35 GMT"
}
]
| 1,648,080,000,000 | [
[
"Liu",
"Xiao",
""
],
[
"Gao",
"Bonan",
""
],
[
"Suleiman",
"Basem",
""
],
[
"You",
"Han",
""
],
[
"Ma",
"Zisu",
""
],
[
"Liu",
"Yu",
""
],
[
"Anaissi",
"Ali",
""
]
]
|
2203.12275 | Bart Bogaerts | Bart Bogaerts, Stephan Gocht, Ciaran McCreesh, Jakob Nordstr\"om | Certified Symmetry and Dominance Breaking for Combinatorial Optimisation | This paper was published in the Journal of Artificial Intelligence
Research https://doi.org/10.1613/jair.1.14296 It is an extended version of
our (equally-named) paper to appear in the proceedings of AAAI 2022
https://ojs.aaai.org/index.php/AAAI/article/view/20283 | Journal of Artificial Intelligence Research, volume 77: pages
1539-1589, 2023 | 10.1613/jair.1.14296 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry and dominance breaking can be crucial for solving hard combinatorial
search and optimisation problems, but the correctness of these techniques
sometimes relies on subtle arguments. For this reason, it is desirable to
produce efficient, machine-verifiable certificates that solutions have been
computed correctly. Building on the cutting planes proof system, we develop a
certification method for optimisation problems in which symmetry and dominance
breaking are easily expressible. Our experimental evaluation demonstrates that
we can efficiently verify fully general symmetry breaking in Boolean
satisfiability (SAT) solving, thus providing, for the first time, a unified
method to certify a range of advanced SAT techniques that also includes XOR and
cardinality reasoning. In addition, we apply our method to maximum clique
solving and constraint programming as a proof of concept that the approach
applies to a wider range of combinatorial problems.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2022 08:45:35 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 06:49:42 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Aug 2023 09:34:55 GMT"
}
]
| 1,692,230,400,000 | [
[
"Bogaerts",
"Bart",
""
],
[
"Gocht",
"Stephan",
""
],
[
"McCreesh",
"Ciaran",
""
],
[
"Nordström",
"Jakob",
""
]
]
|
2203.12285 | Zexi Li | Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li,
Zhimeng Zhang, Yongheng Wang, Chao Wu | Towards Effective Clustered Federated Learning: A Peer-to-peer Framework
with Adaptive Neighbor Matching | Published in IEEE Transactions on Big Data, 2022 | null | 10.1109/TBDATA.2022.3222971 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In federated learning (FL), clients may have diverse objectives, and merging
all clients' knowledge into one global model will cause negative transfer to
local performance. Thus, clustered FL is proposed to group similar clients into
clusters and maintain several global models. In the literature, centralized
clustered FL algorithms require the assumption of the number of clusters and
hence are not effective enough to explore the latent relationships among
clients. In this paper, without assuming the number of clusters, we propose a
peer-to-peer (P2P) FL algorithm named PANM. In PANM, clients communicate with
peers to adaptively form an effective clustered topology. Specifically, we
present two novel metrics for measuring client similarity and a two-stage
neighbor matching algorithm based on the Monte Carlo method and Expectation
Maximization under the Gaussian Mixture Model assumption. We have conducted
theoretical analyses of PANM on the probability of neighbor estimation and the
error gap to the clustered optimum. We have also implemented extensive
experiments under both synthetic and real-world clustered heterogeneity.
Theoretical analysis and empirical experiments show that the proposed algorithm
is superior to the P2P FL counterparts, and it achieves better performance than
the centralized clustered FL method. PANM is effective even under extremely low
communication budgets.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2022 09:10:14 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 15:15:17 GMT"
}
]
| 1,669,075,200,000 | [
[
"Li",
"Zexi",
""
],
[
"Lu",
"Jiaxun",
""
],
[
"Luo",
"Shuang",
""
],
[
"Zhu",
"Didi",
""
],
[
"Shao",
"Yunfeng",
""
],
[
"Li",
"Yinchuan",
""
],
[
"Zhang",
"Zhimeng",
""
],
[
"Wang",
"Yongheng",
""
],
[
"Wu",
"Chao",
""
]
]
|
2203.12499 | Roni Stern | Brendan Juba, Roni Stern | An Example of the SAM+ Algorithm for Learning Action Models for
Stochastic Worlds | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this technical report, we provide a complete example of running the SAM+
algorithm, an algorithm for learning stochastic planning action models, on a
simplified PPDDL version of the Coffee problem. We provide a very brief
description of the SAM+ algorithm and a detailed description of our simplified
version of the Coffee domain, and then describe the results of running it on
the simplified Coffee domain.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2022 15:51:40 GMT"
}
]
| 1,648,080,000,000 | [
[
"Juba",
"Brendan",
""
],
[
"Stern",
"Roni",
""
]
]
|
2203.12673 | Yibo Guo | Yibo Guo, Lishuo Hou, Mingxin Li, Yue Yuan, Shun Liu, Jingyi Xue,
Yafang Han, Mingliang Xu | Decision-making of Emergent Incident based on P-MADDPG | 15 pages, 13 figures, 25 conferences | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, human casualties and damage to resources caused by emergent
incidents have become a serious problem worldwide. In this paper, we model the
emergency decision-making problem and use Multi-agent System (MAS) to solve the
problem that the decision speed cannot keep up with the spreading speed. MAS
can play an important role in the automated execution of these tasks to reduce
mission completion time. In this paper, we propose a P-MADDPG algorithm to
solve the emergency decision-making problem of emergent incidents, which
predicts the nodes where an incident may occur at the next time step using a GRU model
and makes decisions before the incident occurs, thus solving the problem that
the decision speed cannot keep up with the spreading speed. A simulation
environment was established for realistic scenarios, and three scenarios were
selected to test the performance of P-MADDPG in emergency decision-making
problems for emergent incidents: unmanned storage, factory assembly line, and
civil airport baggage transportation. Simulation results using the P-MADDPG
algorithm are compared with the greedy algorithm and the MADDPG algorithm, and
the final experimental results show that the P-MADDPG algorithm converges
faster and better than the other algorithms in scenarios of different sizes.
This shows that the P-MADDPG algorithm is effective for emergency
decision-making in emergent incidents.
| [
{
"version": "v1",
"created": "Sat, 19 Mar 2022 09:48:16 GMT"
}
]
| 1,648,166,400,000 | [
[
"Guo",
"Yibo",
""
],
[
"Hou",
"Lishuo",
""
],
[
"Li",
"Mingxin",
""
],
[
"Yuan",
"Yue",
""
],
[
"Liu",
"Shun",
""
],
[
"Xue",
"Jingyi",
""
],
[
"Han",
"Yafang",
""
],
[
"Xu",
"Mingliang",
""
]
]
|
2203.12802 | Hao Nie | Bu XuSong and Nie Hao and Zhang Zhan and Zhang Qin | A platform for causal knowledge representation and inference in
industrial fault diagnosis based on cubic DUCG | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The working conditions of large-scale industrial systems are very complex.
Once a failure occurs, it will affect industrial production, cause property
damage, and even endanger the workers' lives. Therefore, it is important to
monitor the operation of the system, accurately grasp its operating status,
and detect failures in time. The occurrence of system failure
is a gradual process, and the occurrence of the current system failure may
depend on the previous state of the system, which is sequential. The fault
diagnosis technology based on time series can monitor the operating status of
the system in real-time, detect the abnormal operation of the system within the
allowable time interval, diagnose the root cause of the fault and predict the
status trend. In order to guide the technical personnel to troubleshoot and
solve related faults, in this paper, an industrial fault diagnosis system is
implemented based on the cubic DUCG theory. The diagnostic model of the system
is constructed based on expert knowledge and experience. At the same time, it
can perform real-time fault diagnosis based on time sequence, which solves the
problem of fault diagnosis of industrial systems without sample data.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 02:06:22 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 17:00:36 GMT"
}
]
| 1,648,512,000,000 | [
[
"XuSong",
"Bu",
""
],
[
"Hao",
"Nie",
""
],
[
"Zhan",
"Zhang",
""
],
[
"Qin",
"Zhang",
""
]
]
|
2203.12817 | Bo Liu | Bo Liu, Qiang Liu, Peter Stone | Continual Learning and Private Unlearning | Conference on Lifelong Learning Agents | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As intelligent agents become autonomous over longer periods of time, they may
eventually become lifelong counterparts to specific people. If so, it may be
common for a user to want the agent to master a task temporarily but later on
to forget the task due to privacy concerns. However, enabling an agent to
\emph{forget privately} what the user specified without degrading the rest of
the learned knowledge is a challenging problem. With the aim of addressing this
challenge, this paper formalizes this continual learning and private unlearning
(CLPU) problem. The paper further introduces a straightforward but exactly
private solution, CLPU-DER++, as the first step towards solving the CLPU
problem, along with a set of carefully designed benchmark problems to evaluate
the effectiveness of the proposed solution. The code is available at
https://github.com/Cranial-XIX/Continual-Learning-Private-Unlearning.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 02:40:33 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Aug 2022 23:35:30 GMT"
}
]
| 1,660,608,000,000 | [
[
"Liu",
"Bo",
""
],
[
"Liu",
"Qiang",
""
],
[
"Stone",
"Peter",
""
]
]
|
2203.12955 | Adam Hepworth | Adam J. Hepworth and Daniel P. Baxter and Hussein A. Abbass | Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent
Teaming | 19 pages, 2 tables, 16 figures | null | 10.1109/ACCESS.2022.3180032 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Research in multi-agent teaming has increased substantially over recent
years, with knowledge-based systems to support teaming processes typically
focused on delivering functional (communicative) solutions for a team to act
meaningfully in response to direction. Enabling humans to effectively interact
and team with a swarm of autonomous cognitive agents is an open research
challenge in Human-Swarm Teaming research, partially due to the focus on
developing the enabling architectures to support these systems. Typically,
bi-directional transparency and shared semantic understanding between agents
have not been prioritised as designed mechanisms in Human-Swarm Teaming, potentially
limiting how a human and a swarm team can share understanding and
information\textemdash data through concepts and contexts\textemdash to achieve
a goal. To address this, we provide a formal knowledge representation design
that enables the swarm Artificial Intelligence to reason about its environment
and system, ultimately achieving a shared goal. We propose the Ontology for
Generalised Multi-Agent Teaming, Onto4MAT, to enable more effective teaming
between humans and teams through the biologically-inspired approach of
shepherding.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 09:36:50 GMT"
}
]
| 1,669,075,200,000 | [
[
"Hepworth",
"Adam J.",
""
],
[
"Baxter",
"Daniel P.",
""
],
[
"Abbass",
"Hussein A.",
""
]
]
|
2203.12969 | Gyunam Park | Gyunam Park, Marco Comuzzi, Wil M. P. van der Aalst | Analyzing Process-Aware Information System Updates Using Digital Twins
of Organizations | null | LNBIP 446 (2022) 159-176 | 10.1007/978-3-031-05760-1_10 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital transformation often entails small-scale changes to information
systems supporting the execution of business processes. These changes may
increase the operational frictions in process execution, which decreases the
process performance. The contributions in the literature providing support to
the tracking and impact analysis of small-scale changes are limited in scope
and functionality. In this paper, we use the recently developed Digital Twins
of Organizations (DTOs) to assess the impact of (process-aware) information
systems updates. More specifically, we model the updates using the configuration
of DTOs and quantitatively assess different types of impacts of information
system updates (structural, operational, and performance-related). We
implemented a prototype of the proposed approach. Moreover, we discuss a case
study involving a standard ERP procure-to-pay business process.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 10:19:59 GMT"
}
]
| 1,667,260,800,000 | [
[
"Park",
"Gyunam",
""
],
[
"Comuzzi",
"Marco",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
]
|
2203.13050 | James Borg | James M. Borg, Andrew Buskell, Rohan Kapitany, Simon T. Powers, Eva
Reindl and Claudio Tennie | Evolved Open-Endedness in Cultural Evolution: A New Dimension in
Open-Ended Evolution Research | 26 pages, 1 figure, 1 table, submitted to Artificial Life journal
(special issue on Open-Ended Evolution) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of Artificial Life research, as articulated by Chris Langton, is "to
contribute to theoretical biology by locating life-as-we-know-it within the
larger picture of life-as-it-could-be" (1989, p.1). The study and pursuit of
open-ended evolution in artificial evolutionary systems exemplifies this goal.
However, open-ended evolution research is hampered by two fundamental issues;
the struggle to replicate open-endedness in an artificial evolutionary system,
and the fact that we only have one system (genetic evolution) from which to
draw inspiration. Here we argue that cultural evolution should be seen not only
as another real-world example of an open-ended evolutionary system, but that
the unique qualities seen in cultural evolution provide us with a new
perspective from which we can assess the fundamental properties of, and ask new
questions about, open-ended evolutionary systems, especially in regard to
evolved open-endedness and transitions from bounded to unbounded evolution.
Here we provide an overview of culture as an evolutionary system, highlight the
interesting case of human cultural evolution as an open-ended evolutionary
system, and contextualise cultural evolution under the framework of (evolved)
open-ended evolution. We go on to provide a set of new questions that can be
asked once we consider cultural evolution within the framework of open-ended
evolution, and introduce new insights that we may be able to gain about evolved
open-endedness as a result of asking these questions.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 12:55:23 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 14:46:41 GMT"
}
]
| 1,663,632,000,000 | [
[
"Borg",
"James M.",
""
],
[
"Buskell",
"Andrew",
""
],
[
"Kapitany",
"Rohan",
""
],
[
"Powers",
"Simon T.",
""
],
[
"Reindl",
"Eva",
""
],
[
"Tennie",
"Claudio",
""
]
]
|
2203.13236 | Pulkit Verma | Rashmeet Kaur Nayyar, Pulkit Verma, Siddharth Srivastava | Differential Assessment of Black-Box AI Agents | AAAI 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much of the research on learning symbolic models of AI agents focuses on
agents with stationary models. This assumption fails to hold in settings where
the agent's capabilities may change as a result of learning, adaptation, or
other post-deployment modifications. Efficient assessment of agents in such
settings is critical for learning the true capabilities of an AI system and for
ensuring its safe usage. In this work, we propose a novel approach to
"differentially" assess black-box AI agents that have drifted from their
previously known models. As a starting point, we consider the fully observable
and deterministic setting. We leverage sparse observations of the drifted
agent's current behavior and knowledge of its initial model to generate an
active querying policy that selectively queries the agent and computes an
updated model of its functionality. Empirical evaluation shows that our
approach is much more efficient than re-learning the agent model from scratch.
We also show that the cost of differential assessment using our method is
proportional to the amount of drift in the agent's functionality.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 17:48:58 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 01:51:02 GMT"
}
]
| 1,653,004,800,000 | [
[
"Nayyar",
"Rashmeet Kaur",
""
],
[
"Verma",
"Pulkit",
""
],
[
"Srivastava",
"Siddharth",
""
]
]
|
2203.13351 | Michael Green | Michael Cerny Green, Ahmed Khalifa, M Charity, Debosmita Bhaumik, and
Julian Togelius | Predicting Personas Using Mechanic Frequencies and Game State Traces | 8 pages, 3 tables, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate how to efficiently predict play personas based on playtraces.
Play personas can be computed by calculating the action agreement ratio between
a player and a generative model of playing behavior, a so-called procedural
persona. But this is computationally expensive and assumes that appropriate
procedural personas are readily available. We present two methods for
estimating player persona, one using regular supervised learning and aggregate
measures of game mechanics initiated, and another based on sequence learning on
a trace of closely cropped gameplay observations. While both of these methods
achieve high accuracy when predicting play personas defined by agreement with
procedural personas, they utterly fail to predict play style as defined by the
players themselves using a questionnaire. This interesting result highlights
the value of using computational methods in defining play personas.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2022 21:52:11 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 14:17:25 GMT"
}
]
| 1,655,337,600,000 | [
[
"Green",
"Michael Cerny",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Charity",
"M",
""
],
[
"Bhaumik",
"Debosmita",
""
],
[
"Togelius",
"Julian",
""
]
]
|
2203.13599 | Guillermo Puebla | Guillermo Puebla, Leonidas A. A. Doumas | Learning Relational Rules from Rewards | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Humans perceive the world in terms of objects and relations between them. In
fact, for any given pair of objects, there is a myriad of relations that apply
to them. How does the cognitive system learn which relations are useful to
characterize the task at hand? And how can it use these representations to
build a relational policy to interact effectively with the environment? In this
paper we propose that this problem can be understood through the lens of a
sub-field of symbolic machine learning called relational reinforcement learning
(RRL). To demonstrate the potential of our approach, we build a simple model of
relational policy learning based on a function approximator developed in RRL.
We trained and tested our model in three Atari games that required considering
an increasing number of potential relations: Breakout, Pong, and Demon Attack.
In each game, our model was able to select adequate relational representations
and build a relational policy incrementally. We discuss the relationship
between our model and models of relational and analogical reasoning, as well
as its limitations and future directions of research.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2022 11:57:43 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 08:43:06 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jul 2022 12:20:36 GMT"
}
]
| 1,657,238,400,000 | [
[
"Puebla",
"Guillermo",
""
],
[
"Doumas",
"Leonidas A. A.",
""
]
]
|
2203.13929 | Ulf Johansson | Helena L\"ofstr\"om, Karl Hammar, Ulf Johansson | A Meta Survey of Quality Evaluation Criteria in Explanation Methods | 15 pages, 4 figures, 2 tables, conference article | null | 10.1007/978-3-031-07481-3_7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explanation methods and their evaluation have become a significant issue in
explainable artificial intelligence (XAI) due to the recent surge of opaque AI
models in decision support systems (DSS). Since the most accurate AI models are
opaque with low transparency and comprehensibility, explanations are essential
for bias detection and control of uncertainty. There are a plethora of criteria
to choose from when evaluating explanation method quality. However, since
existing criteria focus on evaluating single explanation methods, it is not
obvious how to compare the quality of different methods. This lack of consensus
creates a critical shortage of rigour in the field, and little has been written
about comparative evaluations of explanation methods. In this paper, we have
conducted a semi-systematic meta-survey over fifteen literature surveys
covering the evaluation of explainability to identify existing criteria usable
for comparative evaluations of explanation methods. The main contribution in
the paper is the suggestion to use appropriate trust as a criterion to measure
the outcome of the subjective evaluation criteria and consequently make
comparative evaluations possible. We also present a model of explanation
quality aspects. In the model, criteria with similar definitions are grouped
and related to three identified aspects of quality: model, explanation, and
user. We also notice four commonly accepted criteria (groups) in the
literature, covering all aspects of explanation quality: Performance,
appropriate trust, explanation satisfaction, and fidelity. We suggest the model
be used as a chart for comparative evaluations to create more generalisable
research in explanation quality.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2022 22:24:21 GMT"
}
]
| 1,693,353,600,000 | [
[
"Löfström",
"Helena",
""
],
[
"Hammar",
"Karl",
""
],
[
"Johansson",
"Ulf",
""
]
]
|
2203.13965 | Filip Ilievski | Jiang Wang, Filip Ilievski, Pedro Szekely, Ke-Thia Yao | Augmenting Knowledge Graphs for Better Link Prediction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Embedding methods have demonstrated robust performance on the task of link
prediction in knowledge graphs, by mostly encoding entity relationships. Recent
methods propose to enhance the loss function with a literal-aware term. In this
paper, we propose KGA: a knowledge graph augmentation method that incorporates
literals in an embedding model without modifying its loss function. KGA
discretizes quantity and year values into bins, and chains these bins both
horizontally, modeling neighboring values, and vertically, modeling multiple
levels of granularity. KGA is scalable and can be used as a pre-processing step
for any existing knowledge graph embedding model. Experiments on legacy
benchmarks and a new large benchmark, DWD, show that augmenting the knowledge
graph with quantities and years is beneficial for predicting both entities and
numbers, as KGA outperforms the vanilla models and other relevant baselines.
Our ablation studies confirm that both quantities and years contribute to KGA's
performance, and that its performance depends on the discretization and binning
settings. We make the code, models, and the DWD benchmark publicly available to
facilitate reproducibility and future research.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2022 02:06:17 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 03:43:30 GMT"
}
]
| 1,650,931,200,000 | [
[
"Wang",
"Jiang",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Szekely",
"Pedro",
""
],
[
"Yao",
"Ke-Thia",
""
]
]
|
2203.14018 | Jonas Philipp Haldimann | Jonas Haldimann, Christoph Beierle | Model Transformations for Ranking Functions and Total Preorders | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of knowledge representation, the considered epistemic states are
often based on propositional interpretations, also called worlds. E.g.,
epistemic states of agents can be modelled by ranking functions or total
preorders on worlds. However, there are usually different ways of how to
describe a real world situation in a propositional language; this can be seen
as different points of view on the same situation. In this paper we introduce
the concept of model transformations to convert an epistemic state from one
point of view to another point of view, yielding a novel notion of equivalence
of epistemic states. We show how the well-known advantages of syntax splitting,
originally developed for belief sets and later extended to representation of
epistemic states and to nonmonotonic reasoning, can be exploited for belief
revision via model transformation by uncovering splittings not being present
before. Furthermore, we characterize situations where belief change operators
commute with model transformations.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2022 07:58:33 GMT"
}
]
| 1,648,512,000,000 | [
[
"Haldimann",
"Jonas",
""
],
[
"Beierle",
"Christoph",
""
]
]
|
2203.14079 | Daniel Reissner | Daniel Rei{\ss}ner, Abel Armas-Cervantes, Marcello La Rosa | Generalization in Automated Process Discovery: A Framework based on
Event Log Patterns | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The importance of quality measures in process mining has increased. One of
the key quality aspects, generalization, is concerned with measuring the degree
of overfitting of a process model w.r.t. an event log, since the recorded
behavior is just an example of the true behavior of the underlying business
process. Existing generalization measures exhibit several shortcomings that
severely hinder their applicability in practice. For example, they assume the
event log fully fits the discovered process model, and cannot deal with large
real-life event logs and complex process models. More significantly, current
measures neglect generalizations for clear patterns that demand a certain
construct in the model. For example, a repeating sequence in an event log
should be generalized with a loop structure in the model. We address these
shortcomings by proposing a framework of measures that generalize a set of
patterns discovered from an event log with representative traces and check the
corresponding control-flow structures in the process model via their trace
alignment. We instantiate the framework with a generalization measure that uses
tandem repeats to identify repetitive patterns that are compared to the loop
structures and a concurrency oracle to identify concurrent patterns that are
compared to the parallel structures of the process model. In an extensive
qualitative and quantitative evaluation using 74 log-model pairs against
two baseline generalization measures, we show that the proposed generalization
measure consistently ranks process models that fulfil the observed patterns
with generalizing control-flow structures higher than those which do not, while
the baseline measures disregard those patterns. Further, we show that our
measure can be efficiently computed for datasets two orders of magnitude larger
than the largest dataset the baseline generalization measures can handle.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2022 13:49:11 GMT"
}
]
| 1,648,512,000,000 | [
[
"Reißner",
"Daniel",
""
],
[
"Armas-Cervantes",
"Abel",
""
],
[
"La Rosa",
"Marcello",
""
]
]
|
2203.14852 | Dominik Drexler | Dominik Drexler, Jendrik Seipp, Hector Geffner | Learning Sketches for Decomposing Planning Problems into Subproblems of
Bounded Width: Extended Version | This work will appear in the Proceedings of the 32nd International
Conference on Automated Planning and Scheduling (ICAPS2022) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, sketches have been introduced as a general language for
representing the subgoal structure of instances drawn from the same domain.
Sketches are collections of rules of the form C -> E over a given set of
features where C expresses Boolean conditions and E expresses qualitative
changes. Each sketch rule defines a subproblem: going from a state that
satisfies C to a state that achieves the change expressed by E or a goal state.
Sketches can encode simple goal serializations, general policies, or
decompositions of bounded width that can be solved greedily, in polynomial
time, by the SIW_R variant of the SIW algorithm. Previous work has shown the
computational value of sketches over benchmark domains that, while tractable,
are challenging for domain-independent planners. In this work, we address the
problem of learning sketches automatically given a planning domain, some
instances of the target class of problems, and the desired bound on the sketch
width. We present a logical formulation of the problem, an implementation using
the ASP solver Clingo, and experimental results. The sketch learner and the
SIW_R planner yield a domain-independent planner that learns and exploits
domain structure in a crisp and explicit form.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2022 15:49:08 GMT"
}
]
| 1,648,512,000,000 | [
[
"Drexler",
"Dominik",
""
],
[
"Seipp",
"Jendrik",
""
],
[
"Geffner",
"Hector",
""
]
]
|
2203.15099 | Santiago Ontanon | Santiago Ontanon, Joshua Ainslie, Vaclav Cvicek and Zachary Fisher | LogicInference: A New Dataset for Teaching Logical Inference to seq2seq
Models | Accepted at ICLR 2022 OSC workshop (v3 contains updated results after
fixing a problem in dataset generation) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning models such as Transformers or LSTMs struggle with tasks
that are compositional in nature such as those involving reasoning/inference.
Although many datasets exist to evaluate compositional generalization, when it
comes to evaluating inference abilities, options are more limited. This paper
presents LogicInference, a new dataset to evaluate the ability of models to
perform logical inference. The dataset focuses on inference using propositional
logic and a small subset of first-order logic, represented both in semi-formal
logical notation, as well as in natural language. We also report initial
results using a collection of machine learning models to establish an initial
baseline in this dataset.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2022 21:13:22 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 00:01:11 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2022 13:43:04 GMT"
}
]
| 1,649,721,600,000 | [
[
"Ontanon",
"Santiago",
""
],
[
"Ainslie",
"Joshua",
""
],
[
"Cvicek",
"Vaclav",
""
],
[
"Fisher",
"Zachary",
""
]
]
|
2203.15274 | Matej Zecevic | Matej Ze\v{c}evi\'c and Florian Peter Busch and Devendra Singh Dhami
and Kristian Kersting | Finding Structure and Causality in Linear Programs | Main paper: 5 pages, References: 2 pages, Appendix: 1 page. Figures:
8 main, 1 appendix. Tables: 1 appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear Programs (LP) are celebrated widely, particularly so in machine
learning where they have allowed for effectively solving probabilistic
inference tasks or imposing structure on end-to-end learning systems. Their
potential might seem depleted but we propose a foundational, causal perspective
that reveals intriguing intra- and inter-structure relations for LP components.
We conduct a systematic, empirical investigation on general-, shortest path-
and energy system LPs.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2022 06:39:58 GMT"
}
]
| 1,648,598,400,000 | [
[
"Zečević",
"Matej",
""
],
[
"Busch",
"Florian Peter",
""
],
[
"Dhami",
"Devendra Singh",
""
],
[
"Kersting",
"Kristian",
""
]
]
|
2203.15398 | Massimiliano Ronzani | Stefano Branchi, Chiara Di Francescomarino, Chiara Ghidini, David
Massimo, Francesco Ricci and Massimiliano Ronzani | Learning to act: a Reinforcement Learning approach to recommend the best
next activities | 16 pages, 3 figures, v2 accepted to the BPM 2022 Forum | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rise of process data availability has recently led to the development of
data-driven learning approaches. However, most of these approaches restrict the
use of the learned model to predict the future of ongoing process executions.
The goal of this paper is to move a step forward and leverage available data
to learn to act, by supporting users with recommendations derived from an
optimal strategy (measure of performance). We take the optimization perspective
of one process actor and we recommend the best activities to execute next, in
response to what happens in a complex external environment, where there is no
control on exogenous factors. To this aim, we investigate an approach that
learns, by means of Reinforcement Learning, the optimal policy from the
observation of past executions and recommends the best activities to carry on
for optimizing a Key Performance Indicator of interest. The validity of the
approach is demonstrated on two scenarios taken from real-life data.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2022 09:43:39 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 14:29:50 GMT"
}
]
| 1,655,337,600,000 | [
[
"Branchi",
"Stefano",
""
],
[
"Di Francescomarino",
"Chiara",
""
],
[
"Ghidini",
"Chiara",
""
],
[
"Massimo",
"David",
""
],
[
"Ricci",
"Francesco",
""
],
[
"Ronzani",
"Massimiliano",
""
]
]
|
2203.16171 | Alberto Pozanco | Alberto Pozanco, Yolanda E-Mart\'in, Susana Fern\'andez, Daniel
Borrajo | Anticipatory Counterplanning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In competitive environments, commonly agents try to prevent opponents from
achieving their goals. Most previous preventing approaches assume the
opponent's goal is known a priori. Others only start executing actions once the
opponent's goal has been inferred. In this work we introduce a novel
domain-independent algorithm called Anticipatory Counterplanning. It combines
inference of opponent's goals with computation of planning centroids to yield
proactive counter strategies in problems where the opponent's goal is unknown.
Experimental results show how this novel technique outperforms reactive
counterplanning, increasing the chances of stopping the opponent from achieving
its goals.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2022 09:49:33 GMT"
}
]
| 1,648,684,800,000 | [
[
"Pozanco",
"Alberto",
""
],
[
"E-Martín",
"Yolanda",
""
],
[
"Fernández",
"Susana",
""
],
[
"Borrajo",
"Daniel",
""
]
]
|
2203.16280 | Caihua Shan | Shifu Yan, Caihua Shan, Wenyi Yang, Bixiong Xu, Dongsheng Li, Lili
Qiu, Jie Tong, Qi Zhang | CMMD: Cross-Metric Multi-Dimensional Root Cause Analysis | null | null | 10.1145/3534678.3539109 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In large-scale online services, crucial metrics, a.k.a., key performance
indicators (KPIs), are monitored periodically to check their running statuses.
Generally, KPIs are aggregated along multiple dimensions and derived by complex
calculations among fundamental metrics from the raw data. Once abnormal KPI
values are observed, root cause analysis (RCA) can be applied to identify the
reasons for anomalies, so that we can troubleshoot quickly. Recently, several
automatic RCA techniques were proposed to localize the related dimensions (or a
combination of dimensions) to explain the anomalies. However, their analyses
are limited to the data on the abnormal metric and ignore the data of other
metrics which may be also related to the anomalies, leading to imprecise or
even incorrect root causes. To this end, we propose a cross-metric
multi-dimensional root cause analysis method, named CMMD, which consists of two
key components: 1) relationship modeling, which utilizes graph neural network
(GNN) to model the unknown complex calculation among metrics and aggregation
function among dimensions from historical data; 2) root cause localization,
which adopts the genetic algorithm to efficiently and effectively dive into the
raw data and localize the abnormal dimension(s) once the KPI anomalies are
detected. Experiments on synthetic datasets, public datasets and online
production environment demonstrate the superiority of our proposed CMMD method
compared with baselines. Currently, CMMD is running as an online service in
Microsoft Azure.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2022 13:17:19 GMT"
}
]
| 1,662,076,800,000 | [
[
"Yan",
"Shifu",
""
],
[
"Shan",
"Caihua",
""
],
[
"Yang",
"Wenyi",
""
],
[
"Xu",
"Bixiong",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Qiu",
"Lili",
""
],
[
"Tong",
"Jie",
""
],
[
"Zhang",
"Qi",
""
]
]
|
2203.16289 | Qiong Liu | Qiong Liu, Ye Guo, Lirong Deng, Haotian Liu, Dongyu Li, Hongbin Sun,
Wenqi Huang | Reducing Learning Difficulties: One-Step Two-Critic Deep Reinforcement
Learning for Inverter-based Volt-Var Control | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A one-step two-critic deep reinforcement learning (OSTC-DRL) approach for
inverter-based volt-var control (IB-VVC) in active distribution networks is
proposed in this paper. Firstly, considering that IB-VVC can be formulated as a
single-period optimization problem, we formulate the IB-VVC as a one-step
Markov decision process rather than the standard Markov decision process, which
simplifies the DRL learning task. Then we design the one-step actor-critic DRL
scheme which is a simplified version of recent DRL algorithms, and it avoids
the issue of Q value overestimation successfully. Furthermore, considering two
objectives of VVC: minimizing power loss and eliminating voltage violation, we
utilize two critics to approximate the rewards of two objectives separately. It
simplifies the approximation tasks of each critic, and avoids the interaction
effect between the two objectives in the learning process of each critic. The OSTC-DRL
approach integrates the one-step actor-critic DRL scheme and the two-critic
technology. Based on the OSTC-DRL, we design two centralized DRL algorithms.
Further, we extend the OSTC-DRL to multi-agent OSTC-DRL for decentralized
IB-VVC and design two multi-agent DRL algorithms. Simulations demonstrate that
the proposed OSTC-DRL has a faster convergence rate and a better control
performance, and the multi-agent OSTC-DRL works well for decentralized IB-VVC
problems.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2022 13:29:28 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jul 2022 04:48:20 GMT"
}
]
| 1,656,892,800,000 | [
[
"Liu",
"Qiong",
""
],
[
"Guo",
"Ye",
""
],
[
"Deng",
"Lirong",
""
],
[
"Liu",
"Haotian",
""
],
[
"Li",
"Dongyu",
""
],
[
"Sun",
"Hongbin",
""
],
[
"Huang",
"Wenqi",
""
]
]
|
2203.17109 | Vishal Pallagani | Vishal Pallagani, Priyadharsini Ramamurthy, Vedant Khandelwal, Revathy
Venkataramanan, Kausik Lakkaraju, Sathyanarayanan N. Aakur, Biplav Srivastava | A Rich Recipe Representation as Plan to Support Expressive Multi Modal
Queries on Recipe Content and Preparation Process | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Food is not only a basic human necessity but also a key factor driving a
society's health and economic well-being. As a result, the cooking domain is a
popular use-case to demonstrate decision-support (AI) capabilities in service
of benefits like precision health with tools ranging from information retrieval
interfaces to task-oriented chatbots. An AI here should understand concepts in
the food domain (e.g., recipes, ingredients), be tolerant to failures
encountered while cooking (e.g., browning of butter), handle allergy-based
substitutions, and work with multiple data modalities (e.g. text and images).
However, the recipes today are handled as textual documents which makes it
difficult for machines to read, reason and handle ambiguity. This demands a
need for better representation of the recipes, overcoming the ambiguity and
sparseness that exists in the current textual documents. In this paper, we
discuss the construction of a machine-understandable rich recipe representation
(R3), in the form of plans, from the recipes available in natural language. R3
is infused with additional knowledge such as information about allergens and
images of ingredients, possible failures and tips for each atomic cooking step.
To show the benefits of R3, we also present TREAT, a tool for recipe retrieval
which uses R3 to perform multi-modal reasoning on the recipe's content (plan
objects - ingredients and cooking tools), food preparation process (plan
actions and time), and media type (image, text). R3 leads to improved retrieval
efficiency and new capabilities that were hitherto not possible in textual
representation.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2022 15:29:38 GMT"
}
]
| 1,648,771,200,000 | [
[
"Pallagani",
"Vishal",
""
],
[
"Ramamurthy",
"Priyadharsini",
""
],
[
"Khandelwal",
"Vedant",
""
],
[
"Venkataramanan",
"Revathy",
""
],
[
"Lakkaraju",
"Kausik",
""
],
[
"Aakur",
"Sathyanarayanan N.",
""
],
[
"Srivastava",
"Biplav",
""
]
]
|
2204.00288 | David Speck | David Speck | Symbolic Search for Optimal Planning with Expressive Extensions | PhD thesis, University of Freiburg, Germany, 2022 | null | 10.6094/UNIFR/225448 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In classical planning, the goal is to derive a course of actions that allows
an intelligent agent to move from any situation it finds itself in to one that
satisfies its goals. Classical planning is considered domain-independent, i.e.,
it is not limited to a particular application and can be used to solve
different types of reasoning problems. In practice, however, some properties of
a planning problem at hand require an expressive extension of the standard
classical planning formalism to capture and model them. Although the importance
of many of these extensions is well known, most planners, especially optimal
planners, do not support these extended planning formalisms. The lack of
support not only limits the use of these planners for certain problems, but
even if it is possible to model the problems without these extensions, it often
leads to increased effort in modeling or makes modeling practically impossible
as the required problem encoding size increases exponentially.
In this thesis, we propose to use symbolic search for cost-optimal planning
for different expressive extensions of classical planning, all capturing
different aspects of the problem. In particular, we study planning with axioms,
planning with state-dependent action costs, oversubscription planning, and
top-k planning. For all formalisms, we present complexity and compilability
results, highlighting that it is desirable and even necessary to natively
support the corresponding features. We analyze symbolic heuristic search and
show that the search performance does not always benefit from the use of a
heuristic and that the search performance can exponentially deteriorate even
under the best possible circumstances, namely the perfect heuristic. This
reinforces that symbolic blind search is the dominant symbolic search strategy
nowadays, on par with other state-of-the-art cost-optimal planning
strategies...
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2022 08:41:06 GMT"
}
]
| 1,649,030,400,000 | [
[
"Speck",
"David",
""
]
]
|
2204.00302 | Stelios Triantafyllou | Stelios Triantafyllou, Adish Singla, Goran Radanovic | Actual Causality and Responsibility Attribution in Decentralized
Partially Observable Markov Decision Processes | In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and
Society (AIES22) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Actual causality and a closely related concept of responsibility attribution
are central to accountable decision making. Actual causality focuses on
specific outcomes and aims to identify decisions (actions) that were critical
in realizing an outcome of interest. Responsibility attribution is
complementary and aims to identify the extent to which decision makers (agents)
are responsible for this outcome. In this paper, we study these concepts under
a widely used framework for multi-agent sequential decision making under
uncertainty: decentralized partially observable Markov decision processes
(Dec-POMDPs). Following recent works in RL that show correspondence between
POMDPs and Structural Causal Models (SCMs), we first establish a connection
between Dec-POMDPs and SCMs. This connection enables us to utilize a language
for describing actual causality from prior work and study existing definitions
of actual causality in Dec-POMDPs. Given that some of the well-known
definitions may lead to counter-intuitive actual causes, we introduce a novel
definition that more explicitly accounts for causal dependencies between
agents' actions. We then turn to responsibility attribution based on actual
causality, where we argue that in ascribing responsibility to an agent it is
important to consider both the number of actual causes in which the agent
participates, as well as its ability to manipulate its own degree of
responsibility. Motivated by these arguments we introduce a family of
responsibility attribution methods that extends prior work, while accounting
for the aforementioned considerations. Finally, through a simulation-based
experiment, we compare different definitions of actual causality and
responsibility attribution methods. The empirical results demonstrate the
qualitative difference between the considered definitions of actual causality
and their impact on attributed responsibility.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2022 09:22:58 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 11:12:31 GMT"
}
]
| 1,660,089,600,000 | [
[
"Triantafyllou",
"Stelios",
""
],
[
"Singla",
"Adish",
""
],
[
"Radanovic",
"Goran",
""
]
]
|
2204.00747 | Bo Hui | Bo Hui, Wenlu Wang, Jiao Yu, Zhitao Gong, Wei-Shinn Ku, Min-Te Sun,
Hua Lu | RFID-Based Indoor Spatial Query Evaluation with Bayesian Filtering
Techniques | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People spend a significant amount of time in indoor spaces (e.g., office
buildings, subway systems, etc.) in their daily lives. Therefore, it is
important to develop efficient indoor spatial query algorithms for supporting
various location-based applications. However, indoor spaces differ from outdoor
spaces because users have to follow the indoor floor plan for their movements.
In addition, positioning in indoor environments is mainly based on sensing
devices (e.g., RFID readers) rather than GPS devices. Consequently, we cannot
apply existing spatial query evaluation techniques devised for outdoor
environments for this new challenge. Because Bayesian filtering techniques can
be employed to estimate the state of a system that changes over time using a
sequence of noisy measurements made on the system, in this research, we propose
the Bayesian filtering-based location inference methods as the basis for
evaluating indoor spatial queries with noisy RFID raw data. Furthermore, two
novel models, indoor walking graph model and anchor point indexing model, are
created for tracking object locations in indoor environments. Based on the
inference method and tracking models, we develop innovative indoor range and k
nearest neighbor (kNN) query algorithms. We validate our solution through use
of both synthetic data and real-world data. Our experimental results show that
the proposed algorithms can evaluate indoor spatial queries effectively and
efficiently. We open-source the code, data, and floor plan at
https://github.com/DataScienceLab18/IndoorToolKit.
| [
{
"version": "v1",
"created": "Sat, 2 Apr 2022 02:52:19 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 21:12:12 GMT"
}
]
| 1,653,609,600,000 | [
[
"Hui",
"Bo",
""
],
[
"Wang",
"Wenlu",
""
],
[
"Yu",
"Jiao",
""
],
[
"Gong",
"Zhitao",
""
],
[
"Ku",
"Wei-Shinn",
""
],
[
"Sun",
"Min-Te",
""
],
[
"Lu",
"Hua",
""
]
]
|
2204.00755 | Steven Carr | Steven Carr, Nils Jansen, Sebastian Junges and Ufuk Topcu | Safe Reinforcement Learning via Shielding under Partial Observability | 21 pages, 28 Figures, 3 Tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Safe exploration is a common problem in reinforcement learning (RL) that aims
to prevent agents from making disastrous decisions while exploring their
environment. A family of approaches to this problem assume domain knowledge in
the form of a (partial) model of this environment to decide upon the safety of
an action. A so-called shield forces the RL agent to select only safe actions.
However, for adoption in various applications, one must look beyond enforcing
safety and also ensure the applicability of RL with good performance. We extend
the applicability of shields via tight integration with state-of-the-art deep
RL, and provide an extensive, empirical study in challenging, sparse-reward
environments under partial observability. We show that a carefully integrated
shield ensures safety and can improve the convergence rate and final
performance of RL agents. We furthermore show that a shield can be used to
bootstrap state-of-the-art RL agents: they remain safe after initial learning
in a shielded setting, allowing us to disable a potentially too conservative
shield eventually.
| [
{
"version": "v1",
"created": "Sat, 2 Apr 2022 03:51:55 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 00:30:45 GMT"
}
]
| 1,661,299,200,000 | [
[
"Carr",
"Steven",
""
],
[
"Jansen",
"Nils",
""
],
[
"Junges",
"Sebastian",
""
],
[
"Topcu",
"Ufuk",
""
]
]
|
2204.01611 | Taewoon Kim | Taewoon Kim, Michael Cochez, Vincent Francois-Lavet, Mark Neerincx,
and Piek Vossen | A Machine With Human-Like Memory Systems | Submitted to Human-Centered Design of Symbiotic Hybrid Intelligence
2022 (https://ii.tudelft.nl/humancenteredsymbioticHI/) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Inspired by the cognitive science theory, we explicitly model an agent with
both semantic and episodic memory systems, and show that it is better than
having just one of the two memory systems. In order to show this, we have
designed and released our own challenging environment, "the Room", compatible
with OpenAI Gym, where an agent has to properly learn how to encode, store, and
retrieve memories to maximize its rewards. The Room environment allows for a
hybrid intelligence setup where machines and humans can collaborate. We show
that two agents collaborating with each other results in better performance
than one agent acting alone. We have open-sourced our code and models at
https://github.com/tae898/explicit-memory.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2022 16:05:53 GMT"
}
]
| 1,649,116,800,000 | [
[
"Kim",
"Taewoon",
""
],
[
"Cochez",
"Michael",
""
],
[
"Francois-Lavet",
"Vincent",
""
],
[
"Neerincx",
"Mark",
""
],
[
"Vossen",
"Piek",
""
]
]
|
2204.01774 | Jordi Levy | Carlos Ans\'otegui, Jordi Levy | Reducing SAT to Max2XOR | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Representing some problems with XOR clauses (parity constraints) can allow to
apply more efficient reasoning techniques. In this paper, we present a gadget
for translating SAT clauses into Max2XOR constraints, i.e., XOR clauses of at
most 2 variables equal to zero or to one. Additionally, we present new
resolution rules for the Max2XOR problem, which asks for the maximum
number of constraints that can be satisfied from a set of 2XOR equations.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2022 18:06:24 GMT"
}
]
| 1,649,203,200,000 | [
[
"Ansótegui",
"Carlos",
""
],
[
"Levy",
"Jordi",
""
]
]
|
2204.02011 | Yongjun Chen | Yongjun Chen and Jia Li and Caiming Xiong | ELECRec: Training Sequential Recommenders as Discriminators | Accepted to SIGIR 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential recommendation is often considered as a generative task, i.e.,
training a sequential encoder to generate the next item of a user's interests
based on her historical interacted items. Despite their prevalence, these
methods usually require training with more meaningful samples to be effective;
otherwise, they will lead to a poorly trained model. In this work, we propose
to train the sequential recommenders as discriminators rather than generators.
Instead of predicting the next item, our method trains a discriminator to
distinguish if a sampled item is a 'real' target item or not. A generator, as
an auxiliary model, is trained jointly with the discriminator to sample
plausible alternative next items and will be thrown out after training. The
trained discriminator is considered as the final SR model and denoted as
\modelname. Experiments conducted on four datasets demonstrate the
effectiveness and efficiency of the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2022 06:19:45 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 07:19:25 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Apr 2022 03:38:15 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Jul 2022 23:47:06 GMT"
}
]
| 1,658,707,200,000 | [
[
"Chen",
"Yongjun",
""
],
[
"Li",
"Jia",
""
],
[
"Xiong",
"Caiming",
""
]
]
|
2204.02360 | Joyjit Chatterjee | Joyjit Chatterjee, Nina Dethlefs | Scientometric Review of Artificial Intelligence for Operations &
Maintenance of Wind Turbines: The Past, Present and Future | This is a preprint version of the accepted manuscript in the
Renewable and Sustainable Energy Reviews journal, shared under a CC-BY-NC-ND
license. The final published version can be found at:
https://doi.org/10.1016/j.rser.2021.111051 | Renewable and Sustainable Energy Reviews, Volume 144, 2021 | 10.1016/j.rser.2021.111051 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Wind energy has emerged as a highly promising source of renewable energy in
recent times. However, wind turbines regularly suffer from operational
inconsistencies, leading to significant costs and challenges in operations and
maintenance (O&M). Condition-based monitoring (CBM) and performance
assessment/analysis of turbines are vital aspects for ensuring efficient O&M
planning and cost minimisation. Data-driven decision making techniques have
witnessed rapid evolution in the wind industry for such O&M tasks during the
last decade, from applying signal processing methods in early 2010 to
artificial intelligence (AI) techniques, especially deep learning in 2020. In
this article, we utilise statistical computing to present a scientometric
review of the conceptual and thematic evolution of AI in the wind energy
sector, providing evidence-based insights into present strengths and
limitations of data-driven decision making in the wind industry. We provide a
perspective into the future and on current key challenges in data availability
and quality, lack of transparency in black box-natured AI models, and
prevailing issues in deploying models for real-time decision support, along
with possible strategies to overcome these problems. We hope that a systematic
analysis of the past, present and future of CBM and performance assessment can
encourage more organisations to adopt data-driven decision making techniques in
O&M towards making wind energy sources more reliable, contributing to the
global efforts of tackling climate change.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2022 21:42:21 GMT"
}
]
| 1,649,203,200,000 | [
[
"Chatterjee",
"Joyjit",
""
],
[
"Dethlefs",
"Nina",
""
]
]
|
2204.02495 | Saujas Vaduguru | Saujas Vaduguru, Kevin Ellis, Yewen Pu | Efficient Pragmatic Program Synthesis with Informative Specifications | 9 pages, Meaning in Context Workshop 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Providing examples is one of the most common ways for end-users to interact
with program synthesizers. However, program synthesis systems assume that
examples consistent with the program are chosen at random, and do not exploit
the fact that users choose examples pragmatically. Prior work modeled program
synthesis as pragmatic communication, but required an inefficient enumeration
of the entire program space. In this paper, we show that it is possible to
build a program synthesizer that is both pragmatic and efficient by
approximating the joint distribution of programs with a product of independent
factors, and performing pragmatic inference on each factor separately. This
factored distribution approximates the exact joint distribution well when the
examples are given pragmatically, and is compatible with a basic neuro-symbolic
program synthesis algorithm. Surprisingly, we find that the synthesizer
assuming a factored approximation performs better than a synthesizer assuming
an exact joint distribution when evaluated on natural human inputs. This
suggests that humans may be assuming a factored distribution while
communicating programs.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2022 21:25:58 GMT"
}
]
| 1,649,289,600,000 | [
[
"Vaduguru",
"Saujas",
""
],
[
"Ellis",
"Kevin",
""
],
[
"Pu",
"Yewen",
""
]
]
|
2204.02737 | Cezary Kaliszyk | Stanis{\l}aw J. Purga{\l} and Cezary Kaliszyk | Adversarial Learning to Reason in an Arbitrary Logic | null | FLAIRS 2022 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing approaches to learning to prove theorems focus on particular logics
and datasets. In this work, we propose Monte-Carlo simulations guided by
reinforcement learning that can work in an arbitrarily specified logic, without
any human knowledge or set of problems. Since the algorithm does not need any
training dataset, it is able to learn to work with any logical foundation, even
when there is no body of proofs or even conjectures available. We practically
demonstrate the feasibility of the approach in multiple logical systems. The
approach is stronger than training on randomly generated data but weaker than
the approaches trained on tailored axiom and conjecture sets. It however allows
us to apply machine learning to automated theorem proving for many logics,
where no such attempts have been tried to date, such as intuitionistic logic or
linear logic.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2022 11:25:19 GMT"
}
]
| 1,649,289,600,000 | [
[
"Purgał",
"Stanisław J.",
""
],
[
"Kaliszyk",
"Cezary",
""
]
]
|
2204.02929 | Carlos Linares L\'opez | Sofia Lemons and Carlos Linares L\'opez and Robert C. Holte and
Wheeler Ruml | Beam Search: Faster and Monotonic | 9 pages, 15 figures, 3 algorithms, published in the International
Conference on Automated Planning and Scheduling ICAPS 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Beam search is a popular satisficing approach to heuristic search problems
that allows one to trade increased computation time for lower solution cost by
increasing the beam width parameter. We make two contributions to the study of
beam search. First, we show how to make beam search monotonic; that is, we
provide a new variant that guarantees non-increasing solution cost as the beam
width is increased. This makes setting the beam parameter much easier. Second,
we show how using distance-to-go estimates can allow beam search to find better
solutions more quickly in domains with non-uniform costs. Together, these
results improve the practical effectiveness of beam search.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2022 16:40:13 GMT"
}
]
| 1,649,289,600,000 | [
[
"Lemons",
"Sofia",
""
],
[
"López",
"Carlos Linares",
""
],
[
"Holte",
"Robert C.",
""
],
[
"Ruml",
"Wheeler",
""
]
]
|
2204.03536 | Till Hofmann | Till Hofmann, Vaishak Belle | Abstracting Noisy Robot Programs | To be presented at AAMAS'23 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Abstraction is a commonly used process to represent some low-level system by
a more coarse specification with the goal to omit unnecessary details while
preserving important aspects. While recent work on abstraction in the situation
calculus has focused on non-probabilistic domains, we describe an approach to
abstraction of probabilistic and dynamic systems. Based on a variant of the
situation calculus with probabilistic belief, we define a notion of
bisimulation that allows to abstract a detailed probabilistic basic action
theory with noisy actuators and sensors by a possibly non-stochastic basic
action theory. By doing so, we obtain abstract Golog programs that omit
unnecessary details and which can be translated back to a detailed program for
actual execution. This simplifies the implementation of noisy robot programs,
opens up the possibility of using non-stochastic reasoning methods (e.g.,
planning) on probabilistic problems, and provides domain descriptions that are
more easily understandable and explainable.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2022 16:04:19 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2023 14:01:54 GMT"
}
]
| 1,677,715,200,000 | [
[
"Hofmann",
"Till",
""
],
[
"Belle",
"Vaishak",
""
]
]
|
2204.03551 | Sri Harikrishnan | Martin Caminada, Sri Harikrishnan | Strong Admissibility, a Tractable Algorithmic Approach (proofs) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Much like admissibility is the key concept underlying preferred semantics,
strong admissibility is the key concept underlying grounded semantics, as
membership of a strongly admissible set is sufficient to show membership of the
grounded extension. As such, strongly admissible sets and labellings can be
used as an explanation of membership of the grounded extension, as is for
instance done in some of the proof procedures for grounded semantics. In the
current paper, we present two polynomial algorithms for constructing relatively
small strongly admissible labellings, with associated min-max numberings, for a
particular argument. These labellings can be used as relatively small
explanations for the argument's membership of the grounded extension. Although
our algorithms are not guaranteed to yield an absolute minimal strongly
admissible labelling for the argument (as doing so would have implied an
exponential complexity), our best performing algorithm yields results that are
only marginally bigger. Moreover, the runtime of this algorithm is an order of
magnitude smaller than that of the existing approach for computing an absolute
minimal strongly admissible labelling for a particular argument. As such, we
believe that our algorithms can be of practical value in situations where the
aim is to construct a minimal or near-minimal strongly admissible labelling in
a time-efficient way.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2022 16:22:52 GMT"
}
]
| 1,649,376,000,000 | [
[
"Caminada",
"Martin",
""
],
[
"Harikrishnan",
"Sri",
""
]
]
|
2204.03596 | Till Hofmann | Till Hofmann, Stefan Schupp | Controlling Golog Programs against MTL Constraints | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | While Golog is an expressive programming language to control the high-level
behavior of a robot, it is often tedious to use on a real robotic system. On an
actual robot, the user needs to consider low-level details, such as enabling
and disabling hardware components, e.g., a camera to detect objects for
grasping. In other words, high-level actions usually pose implicit temporal
constraints on the low-level platform, which are typically independent of the
concrete program to be executed. In this paper, we propose to make these
constraints explicit by modeling them as MTL formulas, which enforce the
execution of certain low-level platform operations in addition to the main
program. Based on results from timed automata controller synthesis, we describe
a method to synthesize a controller that executes both the high-level program
and the low-level platform operations concurrently in order to satisfy the MTL
specification. This allows the user to focus on the high-level behavior without
the need to consider low-level operations. We present an extension to Golog by
clocks together with the required theoretical foundations as well as
decidability results.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:16:37 GMT"
}
]
| 1,649,376,000,000 | [
[
"Hofmann",
"Till",
""
],
[
"Schupp",
"Stefan",
""
]
]
|
2204.03752 | Jean-Baptiste Herv\'e | Jean-Baptiste Herv\'e, Christoph Salge | Automated Isovist Computation for Minecraft | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural content generation for games is a growing trend in both research
and industry, even though there is no consensus on what good content looks like, nor
how to automatically evaluate it. A number of metrics have been developed in
the past, usually focused on the artifact as a whole, and mostly lacking
grounding in human experience. In this study we develop a new set of automated
metrics, motivated by ideas from architecture, namely isovists and space
syntax, which have a track record of capturing human experience of space. These
metrics can be computed for a specific game state, from the player's
perspective, and take into account their embodiment in the game world. We show
how to apply those metrics to the 3d blockworld of Minecraft. We use a dataset
of generated settlements from the GDMC Settlement Generation Challenge in
Minecraft and establish several rank-based correlations between the isovist
properties and the ratings human judges gave those settlements. We also produce
a range of heat maps that demonstrate the location-based applicability of the
approach, which allows for development of those metrics as measures for a game
experience at a specific time and space.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2022 21:41:06 GMT"
}
]
| 1,649,635,200,000 | [
[
"Hervé",
"Jean-Baptiste",
""
],
[
"Salge",
"Christoph",
""
]
]
|
2204.04009 | Sagar Malhotra | Sagar Malhotra and Luciano Serafini | On Projectivity in Markov Logic Networks | Added formal comparison to previous projective fragments. Added a
proof for transforming RBM to MLN. For the most updated version please visit
: https://countinglogic.github.io/files/Projectivity.pdf | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Markov Logic Networks (MLNs) define a probability distribution on relational
structures over varying domain sizes. Many works have noticed that MLNs, like
many other relational models, do not admit consistent marginal inference over
varying domain sizes. Furthermore, MLNs learnt on a certain domain do not
generalize to new domains of varied sizes. In recent works, connections have
emerged between domain size dependence, lifted inference and learning from
sub-sampled domains. The central idea to these works is the notion of
projectivity. The probability distributions ascribed by projective models
render the marginal probabilities of sub-structures independent of the domain
cardinality. Hence, projective models admit efficient marginal inference,
removing any dependence on the domain size. Furthermore, projective models
potentially allow efficient and consistent parameter learning from sub-sampled
domains. In this paper, we characterize the necessary and sufficient conditions
for a two-variable MLN to be projective. We then isolate a special model in
this class of MLNs, namely Relational Block Model (RBM). We show that, in terms
of data likelihood maximization, RBM is the best possible projective MLN in the
two-variable fragment. Finally, we show that RBMs also admit consistent
parameter learning over sub-sampled domains.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2022 11:37:53 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 13:56:46 GMT"
}
]
| 1,651,795,200,000 | [
[
"Malhotra",
"Sagar",
""
],
[
"Serafini",
"Luciano",
""
]
]
|
2204.04071 | Bruno Yun | Bruno Yun, Nir Oren, Madalina Croitoru | Utility Functions for Human/Robot Interaction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we place ourselves in the context of human-robot interaction
and address the problem of cognitive robot modelling. More precisely we are
investigating properties of a utility-based model that will govern a robot's
actions. The novelty of this approach lies in embedding the responsibility of
the robot over the state of affairs into the utility model via a utility
aggregation function. We describe desiderata for such a function and consider
related properties.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2022 13:41:07 GMT"
}
]
| 1,649,635,200,000 | [
[
"Yun",
"Bruno",
""
],
[
"Oren",
"Nir",
""
],
[
"Croitoru",
"Madalina",
""
]
]
|
2204.04148 | Marco Pegoraro | Marco Pegoraro | Process Mining on Uncertain Event Data | 2 pages, 1 figure, 2 tables, 9 references | CEUR Workshop Proceedings 3098 (2022) 1-2 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the widespread adoption of process mining in organizations, the field of
process science is seeing an increase in the demand for ad-hoc analysis
techniques of non-standard event data. An example of such data are uncertain
event data: events characterized by a described and quantified attribute
imprecision. This paper outlines a research project aimed at developing process
mining techniques able to extract insights from uncertain data. We set the
basis for this research topic, recapitulate the available literature, and
define a future outlook.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2022 15:56:00 GMT"
}
]
| 1,649,635,200,000 | [
[
"Pegoraro",
"Marco",
""
]
]
|
2204.04242 | Arnold Hien | Arnold Hien, Samir Loudni, Noureddine Aribi, Abdelkader Ouali,
Albrecht Zimmermann | Exploiting complex pattern features for interactive pattern mining | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent years have seen a shift from a pattern mining process that has users
define constraints beforehand and sift through the results afterwards, to an
interactive one. This new framework depends on exploiting user feedback to
learn a quality function for patterns. Existing approaches have a weakness in
that they use static pre-defined low-level features, and attempt to learn
independent weights representing their importance to the user. As an
alternative, we propose to work with more complex features that are derived
directly from the pattern ranking imposed by the user. Learned weights are then
aggregated onto lower-level features and help to drive the quality function in
the right direction. We explore the effect of different parameter choices
experimentally and find that using higher-complexity features leads to the
selection of patterns that are better aligned with a hidden quality function
while not adding significantly to the run times of the method.
Getting good user feedback requires quickly presenting diverse patterns,
something that we achieve by pushing an existing diversity constraint into the
sampling component of the interactive mining system LetSip. Resulting patterns
allow in most cases to converge to a good solution more quickly.
Combining the two improvements, finally, leads to an algorithm showing clear
advantages over the existing state-of-the-art.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2022 18:33:32 GMT"
}
]
| 1,649,721,600,000 | [
[
"Hien",
"Arnold",
""
],
[
"Loudni",
"Samir",
""
],
[
"Aribi",
"Noureddine",
""
],
[
"Ouali",
"Abdelkader",
""
],
[
"Zimmermann",
"Albrecht",
""
]
]
|
2204.04301 | Rushang Karia | Rushang Karia, Rashmeet Kaur Nayyar, Siddharth Srivastava | Learning Generalized Policy Automata for Relational Stochastic Shortest
Path Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several goal-oriented problems in the real-world can be naturally expressed
as Stochastic Shortest Path Problems (SSPs). However, the computational
complexity of solving SSPs makes finding solutions to even moderately sized
problems intractable. Existing state-of-the-art planners and
heuristics often fail to exploit knowledge learned from solving other
instances. This paper presents an approach for learning \emph{Generalized
Policy Automata} (GPA): non-deterministic partial policies that can be used to
catalyze the solution process. GPAs are learned using relational, feature-based
abstractions, which makes them applicable on broad classes of related problems
with different object names and quantities. Theoretical analysis of this
approach shows that it guarantees completeness and hierarchical optimality.
Empirical analysis shows that this approach effectively learns broadly
applicable policy knowledge in a few-shot fashion and significantly outperforms
state-of-the-art SSP solvers on test problems whose object counts are far
greater than those used during training.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2022 21:30:47 GMT"
},
{
"version": "v2",
"created": "Tue, 10 May 2022 00:00:07 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2022 07:39:48 GMT"
}
]
| 1,665,532,800,000 | [
[
"Karia",
"Rushang",
""
],
[
"Nayyar",
"Rashmeet Kaur",
""
],
[
"Srivastava",
"Siddharth",
""
]
]
|
2204.04322 | Ramon Fraga Pereira | Ramon Fraga Pereira, Andr\'e G. Pereira, Frederico Messa, and Giuseppe
De Giacomo | Iterative Depth-First Search for Fully Observable Non-Deterministic
Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fully Observable Non-Deterministic (FOND) planning models uncertainty through
actions with non-deterministic effects. Existing FOND planning algorithms are
effective and employ a wide range of techniques. However, most of the existing
algorithms are not robust for dealing with both non-determinism and task size.
In this paper, we develop a novel iterative depth-first search algorithm that
solves FOND planning tasks and produces strong cyclic policies. Our algorithm
is explicitly designed for FOND planning, addressing more directly the
non-deterministic aspect of FOND planning, and it also exploits the benefits of
heuristic functions to make the algorithm more effective during the iterative
searching process. We compare our proposed algorithm to well-known FOND
planners, and show that it has robust performance over several distinct types
of FOND domains considering different metrics.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2022 23:10:30 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 13:12:02 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jun 2022 22:49:08 GMT"
}
]
| 1,655,856,000,000 | [
[
"Pereira",
"Ramon Fraga",
""
],
[
"Pereira",
"André G.",
""
],
[
"Messa",
"Frederico",
""
],
[
"De Giacomo",
"Giuseppe",
""
]
]
|
2204.04686 | Tianyang Cao | Tianyang Cao, Shuang Zeng, Xiaodan Xu, Mairgup Mansur, Baobao Chang | DISK: Domain-constrained Instance Sketch for Math Word Problem
Generation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A math word problem (MWP) is a coherent narrative which reflects the
underlying logic of math equations. Successful MWP generation can automate the
writing of mathematics questions. Previous methods mainly generate MWP text
based on inflexible pre-defined templates. In this paper, we propose a neural
model for generating MWP text from math equations. Firstly, we incorporate a
matching model conditioned on the domain knowledge to retrieve an MWP instance
which is most consistent with the ground-truth, where the domain is a latent
variable extracted with a domain summarizer. Secondly, by constructing a
Quantity Cell Graph (QCG) from the retrieved MWP instance and reasoning over
it, we improve the model's comprehension of real-world scenarios and derive a
domain-constrained instance sketch to guide the generation. Besides, the QCG
also interacts with the equation encoder to enhance the alignment between math
tokens (e.g., quantities and variables) and MWP text. Experiments and empirical
analysis on educational MWP set show that our model achieves impressive
performance in both automatic evaluation metrics and human evaluation metrics.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2022 13:54:23 GMT"
}
]
| 1,649,721,600,000 | [
[
"Cao",
"Tianyang",
""
],
[
"Zeng",
"Shuang",
""
],
[
"Xu",
"Xiaodan",
""
],
[
"Mansur",
"Mairgup",
""
],
[
"Chang",
"Baobao",
""
]
]
|
2204.04780 | Majid Khonji | Majid Khonji | A Fully Polynomial Time Approximation Scheme for Constrained MDPs and
Stochastic Shortest Path under Local Transitions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The fixed-horizon constrained Markov Decision Process (C-MDP) is a well-known
model for planning in stochastic environments under operating constraints.
Chance-Constrained MDP (CC-MDP) is a variant that allows bounding the
probability of constraint violation, which is desired in many safety-critical
applications. CC-MDP can also model a class of MDPs, called Stochastic Shortest
Path (SSP), under dead-ends, where there is a trade-off between the
probability-to-goal and cost-to-goal. This work studies the structure of
(C)C-MDP, particularly an important variant that involves local transition. In
this variant, the state reachability exhibits a certain degree of locality and
independence from the remaining states. More precisely, the number of states,
at a given time, that share some reachable future states is always constant.
(C)C-MDP under local transition is NP-Hard even for a planning horizon of two.
In this work, we propose a fully polynomial-time approximation scheme for
(C)C-MDP that computes (near) optimal deterministic policies. Such an algorithm
is among the best approximation algorithm attainable in theory and gives
insights into the approximability of constrained MDP and its variants.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2022 22:08:33 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 17:16:33 GMT"
}
]
| 1,681,862,400,000 | [
[
"Khonji",
"Majid",
""
]
]
|
2204.04918 | Guocheng Qian | Guocheng Qian, Xuanyang Zhang, Guohao Li, Chen Zhao, Yukang Chen,
Xiangyu Zhang, Bernard Ghanem, Jian Sun | When NAS Meets Trees: An Efficient Algorithm for Neural Architecture
Search | 4 pages, accepted at CVPR Workshop 2022 (ECV2022) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key challenge in neural architecture search (NAS) is designing how to
explore wisely in the huge search space. We propose a new NAS method called
TNAS (NAS with trees), which improves search efficiency by exploring only a
small number of architectures while also achieving a higher search accuracy.
TNAS introduces an architecture tree and a binary operation tree, to factorize
the search space and substantially reduce the exploration size. TNAS performs a
modified bi-level Breadth-First Search in the proposed trees to discover a
high-performance architecture. Impressively, TNAS finds the global optimal
architecture on CIFAR-10 with test accuracy of 94.37\% in four GPU hours in
NAS-Bench-201. The average test accuracy is 94.35\%, which outperforms the
state-of-the-art. Code is available at:
\url{https://github.com/guochengqian/TNAS}.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2022 07:34:21 GMT"
}
]
| 1,649,721,600,000 | [
[
"Qian",
"Guocheng",
""
],
[
"Zhang",
"Xuanyang",
""
],
[
"Li",
"Guohao",
""
],
[
"Zhao",
"Chen",
""
],
[
"Chen",
"Yukang",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Sun",
"Jian",
""
]
]
|
2204.04938 | Jieting Luo | Jieting Luo, Beishui Liao and Dov Gabbay | Value-based Practical Reasoning: Modal Logic + Argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous agents are supposed to be able to finish tasks or achieve goals
that are assigned by their users through performing a sequence of actions.
Since there might exist multiple plans that an agent can follow and each plan
might promote or demote different values along each action, the agent should be
able to resolve the conflicts between them and evaluate which plan to follow.
In this paper, we develop a logic-based framework that combines modal
logic and argumentation for value-based practical reasoning with plans. Modal
logic is used as a technique to represent and verify whether a plan with its
local properties of value promotion or demotion can be followed to achieve an
agent's goal. We then propose an argumentation-based approach that allows an
agent to reason about its plans in the form of supporting or objecting to a
plan using the verification results.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2022 08:29:45 GMT"
}
]
| 1,649,721,600,000 | [
[
"Luo",
"Jieting",
""
],
[
"Liao",
"Beishui",
""
],
[
"Gabbay",
"Dov",
""
]
]
|
2204.05148 | Robin Algayres | Robin Algayres, Adel Nabli, Benoit Sagot, Emmanuel Dupoux | Speech Sequence Embeddings using Nearest Neighbors Contrastive Learning | Interspeech 2022 New version on 10/21/23 with appendix data and
gitlab link | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce a simple neural encoder architecture that can be trained using
an unsupervised contrastive learning objective which gets its positive samples
from data-augmented k-Nearest Neighbors search. We show that when built on top
of recent self-supervised audio representations, this method can be applied
iteratively and yield competitive SSE as evaluated on two tasks:
query-by-example of random sequences of speech, and spoken term discovery. On
both tasks our method pushes the state-of-the-art by a significant margin
across 5 different languages. Finally, we establish a benchmark on a
query-by-example task on the LibriSpeech dataset to monitor future improvements
in the field.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2022 14:28:01 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Oct 2023 10:15:36 GMT"
}
]
| 1,698,105,600,000 | [
[
"Algayres",
"Robin",
""
],
[
"Nabli",
"Adel",
""
],
[
"Sagot",
"Benoit",
""
],
[
"Dupoux",
"Emmanuel",
""
]
]
|
2204.05168 | Leye Wang | Leye Wang | The Principle of Least Sensing: A Privacy-Friendly Sensing Paradigm for
Urban Big Data Analytics | null | XRDS: Crossroads, The ACM Magazine for Students, Volume 28, Issue
3, Spring 2022, pp 56-59 | 10.1145/3522696 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the worldwide emergence of data protection regulations, how to conduct
law-regulated big data analytics becomes a challenging and fundamental problem.
This article introduces the principle of least sensing, a promising sensing
paradigm toward law-regulated big data analytics.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2022 14:54:24 GMT"
}
]
| 1,649,721,600,000 | [
[
"Wang",
"Leye",
""
]
]
|
2204.05206 | Selene Baez Santamaria | Selene Baez Santamaria, Emmanouil Manousogiannis, Guusje Boomgaard,
Linh P. Tran, Zoltan Szlavik and Robert-Jan Sips | Access to care: analysis of the geographical distribution of healthcare
using Linked Open Data | Accepted at 4th Workshop on Semantic Web solutions for large-scale
biomedical data analytics (SeWeBMeDA-2020) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Background: Access to medical care is strongly dependent on resource
allocation, such as the geographical distribution of medical facilities.
Nevertheless, this data is usually restricted to countries' official
documentation and not available to the public. While some medical facilities' data
is accessible as semantic resources on the Web, it is not consistent in its
modeling and has yet to be integrated into a complete, open, and specialized
repository. This work focuses on generating a comprehensive semantic dataset of
medical facilities worldwide containing extensive information about such
facilities' geo-location.
Results: For this purpose, we collect, align, and link various open-source
databases where medical facilities' information may be present. This work
allows us to evaluate each data source along various dimensions, such as
completeness, correctness, and interlinking with other sources, all critical
aspects of current knowledge representation technologies.
Conclusions: Our contributions directly benefit stakeholders in the
biomedical and health domain (patients, healthcare professionals, companies,
regulatory authorities, and researchers), who will now have a better overview
of the access to and distribution of medical facilities.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2022 15:51:56 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2022 11:46:33 GMT"
}
]
| 1,664,236,800,000 | [
[
"Santamaria",
"Selene Baez",
""
],
[
"Manousogiannis",
"Emmanouil",
""
],
[
"Boomgaard",
"Guusje",
""
],
[
"Tran",
"Linh P.",
""
],
[
"Szlavik",
"Zoltan",
""
],
[
"Sips",
"Robert-Jan",
""
]
]
|
2204.05217 | Michael Green | Michael Cerny Green, Ahmed Khalifa, M Charity, and Julian Togelius | Persona-driven Dominant/Submissive Map (PDSM) Generation for Tutorials | 10 pages, 7 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a method for automated persona-driven video game
tutorial level generation. Tutorial levels are scenarios in which the player
can explore and discover different rules and game mechanics. Procedural
personas can guide generators to create content which encourages or discourages
certain playstyle behaviors. In this system, we use procedural personas to
calculate the behavioral characteristics of levels which are evolved using the
quality-diversity algorithm known as Constrained MAP-Elites. An evolved map's
quality is determined by its simplicity: the simpler it is, the better it is.
Within this work, we show that the generated maps can strongly encourage or
discourage different persona-like behaviors and range from simple solutions to
complex puzzle-levels, making them perfect candidates for a tutorial generative
system.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2022 16:01:48 GMT"
}
]
| 1,649,721,600,000 | [
[
"Green",
"Michael Cerny",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Charity",
"M",
""
],
[
"Togelius",
"Julian",
""
]
]
|
2204.05512 | Hung Nguyen | Duy-Hung Nguyen and Nguyen Viet Dung Nghiem and Bao-Sinh Nguyen and
Dung Tien Le and Shahab Sabahi and Minh-Tien Nguyen and Hung Le | Make The Most of Prior Data: A Solution for Interactive Text
Summarization with Preference Feedback | The paper is accepted at NAACL 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For summarization, human preference is critical to tame outputs of the
summarizer in favor of human interests, as ground-truth summaries are scarce
and ambiguous. Practical settings require dynamic exchanges between human and
AI agent wherein feedback is provided in an online manner, a few at a time. In
this paper, we introduce a new framework to train summarization models with
preference feedback interactively. By properly leveraging offline data and a
novel reward model, we improve the performance regarding ROUGE scores and
sample-efficiency. Our experiments on three various datasets confirm the
benefit of the proposed framework in active, few-shot and online settings of
preference learning.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 03:56:59 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 03:12:30 GMT"
}
]
| 1,652,400,000,000 | [
[
"Nguyen",
"Duy-Hung",
""
],
[
"Nghiem",
"Nguyen Viet Dung",
""
],
[
"Nguyen",
"Bao-Sinh",
""
],
[
"Le",
"Dung Tien",
""
],
[
"Sabahi",
"Shahab",
""
],
[
"Nguyen",
"Minh-Tien",
""
],
[
"Le",
"Hung",
""
]
]
|
2204.05545 | Prasant Misra | Ajay Narayanan, Prasant Misra, Ankush Ojha, Vivek Bandhu, Supratim
Ghosh, Arunchandar Vasan | A Reinforcement Learning Approach for Electric Vehicle Routing Problem
with Vehicle-to-Grid Supply | 6 pages; 1 figure; Proc. of the Adaptive and Learning Agents Workshop
(ALA 2022), Cruz, Hayes, da Silva, Santos (eds.), May 9-10, 2022, Online,
https:// ala2022.github.io/.2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of electric vehicles (EV) in the last mile is appealing from both
sustainability and operational cost perspectives. In addition to the inherent
cost efficiency of EVs, selling energy back to the grid during peak grid
demand is a potential source of additional revenue for a fleet operator. To
achieve this, EVs have to be at specific locations (discharge points) during
specific points in time (peak period), even while meeting their core purpose of
delivering goods to customers. In this work, we consider the problem of EV
routing with constraints on loading capacity; time window; vehicle-to-grid
energy supply (CEVRPTW-D), which must not only satisfy multiple system
objectives, but also scale efficiently to large problem sizes involving hundreds of
customers and discharge stations. We present QuikRouteFinder that uses
reinforcement learning (RL) for EV routing to overcome these challenges. Using
Solomon datasets, results from RL are compared against exact formulations based
on mixed-integer linear program (MILP) and genetic algorithm (GA)
metaheuristics. On average, the results show that RL is 24 times faster than
MILP and GA, while being close in quality (within 20%) to the optimal.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 06:13:06 GMT"
}
]
| 1,649,808,000,000 | [
[
"Narayanan",
"Ajay",
""
],
[
"Misra",
"Prasant",
""
],
[
"Ojha",
"Ankush",
""
],
[
"Bandhu",
"Vivek",
""
],
[
"Ghosh",
"Supratim",
""
],
[
"Vasan",
"Arunchandar",
""
]
]
|
2204.05576 | Yuan Tian | Yuan Tian, Klaus-Rudolf Kladny, Qin Wang, Zhiwu Huang, Olga Fink | Multi-agent Actor-Critic with Time Dynamical Opponent Model | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In multi-agent reinforcement learning, multiple agents learn simultaneously
while interacting with a common environment and each other. Since the agents
adapt their policies during learning, not only the behavior of a single agent
becomes non-stationary, but also the environment as perceived by the agent.
This renders it particularly challenging to perform policy improvement. In this
paper, we propose to exploit the fact that the agents seek to improve their
expected cumulative reward and introduce a novel \textit{Time Dynamical
Opponent Model} (TDOM) to encode the knowledge that the opponent policies tend
to improve over time. We motivate TDOM theoretically by deriving a lower bound
of the log objective of an individual agent and further propose
\textit{Multi-Agent Actor-Critic with Time Dynamical Opponent Model} (TDOM-AC).
We evaluate the proposed TDOM-AC on a differential game and the Multi-agent
Particle Environment. We show empirically that TDOM achieves superior opponent
behavior prediction during test time. The proposed TDOM-AC methodology
outperforms state-of-the-art Actor-Critic methods on the performed experiments
in cooperative and \textbf{especially} in mixed cooperative-competitive
environments. TDOM-AC results in a more stable training and a faster
convergence.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 07:16:15 GMT"
}
]
| 1,649,808,000,000 | [
[
"Tian",
"Yuan",
""
],
[
"Kladny",
"Klaus-Rudolf",
""
],
[
"Wang",
"Qin",
""
],
[
"Huang",
"Zhiwu",
""
],
[
"Fink",
"Olga",
""
]
]
|
2204.05579 | Jo\v{z}e Ro\v{z}anec | Jo\v{z}e M. Ro\v{z}anec, Elena Trajkova, Inna Novalija, Patrik Zajec,
Klemen Kenda, Bla\v{z} Fortuna, Dunja Mladeni\'c | Enriching Artificial Intelligence Explanations with Knowledge Fragments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence models are increasingly used in manufacturing to
inform decision-making. Responsible decision-making requires accurate forecasts
and an understanding of the models' behavior. Furthermore, the insights into
models' rationale can be enriched with domain knowledge. This research builds
explanations considering feature rankings for a particular forecast, enriching
them with media news entries, datasets' metadata, and entries from the Google
Knowledge Graph. We compare two approaches (embeddings-based and
semantic-based) on a real-world use case regarding demand forecasting.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 07:19:30 GMT"
}
]
| 1,649,808,000,000 | [
[
"Rožanec",
"Jože M.",
""
],
[
"Trajkova",
"Elena",
""
],
[
"Novalija",
"Inna",
""
],
[
"Zajec",
"Patrik",
""
],
[
"Kenda",
"Klemen",
""
],
[
"Fortuna",
"Blaž",
""
],
[
"Mladenić",
"Dunja",
""
]
]
|
2204.05627 | Shurong Mo | Shurong Mo, Nailong Wu, Jie Qi, Anqi Pan, Zhiguang Feng, Huaicheng
Yan, Yueying Wang | Proximal Policy Optimization Learning based Control of Congested Freeway
Traffic | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study proposes a delay-compensated feedback controller based on proximal
policy optimization (PPO) reinforcement learning to stabilize traffic flow in
the congested regime by manipulating the time-gap of adaptive cruise
control-equipped (ACC-equipped) vehicles. The traffic dynamics on a freeway
segment are governed by an Aw-Rascle-Zhang (ARZ) model, consisting of $2\times
2$ nonlinear first-order partial differential equations (PDEs). Inspired by the
backstepping delay compensator [18], but avoiding its complex segmented control
scheme, the PPO controller combines three feedback terms: the current traffic
flow velocity, the current traffic flow density, and the previous control
input. The control gains for the three feedbacks are learned
from the interaction between the PPO and the numerical simulator of the traffic
system without knowing the system dynamics. Numerical simulation experiments
are designed to compare the Lyapunov control, the backstepping control, and the
PPO control. The results show that for a delay-free system, the PPO control
converges faster and requires less control effort than the Lyapunov control.
For a traffic system with input delay, the performance of the PPO controller is
comparable to that of the backstepping controller, even when the assumed delay
value does not match the actual one. However, the PPO controller is robust to
parameter perturbations, whereas the backstepping controller cannot stabilize a
system in which one of the parameters is disturbed by Gaussian noise.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 08:36:21 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Jan 2023 11:52:41 GMT"
}
]
| 1,674,000,000,000 | [
[
"Mo",
"Shurong",
""
],
[
"Wu",
"Nailong",
""
],
[
"Qi",
"Jie",
""
],
[
"Pan",
"Anqi",
""
],
[
"Feng",
"Zhiguang",
""
],
[
"Yan",
"Huaicheng",
""
],
[
"Wang",
"Yueying",
""
]
]
|
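The PPO controller in the record above is described as a linear combination of three feedback terms with learned gains. As an illustration only (the function name and gain values are hypothetical; in the paper the gains are learned by PPO against a numerical traffic simulator), the control law can be sketched as:

```python
def feedback_control(gains, velocity, density, u_prev):
    """Linear feedback with learned gains (k1, k2, k3) applied to the
    current traffic flow velocity, the current traffic flow density,
    and the previous one-step control input."""
    k1, k2, k3 = gains
    return k1 * velocity + k2 * density + k3 * u_prev
```

In the paper's setting the gains are not hand-tuned: they are updated from the interaction between the PPO agent and the simulator, without knowledge of the ARZ system dynamics.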
2204.06076 | Jacqueline Kueper | Jacqueline K. Kueper, Jennifer Rayner, Daniel J. Lizotte | Hybrid Feature- and Similarity-Based Models for Joint Prediction and
Interpretation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Electronic health records (EHRs) include simple features like patient age
together with more complex data like care history that are informative but not
easily represented as individual features. To better harness such data, we
developed an interpretable hybrid feature- and similarity-based model for
supervised learning that combines feature and kernel learning for prediction
and for investigation of causal relationships. We fit our hybrid models by
convex optimization with a sparsity-inducing penalty on the kernel. Depending
on the desired model interpretation, the feature and kernel coefficients can be
learned sequentially or simultaneously. The hybrid models showed comparable or
better predictive performance than solely feature- or similarity-based
approaches in a simulation study and in a case study to predict two-year risk
of loneliness or social isolation with EHR data from a complex primary health
care population. Using the case study we also present new kernels for
high-dimensional indicator-coded EHR data that are based on deviations from
population-level expectations, and we identify considerations for causal
interpretations.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 20:37:03 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2023 23:07:31 GMT"
}
]
| 1,676,332,800,000 | [
[
"Kueper",
"Jacqueline K.",
""
],
[
"Rayner",
"Jennifer",
""
],
[
"Lizotte",
"Daniel J.",
""
]
]
|
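The record above mentions kernels for high-dimensional indicator-coded EHR data based on deviations from population-level expectations. A minimal sketch of one such kernel (the paper's exact form may differ; function names are illustrative): center each binary indicator by its population prevalence, so that two patients sharing a rare indicator count as more similar than two sharing a near-universal one.

```python
def prevalences(X):
    """Population-level prevalence of each binary indicator feature."""
    n = len(X)
    return [sum(row[j] for row in X) / n for j in range(len(X[0]))]

def deviation_kernel(x, z, p):
    """Similarity of two patients as agreement in their deviations from
    population expectations; shared rare indicators contribute most."""
    return sum((xi - pi) * (zi - pi) for xi, zi, pi in zip(x, z, p))
```

With prevalences p = [1.0, 0.25], two patients sharing the rare second indicator score higher than two sharing only the universal first one, which is the intended behavior of a deviation-based kernel.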
2204.06117 | Huili Chen | Huili Chen, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar | AdaTest:Reinforcement Learning and Adaptive Sampling for On-chip
Hardware Trojan Detection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes AdaTest, a novel adaptive test pattern generation
framework for efficient and reliable Hardware Trojan (HT) detection. HT is a
backdoor attack that tampers with the design of victim integrated circuits
(ICs). AdaTest improves the existing HT detection techniques in terms of
scalability and accuracy of detecting smaller Trojans in the presence of noise
and variations. To achieve high trigger coverage, AdaTest leverages
Reinforcement Learning (RL) to produce a diverse set of test inputs.
Particularly, we progressively generate test vectors with high reward values in
an iterative manner. In each iteration, the test set is evaluated and
adaptively expanded as needed. Furthermore, AdaTest integrates adaptive
sampling to prioritize test samples that provide more information for HT
detection, thus reducing the number of samples while improving the sample
quality for faster exploration. We develop AdaTest with a Software/Hardware
co-design principle and provide an optimized on-chip architecture solution.
AdaTest's architecture minimizes the hardware overhead in two ways: (i)
Deploying circuit emulation on programmable hardware to accelerate reward
evaluation of the test input; (ii) Pipelining each computation stage in AdaTest
by automatically constructing auxiliary circuit for test input generation,
reward evaluation, and adaptive sampling. We evaluate AdaTest's performance on
various HT benchmarks and compare it with two prior works that use logic
testing for HT detection. Experimental results show that AdaTest yields up to
two orders of magnitude speedup in test generation and two orders of magnitude
reduction in test set size compared to the prior works, while achieving the
same or a higher Trojan detection rate.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2022 23:56:59 GMT"
}
]
| 1,649,894,400,000 | [
[
"Chen",
"Huili",
""
],
[
"Zhang",
"Xinqiao",
""
],
[
"Huang",
"Ke",
""
],
[
"Koushanfar",
"Farinaz",
""
]
]
|
2204.06138 | Yinglong Ma | Gao Pengfei, Lai Dedi, Zhao Lijiao, Liang Yue, Ma Yinglong | A Three-phase Augmented Classifiers Chain Approach Based on
Co-occurrence Analysis for Multi-Label Classification | 31 pages, 5 figures, 6 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a very popular multi-label classification method, Classifier Chains has
recently been widely applied to many multi-label classification tasks. However,
existing Classifier Chains methods struggle to model and exploit the underlying
dependencies in the label space, and often suffer from poorly ordered chains
and error propagation. In this paper, we present a
three-phase augmented Classifier Chains approach based on co-occurrence
analysis for multi-label classification. First, we propose a co-occurrence
matrix method to model the underlying correlations between a label and its
precedents and further determine the head labels of a chain. Second, we propose
two augmented strategies for optimizing the label order of a chain so as to
approximate the underlying label correlations in the label space: Greedy
Order Classifier Chain and Trigram Order Classifier Chain. Extensive
experiments were conducted on six benchmark datasets, and the results show that
the proposed augmented CC approaches significantly improve multi-label
classification performance in comparison with CC and its popular variants,
achieving superior performance while maintaining lower computational costs.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2022 02:10:14 GMT"
}
]
| 1,649,894,400,000 | [
[
"Pengfei",
"Gao",
""
],
[
"Dedi",
"Lai",
""
],
[
"Lijiao",
"Zhao",
""
],
[
"Yue",
"Liang",
""
],
[
"Yinglong",
"Ma",
""
]
]
|
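The record above orders a classifier chain using a label co-occurrence matrix and a greedy strategy. The paper's exact scoring may differ; as a plausible sketch, the co-occurrence counts can be computed from the binary label matrix, the head label picked as the most co-occurring one, and the chain extended greedily by the label most correlated with its predecessor:

```python
def cooccurrence_matrix(Y):
    """Pairwise co-occurrence counts from a binary label matrix
    Y (n_samples x n_labels)."""
    L = len(Y[0])
    C = [[0] * L for _ in range(L)]
    for row in Y:
        for i in range(L):
            if row[i]:
                for j in range(L):
                    if row[j] and i != j:
                        C[i][j] += 1
    return C

def greedy_chain_order(Y):
    """Greedy ordering: start at the label with the most co-occurrences
    (the head label), then repeatedly append the unused label that
    co-occurs most with the current chain tail."""
    C = cooccurrence_matrix(Y)
    L = len(C)
    head = max(range(L), key=lambda i: sum(C[i]))
    order, used = [head], {head}
    while len(order) < L:
        prev = order[-1]
        nxt = max((j for j in range(L) if j not in used),
                  key=lambda j: C[prev][j])
        order.append(nxt)
        used.add(nxt)
    return order
```

The resulting order would then fix the sequence in which the per-label classifiers of the chain are trained, each receiving the predictions of its predecessors as extra features.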
2204.06179 | Yinglong Ma | Chen Xiaona, Ahmad Tanvir, Ma Yinglong | An Ensemble Learning Based Approach to Multi-label Power Text
Classification for Fault-type Recognition | 23 pages, 1 figure, 5 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of ICT Custom Services (ICT CS) in power
industries, the deployed power ICT CS systems mainly rely on the experience of
customer service staff for fault type recognition, questioning, and answering,
which makes it difficult and inefficient to precisely resolve the problems
raised by users. To address this problem, in this paper, first, a multi-label
fault text classification ensemble approach called BR-GBDT is proposed by
combining Binary Relevance and Gradient Boosting Decision Tree for assisted
fault type diagnosis and improving the accuracy of fault type recognition.
Second, to address the lack of training data for power ICT multi-label text
classification, an automatic approach is presented to construct a training set
from the historical fault text data stored in power
ICT CS systems. Extensive experiments were conducted on the power ICT CS
training set and some general-purpose benchmark datasets. The experimental
results show that our approach outperforms the well-known ensemble
learning based approaches BR+LR and ML-KNN for fault text classification,
efficiently handling the multi-label classification of ICT custom service text
data for fault type recognition.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2022 05:53:55 GMT"
}
]
| 1,649,894,400,000 | [
[
"Xiaona",
"Chen",
""
],
[
"Tanvir",
"Ahmad",
""
],
[
"Yinglong",
"Ma",
""
]
]
|
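BR-GBDT, in the record above, combines Binary Relevance with gradient-boosted trees: the multi-label problem is decomposed into one independent binary classifier per label. A minimal sketch of the Binary Relevance wrapper, using a stand-in majority-class base learner where the paper would plug in a GBDT (the class names are illustrative, not from the paper):

```python
class PriorClassifier:
    """Stand-in base learner: predicts the more frequent class (ties go
    to 1). In BR-GBDT this slot would hold a gradient-boosted tree."""
    def fit(self, X, y):
        self.pos = sum(y) * 2 >= len(y)
        return self

    def predict(self, X):
        return [1 if self.pos else 0 for _ in X]

class BinaryRelevance:
    """Binary Relevance: one independent binary classifier per label."""
    def __init__(self, base_factory):
        self.base_factory = base_factory

    def fit(self, X, Y):
        L = len(Y[0])
        # Train the l-th model on the l-th column of the label matrix.
        self.models = [self.base_factory().fit(X, [row[l] for row in Y])
                       for l in range(L)]
        return self

    def predict(self, X):
        per_label = [m.predict(X) for m in self.models]
        # Re-assemble per-label predictions into one label vector per sample.
        return [list(cols) for cols in zip(*per_label)]
```

Because each label gets its own model, any binary learner with `fit`/`predict` can be dropped in via `base_factory`, which is how the BR+GBDT and BR+LR baselines mentioned in the abstract differ.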
2204.06355 | Bertoin David | David Bertoin (IMT), Emmanuel Rachelson (DMIA) | Local Feature Swapping for Generalization in Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past few years, the acceleration of computing resources and research
in deep learning has led to significant practical successes in a range of
tasks, including in particular in computer vision. Building on these advances,
reinforcement learning has also seen a leap forward with the emergence of
agents capable of making decisions directly from visual observations. Despite
these successes, the over-parametrization of neural architectures leads to
memorization of the data used during training and thus to a lack of
generalization. Reinforcement learning agents based on visual inputs also
suffer from this phenomenon by erroneously correlating rewards with unrelated
visual features such as background elements. To alleviate this problem, we
introduce a new regularization technique consisting of channel-consistent local
permutations (CLOP) of the feature maps. The proposed permutations induce
robustness to spatial correlations and help prevent overfitting behaviors in
RL. We demonstrate, on the OpenAI Procgen Benchmark, that RL agents trained
with the CLOP method exhibit robustness to visual changes and better
generalization properties than agents trained using other state-of-the-art
regularization techniques. We also demonstrate the effectiveness of CLOP as a
general regularization technique in supervised learning.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2022 13:12:51 GMT"
}
]
| 1,649,894,400,000 | [
[
"Bertoin",
"David",
"",
"IMT"
],
[
"Rachelson",
"Emmanuel",
"",
"DMIA"
]
]
|
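The CLOP regularizer in the record above applies local permutations of feature-map positions that are consistent across channels. A pure-Python sketch of this idea (the paper's exact windowing and sampling scheme may differ; parameters here are illustrative): within each local window of spatial positions, one random permutation is drawn and applied identically to every channel, so spatial structure is perturbed while inter-channel correspondence is preserved.

```python
import random

def clop(feature_map, window=4, p=0.5, rng=None):
    """Channel-consistent local permutation (CLOP) sketch.

    feature_map: list of C channels, each a flat list of H*W values.
    Each non-overlapping window of spatial positions is shuffled with
    probability p, using the SAME permutation for every channel.
    """
    rng = rng or random.Random(0)
    n = len(feature_map[0])
    out = [ch[:] for ch in feature_map]
    for start in range(0, n, window):
        idx = list(range(start, min(start + window, n)))
        if rng.random() < p:
            perm = idx[:]
            rng.shuffle(perm)  # one permutation, shared by all channels
            for ch_in, ch_out in zip(feature_map, out):
                for src, dst in zip(perm, idx):
                    ch_out[dst] = ch_in[src]
    return out
```

Applied as a training-time augmentation on intermediate feature maps, such permutations discourage the network from memorizing exact spatial layouts (e.g., background positions) while keeping per-channel content intact.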
2204.06403 | Xiaowei Wang | Xinyi Yu, Xiaowei Wang, Jintao Rong, Mingyang Zhang, Linlin Ou | Efficient Re-parameterization Operations Search for Easy-to-Deploy
Network Based on Directional Evolutionary Strategy | 21 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural re-parameterization (Rep) methods have achieved significant
performance improvements on traditional convolutional networks. Most current Rep
methods rely on prior knowledge to select the reparameterization operations.
However, the performance of architecture is limited by the type of operations
and prior knowledge. To break this restriction, in this work, an improved
re-parameterization search space is designed, which includes more types of
re-parameterization operations. Concretely, this search space can further
improve the performance of convolutional networks. To effectively explore
this search space, an automatic re-parameterization enhancement strategy is
designed based on neural architecture search (NAS), which can discover an
excellent re-parameterization architecture. In addition, we visualize the output
features of the architecture to analyze the reasons for the formation of the
re-parameterization architecture. On public datasets, we achieve better
results. Under the same training conditions as ResNet, we improve the accuracy
of ResNet-50 by 1.82% on ImageNet-1k.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2022 14:07:20 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2022 05:22:35 GMT"
}
]
| 1,656,979,200,000 | [
[
"Yu",
"Xinyi",
""
],
[
"Wang",
"Xiaowei",
""
],
[
"Rong",
"Jintao",
""
],
[
"Zhang",
"Mingyang",
""
],
[
"Ou",
"Linlin",
""
]
]
|
2204.06438 | April Niu | April Niu, Agnes Totschnig, Adrian Vetta | Fair Algorithm Design: Fair and Efficacious Machine Scheduling | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by a plethora of practical examples where bias is induced by
automated-decision making algorithms, there has been strong recent interest in
the design of fair algorithms. However, there is often a dichotomy between
fairness and efficacy: fair algorithms may proffer low social welfare solutions
whereas welfare optimizing algorithms may be very unfair. This issue is
exemplified in the machine scheduling problem where, for $n$ jobs, the social
welfare of any fair solution may be a factor $\Omega(n)$ worse than the optimal
welfare. In this paper, we prove that this dichotomy between fairness and
efficacy can be overcome if we allow for a negligible amount of bias: there
exist algorithms that are both "almost perfectly fair" and have a constant
factor efficacy ratio, that is, are guaranteed to output solutions that have
social welfare within a constant factor of optimal welfare. Specifically, for
any $\epsilon>0$, there exist mechanisms with efficacy ratio
$\Theta(\frac{1}{\epsilon})$ and where no agent is more than an $\epsilon$
fraction worse off than they are in the fairest possible solution (given by an
algorithm that does not use personal or type data). Moreover, these bicriteria
guarantees are tight and apply to both the single machine case and the multiple
machine case. The key to our results is the use of Pareto scheduling
mechanisms. These mechanisms, by the judicious use of personal or type data,
are able to exploit Pareto improvements that benefit every individual; such
Pareto improvements would typically be forbidden by fair scheduling algorithms
designed to satisfy standard statistical measures of group fairness. We
anticipate this paradigm, the judicious use of personal data by a fair
algorithm to greatly improve performance at the cost of negligible bias, has
wider application.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2022 14:56:22 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 16:16:43 GMT"
}
]
| 1,689,206,400,000 | [
[
"Niu",
"April",
""
],
[
"Totschnig",
"Agnes",
""
],
[
"Vetta",
"Adrian",
""
]
]
|
2204.06908 | Andreia P. Guerreiro | Andreia P. Guerreiro, Jo\~ao Cortes, Daniel Vanderpooten, Cristina
Bazgan, In\^es Lynce, Vasco Manquinho, Jos\'e Rui Figueira | Exact and approximate determination of the Pareto set using minimal
correction subsets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, it has been shown that the enumeration of Minimal Correction
Subsets (MCS) of Boolean formulas allows solving Multi-Objective Boolean
Optimization (MOBO) formulations. However, a major drawback of this approach is
that most MCSs do not correspond to Pareto-optimal solutions. In fact, one can
only know that a given MCS corresponds to a Pareto-optimal solution when all
MCSs are enumerated. Moreover, if it is not possible to enumerate all MCSs,
then there is no guarantee of the quality of the approximation of the Pareto
frontier. This paper extends the state of the art for solving MOBO using MCSs.
First, we show that it is possible to use MCS enumeration to solve MOBO
problems such that each MCS necessarily corresponds to a Pareto-optimal
solution. Additionally, we also propose two new algorithms that can find a
$(1+\varepsilon)$-approximation of the Pareto frontier using MCS enumeration.
Experimental results in several benchmark sets show that the newly proposed
algorithms allow finding better approximations of the Pareto frontier than
state-of-the-art algorithms, and with guaranteed approximation ratios.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2022 12:08:55 GMT"
}
]
| 1,649,980,800,000 | [
[
"Guerreiro",
"Andreia P.",
""
],
[
"Cortes",
"João",
""
],
[
"Vanderpooten",
"Daniel",
""
],
[
"Bazgan",
"Cristina",
""
],
[
"Lynce",
"Inês",
""
],
[
"Manquinho",
"Vasco",
""
],
[
"Figueira",
"José Rui",
""
]
]
|
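The record above revolves around Pareto-optimal solutions of multi-objective Boolean optimization. The standard dominance check and non-dominated filter (a textbook sketch for minimization objectives, not the paper's MCS-enumeration algorithm) can be written as:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
</n```

The paper's contribution is to reach such non-dominated points directly via Minimal Correction Subset enumeration, rather than filtering a candidate pool after the fact as this brute-force sketch does.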
2204.07123 | Anssi Kanervisto | Rohin Shah, Steven H. Wang, Cody Wild, Stephanie Milani, Anssi
Kanervisto, Vinicius G. Goecks, Nicholas Waytowich, David Watkins-Valls,
Bharat Prakash, Edmund Mills, Divyansh Garg, Alexander Fries, Alexandra
Souly, Chan Jun Shern, Daniel del Castillo, Tom Lieberum | Retrospective on the 2021 BASALT Competition on Learning from Human
Feedback | Accepted to the PMLR NeurIPS 2021 Demo & Competition Track volume | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We held the first-ever MineRL Benchmark for Agents that Solve Almost-Lifelike
Tasks (MineRL BASALT) Competition at the Thirty-fifth Conference on Neural
Information Processing Systems (NeurIPS 2021). The goal of the competition was
to promote research towards agents that use learning from human feedback (LfHF)
techniques to solve open-world tasks. Rather than mandating the use of LfHF
techniques, we described four tasks in natural language to be accomplished in
the video game Minecraft, and allowed participants to use any approach they
wanted to build agents that could accomplish the tasks. Teams developed a
diverse range of LfHF algorithms across a variety of possible human feedback
types. The three winning teams implemented significantly different approaches
while achieving similar performance. Interestingly, their approaches performed
well on different tasks, validating our choice of tasks to include in the
competition. While the outcomes validated the design of our competition, we did
not get as many participants and submissions as our sister competition, MineRL
Diamond. We speculate about the causes of this problem and suggest improvements
for future iterations of the competition.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2022 17:24:54 GMT"
}
]
| 1,649,980,800,000 | [
[
"Shah",
"Rohin",
""
],
[
"Wang",
"Steven H.",
""
],
[
"Wild",
"Cody",
""
],
[
"Milani",
"Stephanie",
""
],
[
"Kanervisto",
"Anssi",
""
],
[
"Goecks",
"Vinicius G.",
""
],
[
"Waytowich",
"Nicholas",
""
],
[
"Watkins-Valls",
"David",
""
],
[
"Prakash",
"Bharat",
""
],
[
"Mills",
"Edmund",
""
],
[
"Garg",
"Divyansh",
""
],
[
"Fries",
"Alexander",
""
],
[
"Souly",
"Alexandra",
""
],
[
"Shern",
"Chan Jun",
""
],
[
"del Castillo",
"Daniel",
""
],
[
"Lieberum",
"Tom",
""
]
]
|
2204.07203 | Ellyn Ayton | Sameera Horawalavithana, Ellyn Ayton, Anastasiya Usenko, Shivam
Sharma, Jasmine Eshun, Robin Cosbey, Maria Glenski, and Svitlana Volkova | EXPERT: Public Benchmarks for Dynamic Heterogeneous Academic Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Machine learning models that learn from dynamic graphs face nontrivial
challenges in learning and inference as both nodes and edges change over time.
The existing large-scale graph benchmark datasets that are widely used by the
community primarily focus on homogeneous node and edge attributes and are
static. In this work, we present a variety of large scale, dynamic
heterogeneous academic graphs to test the effectiveness of models developed for
multi-step graph forecasting tasks. Our novel datasets cover both context and
content information extracted from scientific publications across two
communities: Artificial Intelligence (AI) and Nuclear Nonproliferation (NN). In
addition, we propose a systematic approach to improve the existing evaluation
procedures used in the graph forecasting models.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2022 19:43:34 GMT"
}
]
| 1,650,240,000,000 | [
[
"Horawalavithana",
"Sameera",
""
],
[
"Ayton",
"Ellyn",
""
],
[
"Usenko",
"Anastasiya",
""
],
[
"Sharma",
"Shivam",
""
],
[
"Eshun",
"Jasmine",
""
],
[
"Cosbey",
"Robin",
""
],
[
"Glenski",
"Maria",
""
],
[
"Volkova",
"Svitlana",
""
]
]
|
2204.07471 | David Radke | David Radke, Kate Larson, Tim Brecht | The Importance of Credo in Multiagent Learning | 12 pages, 8 figures, Proceedings of the 22nd International Conference
on Autonomous Agents and Multiagent Systems (AAMAS 2023) | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | We propose a model for multi-objective optimization, a credo, for agents in a
system that are configured into multiple groups (i.e., teams). Our model of
credo regulates how agents optimize their behavior for the groups they belong
to. We evaluate credo in the context of challenging social dilemmas with
reinforcement learning agents. Our results indicate that the interests of
teammates, or the entire system, are not required to be fully aligned for
achieving globally beneficial outcomes. We identify two scenarios without full
common interest that achieve high equality and significantly higher mean
population rewards compared to when the interests of all agents are aligned.
| [
{
"version": "v1",
"created": "Fri, 15 Apr 2022 14:12:13 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 15:04:45 GMT"
}
]
| 1,681,344,000,000 | [
[
"Radke",
"David",
""
],
[
"Larson",
"Kate",
""
],
[
"Brecht",
"Tim",
""
]
]
|
2204.08687 | Yuxuan Sun | Yuxuan Sun, Ethan Carlson, Rebecca Qian, Kavya Srinet, Arthur Szlam | Many Episode Learning in a Modular Embodied Agent via End-to-End
Interaction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work we give a case study of an embodied machine-learning (ML)
powered agent that improves itself via interactions with crowd-workers. The
agent consists of a set of modules, some of which are learned, and others
heuristic. While the agent is not "end-to-end" in the ML sense, end-to-end
interaction is a vital part of the agent's learning mechanism. We describe how
the design of the agent works together with the design of multiple annotation
interfaces to allow crowd-workers to assign credit to module errors from
end-to-end interactions, and to label data for individual modules. Over
multiple rounds of automated human-agent interaction, credit assignment, data
annotation, and model re-training and re-deployment, we demonstrate agent
improvement.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2022 06:11:46 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 18:29:32 GMT"
}
]
| 1,673,395,200,000 | [
[
"Sun",
"Yuxuan",
""
],
[
"Carlson",
"Ethan",
""
],
[
"Qian",
"Rebecca",
""
],
[
"Srinet",
"Kavya",
""
],
[
"Szlam",
"Arthur",
""
]
]
|
2204.09960 | Francesco Fuggitti | Giuseppe De Giacomo, Marco Favorito, Francesco Fuggitti | Planning for Temporally Extended Goals in Pure-Past Linear Temporal
Logic: A Polynomial Reduction to Standard Planning | 26 pages, 8 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study temporally extended goals expressed in Pure-Past LTL (PPLTL). PPLTL
is particularly interesting for expressing goals since it can capture the
sophisticated tasks studied in the Formal Methods literature, while the worst-case
computational complexity of Planning in both deterministic and nondeterministic
domains (FOND) remains the same as for classical reachability goals. However,
while the theory of planning for PPLTL goals is well understood, practical
tools have not been specifically investigated. In this paper, we make a
significant leap forward in the construction of actual tools to handle PPLTL
goals. We devise a technique to polynomially translate planning for PPLTL goals
into standard planning. We show the formal correctness of the translation, its
complexity, and its practical effectiveness through some comparative
experiments. As a result, our translation enables state-of-the-art tools, such
as FD or MyND, to handle PPLTL goals seamlessly, maintaining the impressive
performances they have for classical reachability goals.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2022 08:34:49 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2022 11:40:14 GMT"
},
{
"version": "v3",
"created": "Tue, 31 May 2022 22:38:28 GMT"
}
]
| 1,654,128,000,000 | [
[
"De Giacomo",
"Giuseppe",
""
],
[
"Favorito",
"Marco",
""
],
[
"Fuggitti",
"Francesco",
""
]
]
|
2204.09985 | Matthias Thimm | Matthias Thimm | Revisiting initial sets in abstract argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the notion of initial sets by Xu and Cayrol, i.e., non-empty
minimal admissible sets in abstract argumentation frameworks. Initial sets are
a simple concept for analysing conflicts in an abstract argumentation framework
and for explaining why certain arguments can be accepted. We contribute new
insights on the structure of initial sets and devise a simple non-deterministic
construction principle for any admissible set, based on iterative selection of
initial sets of the original framework and its induced reducts. In particular,
we characterise many existing admissibility-based semantics via this
construction principle, thus providing a constructive explanation on the
structure of extensions. We also investigate certain problems related to
initial sets with respect to their computational complexity.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2022 09:23:12 GMT"
}
]
| 1,650,585,600,000 | [
[
"Thimm",
"Matthias",
""
]
]
|
2204.10358 | Lakshmi Nair | Evana Gizzi, Lakshmi Nair, Sonia Chernova, Jivko Sinapov | Creative Problem Solving in Artificially Intelligent Agents: A Survey
and Framework | 46 pages (including appendix), 17 figures, under submission at
Journal of Artificial Intelligence Research (JAIR) | Journal of Artificial Intelligence Research 2022 | 10.1613/jair.1.13864 | Vol. 75 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Creative Problem Solving (CPS) is a sub-area within Artificial Intelligence
(AI) that focuses on methods for solving off-nominal, or anomalous problems in
autonomous systems. Despite many advancements in planning and learning,
resolving novel problems or adapting existing knowledge to a new context,
especially in cases where the environment may change in unpredictable ways post
deployment, remains a limiting factor in the safe and useful integration of
intelligent systems. The emergence of increasingly autonomous systems dictates
the necessity for AI agents to deal with environmental uncertainty through
creativity. To stimulate further research in CPS, we present a definition and a
framework of CPS, which we adopt to categorize existing AI methods in this
field. Our framework consists of four main components of a CPS problem, namely,
1) problem formulation, 2) knowledge representation, 3) method of knowledge
manipulation, and 4) method of evaluation. We conclude our survey with open
research questions, and suggested directions for the future.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2022 18:31:44 GMT"
}
]
| 1,671,062,400,000 | [
[
"Gizzi",
"Evana",
""
],
[
"Nair",
"Lakshmi",
""
],
[
"Chernova",
"Sonia",
""
],
[
"Sinapov",
"Jivko",
""
]
]
|
2204.10420 | Tom Silver | Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez, Leslie Pack
Kaelbling | PG3: Policy-Guided Planning for Generalized Policy Generation | IJCAI 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A longstanding objective in classical planning is to synthesize policies that
generalize across multiple problems from the same domain. In this work, we
study generalized policy search-based methods with a focus on the score
function used to guide the search over policies. We demonstrate limitations of
two score functions and propose a new approach that overcomes these
limitations. The main idea behind our approach, Policy-Guided Planning for
Generalized Policy Generation (PG3), is that a candidate policy should be used
to guide planning on training problems as a mechanism for evaluating that
candidate. Theoretical results in a simplified setting give conditions under
which PG3 is optimal or admissible. We then study a specific instantiation of
policy search where planning problems are PDDL-based and policies are lifted
decision lists. Empirical results in six domains confirm that PG3 learns
generalized policies more efficiently and effectively than several baselines.
Code: https://github.com/ryangpeixu/pg3
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2022 21:59:25 GMT"
}
]
| 1,650,844,800,000 | [
[
"Yang",
"Ryan",
""
],
[
"Silver",
"Tom",
""
],
[
"Curtis",
"Aidan",
""
],
[
"Lozano-Perez",
"Tomas",
""
],
[
"Kaelbling",
"Leslie Pack",
""
]
]
|
2204.10662 | Gyunam Park | Gyunam Park, Jan Niklas Adams, and Wil. M. P. van der Aalst | OPerA: Object-Centric Performance Analysis | null | LNCS 13607 (2022) 281-292 | 10.1007/978-3-031-17995-2_20 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performance analysis in process mining aims to provide insights on the
performance of a business process by using a process model as a formal
representation of the process. Such insights are reliably interpreted by
process analysts in the context of a model with formal semantics. Existing
techniques for performance analysis assume that a single case notion exists in
a business process (e.g., a patient in a healthcare process). However, in
reality, different objects might interact (e.g., order, item, delivery, and
invoice in an O2C process). In such a setting, traditional techniques may yield
misleading or even incorrect insights on performance metrics such as waiting
time. More importantly, by considering the interaction between objects, we can
define object-centric performance metrics such as synchronization time, pooling
time, and lagging time. In this work, we propose a novel approach to
performance analysis considering multiple case notions by using object-centric
Petri nets as formal representations of business processes. The proposed
approach correctly computes existing performance metrics, while supporting the
derivation of newly-introduced object-centric performance metrics. We have
implemented the approach as a web application and conducted a case study based
on a real-life loan application process.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2022 12:23:06 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 18:09:11 GMT"
}
]
| 1,667,260,800,000 | [
[
"Park",
"Gyunam",
""
],
[
"Adams",
"Jan Niklas",
""
],
[
"van der Aalst",
"Wil. M. P.",
""
]
]
|
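The OPerA record above introduces object-centric performance metrics such as synchronization time. The paper defines these formally on object-centric Petri nets; as a loose, illustrative reading only (not the paper's definition), the synchronization time at an event that consumes several objects can be sketched as the spread of their arrival times:

```python
def synchronization_time(ready_times):
    """Illustrative reading of synchronization time: how long the
    earliest-arriving object waits for the last required object to
    arrive before the event can fire."""
    return max(ready_times) - min(ready_times)
```

In an O2C process, for instance, a delivery event that needs an order, an item, and an invoice cannot fire until all three are available, and a single-case-notion analysis would misattribute that waiting to the process step itself.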
2204.10669 | Ebaa Alnazer | Ebaa Alnazer, Ilche Georgievski, Marco Aiello | Risk Awareness in HTN Planning | 62 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Actual real-world domains are characterised by uncertain situations in which
acting and using resources require embracing risk. Performing actions in such
domains always entails costs of consuming some resource, such as time, money,
or energy, where the knowledge about these costs can range from totally known
to totally unknown and even unknowable probabilities of costs. Think of robotic
domains, where actions and their costs are non-deterministic due to uncertain
factors like obstacles. Choosing which action to perform considering its cost
on the available resource requires taking a stance on risk. Thus, these domains
call for not only planning under uncertainty but also planning while embracing
risk. Taking Hierarchical Task Network (HTN) planning as a widely used planning
technique in real-world applications, one can observe that existing approaches
do not account for risk. That is, computing most probable or optimal plans
using actions with single-valued costs is only enough to express risk
neutrality. In this work, we postulate that HTN planning can become risk aware
by considering expected utility theory, a representative concept of decision
theory that enables choosing actions considering a probability distribution of
their costs and a given risk attitude expressed using a utility function. In
particular, we introduce a general framework for HTN planning that allows
modelling risk and uncertainty using a probability distribution of action costs
upon which we define risk-aware HTN planning as an approach that accounts for
the different risk attitudes and allows computing plans that go beyond risk
neutrality. In fact, we lay out that computing risk-aware plans requires finding
plans with the highest expected utility. Finally, we argue that it is possible
for HTN planning agents to solve specialised risk-aware HTN planning problems
by adapting some existing HTN planning approaches.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2022 12:33:27 GMT"
}
]
| 1,650,844,800,000 | [
[
"Alnazer",
"Ebaa",
""
],
[
"Georgievski",
"Ilche",
""
],
[
"Aiello",
"Marco",
""
]
]
|
2204.10856 | Jo\~ao Cortes Mr. | Jo\~ao Cortes, In\^es Lynce, Vasco Manquinho | New Core-Guided and Hitting Set Algorithms for Multi-Objective
Combinatorial Optimization | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the last decade, a plethora of algorithms for single-objective Boolean
optimization has been proposed that rely on the iterative usage of a highly
effective Propositional Satisfiability (SAT) solver. But the use of SAT solvers
in Multi-Objective Combinatorial Optimization (MOCO) algorithms is still
scarce. Due to this shortage of efficient tools for MOCO, many real-world
applications formulated as multi-objective are simplified to single-objective,
using either a linear combination or a lexicographic ordering of the objective
functions to optimize. In this paper, we extend the state of the art of MOCO
solvers with two novel unsatisfiability-based algorithms. The first is a
core-guided MOCO solver. The second is a hitting set-based MOCO solver.
Experimental results obtained in a wide range of benchmark instances show that
our new unsatisfiability-based algorithms can outperform state-of-the-art
SAT-based algorithms for MOCO.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2022 09:46:44 GMT"
}
]
| 1,650,931,200,000 | [
[
"Cortes",
"João",
""
],
[
"Lynce",
"Inês",
""
],
[
"Manquinho",
"Vasco",
""
]
]
|
2204.11902 | Andr\'es Occhipinti Liberman | Andr\'es Occhipinti Liberman, Blai Bonet, Hector Geffner | Learning First-Order Symbolic Planning Representations That Are Grounded | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Two main approaches have been developed for learning first-order planning
(action) models from unstructured data: combinatorial approaches that yield
crisp action schemas from the structure of the state space, and deep learning
approaches that produce action schemas from states represented by images. A
benefit of the former approach is that the learned action schemas are similar
to those that can be written by hand; a benefit of the latter is that the
learned representations (predicates) are grounded on the images, and as a
result, new instances can be given in terms of images. In this work, we develop
a new formulation for learning crisp first-order planning models that are
grounded on parsed images, a step to combine the benefits of the two
approaches. Parsed images are assumed to be given in a simple O2D language
(objects in 2D) that involves a small number of unary and binary predicates
like "left", "above", "shape", etc. After learning, new planning instances can
be given in terms of pairs of parsed images, one for the initial situation and
the other for the goal. Learning and planning experiments are reported for
several domains including Blocks, Sokoban, IPC Grid, and Hanoi.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2022 18:07:28 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 01:44:12 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Apr 2022 08:56:54 GMT"
}
]
| 1,651,536,000,000 | [
[
"Liberman",
"Andrés Occhipinti",
""
],
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
]
|
2204.12190 | Qize Jiang | Qize Jiang, Minhao Qin, Shengmin Shi, Weiwei Sun and Baihua Zheng | Multi-Agent Reinforcement Learning for Traffic Signal Control through
Universal Communication Method | IJCAI 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to effectively coordinate communication among intersections in real, complex traffic scenarios with multiple intersections is challenging. Existing
approaches only enable communication in a heuristic manner, without
considering the content/importance of information to be shared. In this paper,
we propose a universal communication form UniComm between intersections.
UniComm embeds massive observations collected at one agent into crucial
predictions of their impact on its neighbors, which improves the communication
efficiency and is universal across existing methods. We also propose a concise
network UniLight to make full use of communications enabled by UniComm.
Experimental results on real datasets demonstrate that UniComm universally
improves the performance of existing state-of-the-art methods, and UniLight
significantly outperforms existing methods on a wide range of traffic
situations.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2022 09:48:28 GMT"
}
]
| 1,651,017,600,000 | [
[
"Jiang",
"Qize",
""
],
[
"Qin",
"Minhao",
""
],
[
"Shi",
"Shengmin",
""
],
[
"Sun",
"Weiwei",
""
],
[
"Zheng",
"Baihua",
""
]
]
|
2204.12562 | Daxin Liu | Daxin Liu and Gerhard Lakemeyer | On the Verification of Belief Programs | unpublished | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent paper, Belle and Levesque proposed a framework for a type of
program called belief programs, a probabilistic extension of GOLOG programs
where every action and sensing result could be noisy and every test condition
refers to the agent's subjective beliefs. Inherited from GOLOG programs, the
action-centered feature makes belief programs fairly suitable for high-level
robot control under uncertainty. An important step before deploying such a
program is to verify whether it satisfies the desired properties. At least two
problems arise in such verification: how to formally specify the properties of a
program, and what the complexity of verification is. In this paper, we propose a
formalism for belief programs based on a modal logic of actions and beliefs.
Among other things, this allows us to express PCTL-like temporal properties
smoothly. Moreover, we investigate decidability and undecidability results for the
verification problem of belief programs.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2022 19:52:02 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Apr 2022 12:30:53 GMT"
},
{
"version": "v3",
"created": "Tue, 3 May 2022 13:14:30 GMT"
}
]
| 1,651,622,400,000 | [
[
"Liu",
"Daxin",
""
],
[
"Lakemeyer",
"Gerhard",
""
]
]
|
2204.12704 | Min Zhou | Jiahong Liu, Min Zhou, Philippe Fournier-Viger, Menglin Yang, Lujia
Pan, Mourad Nouioua | Discovering Representative Attribute-stars via Minimum Description
Length | 14pages.Accepted by ICDE 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs are a popular data type found in many domains. Numerous techniques
have been proposed to find interesting patterns in graphs to help understand
the data and support decision-making. However, there are generally two
limitations that hinder their practical use: (1) they have multiple parameters
that are hard to set but greatly influence results, and (2) they generally
focus on identifying complex subgraphs while ignoring relationships between
attributes of nodes.Graphs are a popular data type found in many domains.
Numerous techniques have been proposed to find interesting patterns in graphs
to help understand the data and support decision-making. However, there are
generally two limitations that hinder their practical use: (1) they have
multiple parameters that are hard to set but greatly influence results, (2) and
they generally focus on identifying complex subgraphs while ignoring
relationships between attributes of nodes. To address these problems, we
propose a parameter-free algorithm named CSPM (Compressing Star Pattern Miner)
which identifies star-shaped patterns that indicate strong correlations among
attributes via the concept of conditional entropy and the minimum description
length principle. Experiments performed on several benchmark datasets show that
CSPM reveals insightful and interpretable patterns and is efficient in runtime.
Moreover, quantitative evaluations on two real-world applications show that
CSPM has broad applications as it successfully boosts the accuracy of graph
attribute completion models by up to 30.68\% and uncovers important patterns in
telecommunication alarm data.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2022 05:23:07 GMT"
}
]
| 1,651,104,000,000 | [
[
"Liu",
"Jiahong",
""
],
[
"Zhou",
"Min",
""
],
[
"Fournier-Viger",
"Philippe",
""
],
[
"Yang",
"Menglin",
""
],
[
"Pan",
"Lujia",
""
],
[
"Nouioua",
"Mourad",
""
]
]
|
2204.13305 | Michael Bernreiter | Michael Bernreiter, Wolfgang Dvorak, Anna Rapberger, Stefan Woltran | The Effect of Preferences in Abstract Argumentation Under a
Claim-Centric View | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the effect of preferences in abstract argumentation
under a claim-centric perspective. Recent work has revealed that semantical and
computational properties can change when reasoning is performed on claim-level
rather than on the argument-level, while under certain natural restrictions
(arguments with the same claims have the same outgoing attacks) these
properties are conserved. We now investigate these effects when, in addition,
preferences have to be taken into account and consider four prominent
reductions to handle preferences between arguments. As we shall see, these
reductions give rise to different classes of claim-augmented argumentation
frameworks, and behave differently in terms of semantic properties and
computational complexity. This strengthens the view that the choice of how to
handle preferences must be made with care.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2022 06:51:00 GMT"
}
]
| 1,651,190,400,000 | [
[
"Bernreiter",
"Michael",
""
],
[
"Dvorak",
"Wolfgang",
""
],
[
"Rapberger",
"Anna",
""
],
[
"Woltran",
"Stefan",
""
]
]
|
2204.13329 | Heiko Paulheim | Niclas Heilig, Jan Kirchhoff, Florian Stumpe, Joan Plepi, Lucie Flek,
Heiko Paulheim | Refining Diagnosis Paths for Medical Diagnosis based on an Augmented
Knowledge Graph | Accepted at the 5th Workshop on Semantic Web solutions for
large-scale biomedical data analytics | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Medical diagnosis is the process of making a prediction of the disease a
patient is likely to have, given a set of symptoms and observations. This
requires extensive expert knowledge, in particular when covering a large
variety of diseases. Such knowledge can be coded in a knowledge graph --
encompassing diseases, symptoms, and diagnosis paths. Since both the knowledge
itself and its encoding can be incomplete, refining the knowledge graph with
additional information helps physicians make better predictions. At the same
time, for deployment in a hospital, the diagnosis must be explainable and
transparent. In this paper, we present an approach using diagnosis paths in a
medical knowledge graph. We show that those graphs can be refined using latent
representations with RDF2vec, while the final diagnosis is still made in an
explainable way. Using both an intrinsic as well as an expert-based evaluation,
we show that the embedding-based prediction approach is beneficial for refining
the graph with additional valid conditions.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2022 07:58:33 GMT"
}
]
| 1,651,190,400,000 | [
[
"Heilig",
"Niclas",
""
],
[
"Kirchhoff",
"Jan",
""
],
[
"Stumpe",
"Florian",
""
],
[
"Plepi",
"Joan",
""
],
[
"Flek",
"Lucie",
""
],
[
"Paulheim",
"Heiko",
""
]
]
|
2204.13570 | Kun Gao | Kun Gao, Katsumi Inoue, Yongzhi Cao, Hanpin Wang | Learning First-Order Rules with Differentiable Logic Program Semantics | Accepted by IJCAI 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning first-order logic programs (LPs) from relational facts, which yields
intuitive insights into the data, is a challenging topic in neuro-symbolic
research. We introduce a novel differentiable inductive logic programming (ILP)
model, called differentiable first-order rule learner (DFOL), which finds the
correct LPs from relational facts by searching for the interpretable matrix
representations of LPs. These interpretable matrices are deemed as trainable
tensors in neural networks (NNs). The NNs are devised according to the
differentiable semantics of LPs. Specifically, we first adopt a novel
propositionalization method that transfers facts to NN-readable vector pairs
representing interpretation pairs. We replace the immediate consequence
operator with NN constraint functions consisting of algebraic operations and a
sigmoid-like activation function. We map the symbolic forward-chained format of
LPs into NN constraint functions consisting of operations between subsymbolic
vector representations of atoms. By applying gradient descent, the well-trained
parameters of the NNs can be decoded into precise symbolic LPs in forward-chained
logic format. We demonstrate that DFOL performs well on several standard ILP
datasets, knowledge bases, and probabilistic relation facts, and outperforms
several well-known differentiable ILP models. Experimental results indicate
that DFOL is a precise, robust, scalable, and computationally cheap
differentiable ILP model.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2022 15:33:43 GMT"
}
]
| 1,651,190,400,000 | [
[
"Gao",
"Kun",
""
],
[
"Inoue",
"Katsumi",
""
],
[
"Cao",
"Yongzhi",
""
],
[
"Wang",
"Hanpin",
""
]
]
|
2204.13775 | Riddhiman Adib | Riddhiman Adib, Md Mobasshir Arshed Naved, Chih-Hao Fang, Md Osman
Gani, Ananth Grama, Paul Griffin, Sheikh Iqbal Ahamed, Mohammad Adibuzzaman | CKH: Causal Knowledge Hierarchy for Estimating Structural Causal Models
from Data and Priors | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural causal models (SCMs) provide a principled approach to identifying
causation from observational and experimental data in disciplines ranging from
economics to medicine. However, SCMs, which are typically represented as
graphical models, cannot rely on data alone; they also require the support of
domain knowledge. A key challenge in this context is the absence of a methodological
framework for encoding priors (background knowledge) into causal models in a
systematic manner. We propose an abstraction called causal knowledge hierarchy
(CKH) for encoding priors into causal models. Our approach is based on the
foundation of "levels of evidence" in medicine, with a focus on confidence in
causal information. Using CKH, we present a methodological framework for
encoding causal priors from various information sources and combining them to
derive an SCM. We evaluate our approach on a simulated dataset and demonstrate
overall performance compared to the ground truth causal model with sensitivity
analysis.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2022 20:55:38 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 16:10:16 GMT"
}
]
| 1,662,076,800,000 | [
[
"Adib",
"Riddhiman",
""
],
[
"Naved",
"Md Mobasshir Arshed",
""
],
[
"Fang",
"Chih-Hao",
""
],
[
"Gani",
"Md Osman",
""
],
[
"Grama",
"Ananth",
""
],
[
"Griffin",
"Paul",
""
],
[
"Ahamed",
"Sheikh Iqbal",
""
],
[
"Adibuzzaman",
"Mohammad",
""
]
]
|
2204.14116 | Benjamin Provan-Bessell | Benjamin Provan-Bessell, Marco Dalla, Andrea Visentin, Barry
O'Sullivan | SATfeatPy -- A Python-based Feature Extraction System for Satisfiability | 8 pages, 2 figures, code available at
https://github.com/bprovanbessell/SATfeatPy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature extraction is a fundamental task in the application of machine
learning methods to SAT solving. It is used in algorithm selection and
configuration for solver portfolios and satisfiability classification. Many
approaches have been proposed to extract meaningful attributes from CNF
instances. Most of them lack a working or up-to-date implementation, and their
limited descriptions lack clarity, affecting reproducibility. Furthermore, the
literature lacks a comparison among the features. This paper introduces
SATfeatPy, a library that offers feature extraction techniques for SAT problems
in CNF form. This package implements all the structural
and statistical features from three major papers in the field. The library is
provided in an up-to-date, easy-to-use Python package alongside a detailed
feature description. We show the high accuracy of SAT/UNSAT and problem
category classification, using five sets of features generated using our
library from a dataset of 3000 SAT and UNSAT instances, over ten different
classes of problems. Finally, we compare the usefulness of the features and
importance for predicting a SAT instance's original structure in an ablation
study.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2022 14:10:01 GMT"
}
]
| 1,651,449,600,000 | [
[
"Provan-Bessell",
"Benjamin",
""
],
[
"Dalla",
"Marco",
""
],
[
"Visentin",
"Andrea",
""
],
[
"O'Sullivan",
"Barry",
""
]
]
|
2204.14172 | Maurice Funk | Maurice Funk, Jean Christoph Jung and Carsten Lutz | Frontiers and Exact Learning of ELI Queries under DL-Lite Ontologies | 24 pages, long version of a paper accepted at IJCAI 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study ELI queries (ELIQs) in the presence of ontologies formulated in the
description logic DL-Lite. For the dialect DL-LiteH, we show that ELIQs have a
frontier (set of least general generalizations) that is of polynomial size and
can be computed in polynomial time. In the dialect DL-LiteF, in contrast,
frontiers may be infinite. We identify a natural syntactic restriction that
enables the same positive results as for DL-LiteH. We use our results on
frontiers to show that ELIQs are learnable in polynomial time in the presence
of a DL-LiteH / restricted DL-LiteF ontology in Angluin's framework of exact
learning with only membership queries.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2022 15:56:45 GMT"
}
]
| 1,651,449,600,000 | [
[
"Funk",
"Maurice",
""
],
[
"Jung",
"Jean Christoph",
""
],
[
"Lutz",
"Carsten",
""
]
]
|
2205.00077 | Joseph Singleton | Joseph Singleton and Richard Booth | Who's the Expert? On Multi-source Belief Change | Presented at KR 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Consider the following belief change/merging scenario. A group of information
sources gives a sequence of reports about the state of the world at various
instances (e.g. different points in time). The true states at these instances
are unknown to us. The sources have varying levels of expertise, also unknown
to us, and may be knowledgeable on some topics but not others. This may cause
sources to report false statements in areas they lack expertise. What should we
believe on the basis of these reports? We provide a framework in which to
explore this problem, based on an extension of propositional logic with
expertise formulas. This extended language allows us to express beliefs about
the state of the world at each instance, as well as beliefs about the expertise
of each source. We propose several postulates, provide a couple of families of
concrete operators, and analyse these operators with respect to the postulates.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2022 20:45:54 GMT"
}
]
| 1,651,536,000,000 | [
[
"Singleton",
"Joseph",
""
],
[
"Booth",
"Richard",
""
]
]
|
2205.00215 | Adri\`a Fenoy Barcel\'o | Adri\`a Fenoy, Filippo Bistaffa, Alessandro Farinelli | An attention model for the formation of collectives in real-world
domains | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of forming collectives of agents for real-world
applications aligned with Sustainable Development Goals (e.g., shared mobility,
cooperative learning). We propose a general approach for the formation of
collectives based on a novel combination of an attention model and an integer
linear program (ILP). In more detail, we propose an attention encoder-decoder
model that transforms a collective formation instance to a weighted set packing
problem, which is then solved by an ILP. Results on two real-world domains
(i.e., ridesharing and team formation for cooperative learning) show that our
approach provides solutions that are comparable (in terms of quality) to the
ones produced by state-of-the-art approaches specific to each domain. Moreover,
our solution outperforms the most recent general approach for forming
collectives based on Monte Carlo tree search.
| [
{
"version": "v1",
"created": "Sat, 30 Apr 2022 09:15:36 GMT"
}
]
| 1,651,536,000,000 | [
[
"Fenoy",
"Adrià",
""
],
[
"Bistaffa",
"Filippo",
""
],
[
"Farinelli",
"Alessandro",
""
]
]
|
2205.00299 | Yisi Sang | Yisi Sang, Xiangyang Mou, Jing Li, Jeffrey Stanton, Mo Yu | A Survey of Machine Narrative Reading Comprehension Assessments | accepted for the IJCAI-ECAI2022 Survey Track | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the body of research on machine narrative comprehension grows, there is a
critical need for consideration of performance assessment strategies as well as
the depth and scope of different benchmark tasks. Based on narrative theories,
reading comprehension theories, as well as existing machine narrative reading
comprehension tasks and datasets, we propose a typology that captures the main
similarities and differences among assessment tasks; and discuss the
implications of our typology for new task design and the challenges of
narrative reading comprehension.
| [
{
"version": "v1",
"created": "Sat, 30 Apr 2022 16:06:23 GMT"
}
]
| 1,651,536,000,000 | [
[
"Sang",
"Yisi",
""
],
[
"Mou",
"Xiangyang",
""
],
[
"Li",
"Jing",
""
],
[
"Stanton",
"Jeffrey",
""
],
[
"Yu",
"Mo",
""
]
]
|
2205.00399 | GyeongTaek Lee | GyeongTaek Lee | Learning user-defined sub-goals using memory editing in reinforcement
learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The aim of reinforcement learning (RL) is to allow the agent to achieve the
final goal. Most RL studies have focused on improving the efficiency of
learning to achieve the final goal faster. However, it is very difficult for an
RL model to modify an intermediate route in the process of reaching the final
goal. That is, in existing studies, the agent cannot be controlled to achieve
other sub-goals. If the agent can go through the sub-goals on the way to
the destination, the RL can be applied and studied in various fields. In this
study, I propose a methodology to achieve the user-defined sub-goals as well as
the final goal using memory editing. The memory editing is performed to
generate various sub-goals and give an additional reward to the agent. In
addition, the sub-goals are separately learned from the final goal. I set two
simple environments and various scenarios in the test environments. As a
result, the agent almost successfully passed the sub-goals as well as the final
goal under control. Moreover, the agent could be induced to indirectly visit
novel states in the environments. I expect that this methodology can
be used in the fields that need to control the agent in a variety of scenarios.
| [
{
"version": "v1",
"created": "Sun, 1 May 2022 05:19:51 GMT"
}
]
| 1,651,536,000,000 | [
[
"Lee",
"GyeongTaek",
""
]
]
|
2205.00880 | Sharief Basha Shaik | Rajagopal Reddy N, Sharief Basha Shaik | The Application of Energy and Laplacian Energy of Hesitancy Fuzzy Graph
Based on Similarity Measures in Decision Making Problems | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In this article, a new hesitancy fuzzy similarity measure is defined and then
used to develop the matrix of hesitancy fuzzy similarity measures, which is
subsequently used to classify hesitancy fuzzy graphs using the working
procedure. We build a working procedure (algorithm) for estimating the eligible
reputation score values of experts by applying hesitancy fuzzy preference
relations (HFPRs) and the usual similarity degree of distinct HFPRs to
each other. Finally, we provide real-life numerical examples to
demonstrate and validate our working procedure.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2022 11:24:36 GMT"
}
]
| 1,651,536,000,000 | [
[
"N",
"Rajagopal Reddy",
""
],
[
"Shaik",
"Sharief Basha",
""
]
]
|
2205.00911 | Emil H\"aglund | Emil H\"aglund and Johanna Bj\"orklund | AI-Driven Contextual Advertising: A Technology Report and Implication
Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Programmatic advertising consists in automated auctioning of digital ad
space. Every time a user requests a web page, placeholders on the page are
populated with ads from the highest-bidding advertisers. The bids are typically
based on information about the user, and to an increasing extent, on
information about the surrounding media context. The growing interest in
contextual advertising is in part a counterreaction to the current dependency
on personal data, which is problematic from legal and ethical standpoints. The
transition is further accelerated by developments in Artificial Intelligence
(AI), which allow for a deeper semantic understanding of context and, by
extension, more effective ad placement. In this article, we begin by
identifying context factors that have been shown in previous research to
positively influence how ads are received. We then continue to discuss
applications of AI in contextual advertising, where it adds value by, e.g.,
extracting high-level information about media context and optimising bidding
strategies. However, left unchecked, these new practices can lead to unfair ad
delivery and manipulative use of context. We summarize these and other concerns
for consumers, publishers and advertisers in an implication analysis.
| [
{
"version": "v1",
"created": "Mon, 2 May 2022 13:44:58 GMT"
}
]
| 1,651,536,000,000 | [
[
"Häglund",
"Emil",
""
],
[
"Björklund",
"Johanna",
""
]
]
|
2205.01290 | Jayetri Bardhan | Jayetri Bardhan, Anthony Colas, Kirk Roberts, Daisy Zhe Wang | DrugEHRQA: A Question Answering Dataset on Structured and Unstructured
Electronic Health Records For Medicine Related Queries | 15 pages (including Appendix section), 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper develops the first question answering dataset (DrugEHRQA)
containing question-answer pairs from both structured tables and unstructured
notes from a publicly available Electronic Health Record (EHR). EHRs contain
patient records, stored in structured tables and unstructured clinical notes.
The information in structured and unstructured EHRs is not strictly disjoint:
information may be duplicated, contradictory, or provide additional context
between these sources. Our dataset has medication-related queries, containing
over 70,000 question-answer pairs. To provide a baseline model and help analyze
the dataset, we have used a simple model (MultimodalEHRQA) which uses the
predictions of a modality selection network to choose between EHR tables and
clinical notes to answer the questions. This is used to direct the questions to
the table-based or text-based state-of-the-art QA model. To address the
problem arising from complex, nested queries, we use Relation-Aware Schema
Encoding and Linking for Text-to-SQL Parsers (RAT-SQL); this is the first time
RAT-SQL has been used to test the structure of query templates in EHR data. Our goal is
to provide a benchmark dataset for multi-modal QA systems, and to open up new
avenues of research in improving question answering over EHR structured data by
using context from unstructured clinical data.
| [
{
"version": "v1",
"created": "Tue, 3 May 2022 03:50:50 GMT"
}
]
| 1,651,622,400,000 | [
[
"Bardhan",
"Jayetri",
""
],
[
"Colas",
"Anthony",
""
],
[
"Roberts",
"Kirk",
""
],
[
"Wang",
"Daisy Zhe",
""
]
]
|
2205.01296 | Razvan Andonie | Boris Kovalerchuk, R\u{a}zvan Andonie, Nuno Datia, Kawa Nazemi, Ebad
Banissi | Visual Knowledge Discovery with Artificial Intelligence: Challenges and
Future Directions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This volume is devoted to the emerging field of Integrated Visual Knowledge
Discovery that combines advances in Artificial Intelligence/Machine Learning
(AI/ML) and Visualization/Visual Analytics. Chapters included are extended
versions of the selected AI and Visual Analytics papers and related symposia at
the recent International Information Visualization Conferences (IV2019 and
IV2020). AI/ML face a long-standing challenge of explaining models to humans.
Model explanation is fundamentally a human activity, not only an algorithmic
one. In this chapter, we aim to present challenges and future directions within
the field of Visual Analytics, Visual Knowledge Discovery and AI/ML, and to
discuss the role of visualization in visual AI/ML. In addition, we describe
progress in emerging Full 2D ML, natural language processing, and AI/ML in
multidimensional data aided by visual means.
| [
{
"version": "v1",
"created": "Tue, 3 May 2022 04:17:21 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 15:04:47 GMT"
}
]
| 1,651,708,800,000 | [
[
"Kovalerchuk",
"Boris",
""
],
[
"Andonie",
"Răzvan",
""
],
[
"Datia",
"Nuno",
""
],
[
"Nazemi",
"Kawa",
""
],
[
"Banissi",
"Ebad",
""
]
]
|
2205.01331 | Joachim Schopfel | Renaud Fabre (LED), Otmane Azeroual (DZHW), Patrice Bellot (LIS),
Joachim Sch\"opfel (GERIICO), Daniel Egret (PSL) | GRAPHYP: A Scientific Knowledge Graph with Manifold Subnetworks of
Communities. Detection of Scholarly Disputes in Adversarial Information
Routes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cognitive manifold of published content is currently expanding in all
areas of science. However, Scientific Knowledge Graphs (SKGs) only provide poor
pictures of the adversarial directions and scientific controversies that feed
the production of knowledge. In this Article, we tackle the understanding of
the design of the information space of a cognitive representation of research
activities, and of related bottlenecks that affect search interfaces, in the
mapping of structured objects into graphs. We propose, with SKG GRAPHYP, a
novel graph-designed geometric architecture that optimizes both the detection
of the knowledge manifold of "cognitive communities", and the representation of
alternative paths to adversarial answers to a research question, for instance
in the context of academic disputes. With a methodology for designing "Manifold
Subnetworks of Cognitive Communities", GRAPHYP provides a classification of
distinct search paths in a research field. Users are detected from the variety
of their search practices and classified in "Cognitive communities" from the
analysis of the search history of their logs of scientific documentation. The
manifold of practices is expressed from metrics of differentiated uses by
triplets of nodes shaped into symmetrical graph subnetworks, with the following
three parameters: Mass, Intensity, and Variety.
| [
{
"version": "v1",
"created": "Tue, 3 May 2022 06:35:47 GMT"
}
]
| 1,651,622,400,000 | [
[
"Fabre",
"Renaud",
"",
"LED"
],
[
"Azeroual",
"Otmane",
"",
"DZHW"
],
[
"Bellot",
"Patrice",
"",
"LIS"
],
[
"Schöpfel",
"Joachim",
"",
"GERIICO"
],
[
"Egret",
"Daniel",
"",
"PSL"
]
]
|
2205.01546 | Yukun Feng | Yukun Feng, Feng Li, Ziang Song, Boyuan Zheng, Philipp Koehn | Learn To Remember: Transformer with Recurrent Memory for Document-Level
Machine Translation | Accepted by NAACL-2022 Findings | Findings of the Association for Computational Linguistics: NAACL
2022, 1409--1420 | 10.18653/v1/2022.findings-naacl.105 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Transformer architecture has led to significant gains in machine
translation. However, most studies focus on only sentence-level translation
without considering context dependency within documents, leading to inadequate
document-level coherence. Some recent research has tried to mitigate
this issue by introducing an additional context encoder or translating with
multiple sentences or even the entire document. Such methods may lose the
information on the target side or have an increasing computational complexity
as documents get longer. To address such problems, we introduce a recurrent
memory unit to the vanilla Transformer, which supports the information exchange
between the sentence and previous context. The memory unit is recurrently
updated by acquiring information from sentences, and passing the aggregated
knowledge back to subsequent sentence states. We follow a two-stage training
strategy, in which the model is first trained at the sentence level and then
finetuned for document-level translation. We conduct experiments on three
popular datasets for document-level machine translation and our model has an
average improvement of 0.91 s-BLEU over the sentence-level baseline. We also
achieve state-of-the-art results on TED and News, outperforming the previous
work by 0.36 s-BLEU and 1.49 d-BLEU on average.
| [
{
"version": "v1",
"created": "Tue, 3 May 2022 14:55:53 GMT"
}
]
| 1,666,310,400,000 | [
[
"Feng",
"Yukun",
""
],
[
"Li",
"Feng",
""
],
[
"Song",
"Ziang",
""
],
[
"Zheng",
"Boyuan",
""
],
[
"Koehn",
"Philipp",
""
]
]
|
2205.01979 | Francesco Chiariello | Francesco Chiariello, Fabrizio Maria Maggi, Fabio Patrizi | ASP-Based Declarative Process Mining (Extended Abstract) | null | 38th International Conference on Logic Programming (ICLP2022) | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose Answer Set Programming (ASP) as an approach for modeling and
solving problems from the area of Declarative Process Mining (DPM). We consider
here three classical problems, namely, Log Generation, Conformance Checking,
and Query Checking. These problems are addressed from both a control-flow and a
data-aware perspective. The approach is based on the representation of process
specifications as (finite-state) automata. Since these are strictly more
expressive than the de facto DPM standard specification language DECLARE, more
general specifications than those typical of DPM can be handled, such as
formulas in linear-time temporal logic over finite traces. (Full version
available in the Proceedings of the 36th AAAI Conference on Artificial
Intelligence).
| [
{
"version": "v1",
"created": "Wed, 4 May 2022 10:11:54 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2022 15:07:26 GMT"
}
]
| 1,664,236,800,000 | [
[
"Chiariello",
"Francesco",
""
],
[
"Maggi",
"Fabrizio Maria",
""
],
[
"Patrizi",
"Fabio",
""
]
]
|
2205.02328 | David Radke | David Radke, Kate Larson, Tim Brecht | Exploring the Benefits of Teams in Multiagent Learning | 10 pages, 6 figures, Published at IJCAI 2022. arXiv admin note: text
overlap with arXiv:2204.07471 | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | For problems requiring cooperation, many multiagent systems implement
solutions either among individual agents or across an entire population toward
a common goal. Multiagent teams are primarily studied when in conflict;
however, organizational psychology (OP) highlights the benefits of teams among
human populations for learning how to coordinate and cooperate. In this paper,
we propose a new model of multiagent teams for reinforcement learning (RL)
agents inspired by OP and early work on teams in artificial intelligence. We
validate our model using complex social dilemmas that are popular in recent
multiagent RL and find that agents divided into teams develop cooperative
pro-social policies despite incentives to not cooperate. Furthermore, agents
are better able to coordinate and learn emergent roles within their teams and
achieve higher rewards compared to when the interests of all agents are
aligned.
| [
{
"version": "v1",
"created": "Wed, 4 May 2022 21:14:03 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 16:06:46 GMT"
}
]
| 1,690,848,000,000 | [
[
"Radke",
"David",
""
],
[
"Larson",
"Kate",
""
],
[
"Brecht",
"Tim",
""
]
]
|