id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2209.05226 | Sarit Kraus | Yaniv Oshrat, Yonatan Aumann, Tal Hollander, Oleg Maksimov, Anita
Ostroumov, Natali Shechtman, Sarit Kraus | Efficient Customer Service Combining Human Operators and Virtual Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The prospect of combining human operators and virtual agents (bots) into an
effective hybrid system that provides proper customer service to clients is
promising yet challenging. The hybrid system decreases the customers'
frustration when bots are unable to provide appropriate service and increases
their satisfaction when they prefer to interact with human operators.
Furthermore, we show that it is possible to decrease the cost and effort of
building and maintaining such virtual agents by enabling the virtual agent to
incrementally learn from the human operators. We employ queuing theory to
identify the key parameters that govern the behavior and efficiency of such
hybrid systems and determine the main parameters that should be optimized in
order to improve the service. We formally prove, and demonstrate in extensive
simulations and in a user study, that with the proper choice of parameters,
such hybrid systems are able to increase the number of served clients while
simultaneously decreasing their expected waiting time and increasing
satisfaction.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2022 13:23:42 GMT"
}
] | 1,663,027,200,000 | [
[
"Oshrat",
"Yaniv",
""
],
[
"Aumann",
"Yonatan",
""
],
[
"Hollander",
"Tal",
""
],
[
"Maksimov",
"Oleg",
""
],
[
"Ostroumov",
"Anita",
""
],
[
"Shechtman",
"Natali",
""
],
[
"Kraus",
"Sarit",
""
]
] |
2209.05470 | Alexander Feldman | Alexander Feldman, Johan de Kleer, Ion Matei | A Quantum Algorithm for Computing All Diagnoses of a Switching Circuit | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Faults are stochastic by nature while most man-made systems, and especially
computers, work deterministically. This necessitates the linking of probability
theory with mathematical logics, automata, and switching circuit theory. This
paper provides such a connection via quantum information theory, which is an
intuitive approach as quantum physics obeys probability laws. In this paper we
provide a novel approach for computing diagnoses of switching circuits with
gate-based quantum computers. The approach is based on the idea of putting the
qubits representing faults in superposition and computing all, often
exponentially many, diagnoses simultaneously. We empirically compare the
quantum algorithm for diagnostics to an approach based on SAT and
model-counting. For a benchmark of combinational circuits we establish an error
of less than one percent in estimating the true probability of faults.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2022 17:55:30 GMT"
}
] | 1,663,027,200,000 | [
[
"Feldman",
"Alexander",
""
],
[
"de Kleer",
"Johan",
""
],
[
"Matei",
"Ion",
""
]
] |
2209.05698 | Feng Zhao | Feng Zhao, Ziqi Zhang, Donglin Wang | KSG: Knowledge and Skill Graph | 5 pages, 7 figures, published to CIKM 2022 | null | 10.1145/3511808.3557623 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The knowledge graph (KG) is an essential form of knowledge representation
that has grown in prominence in recent years. Because it concentrates on
nominal entities and their relationships, traditional knowledge graphs are
static and encyclopedic in nature. On this basis, event knowledge graph (Event
KG) models the temporal and spatial dynamics by text processing to facilitate
downstream applications, such as question-answering, recommendation and
intelligent search. Existing KG research, on the other hand, mostly focuses on
text processing and static facts, ignoring the vast quantity of dynamic
behavioral information included in photos, movies, and pre-trained neural
networks. In addition, no effort has been made to include behavioral
intelligence information into the knowledge graph for deep reinforcement
learning (DRL) and robot learning. In this paper, we propose a novel dynamic
knowledge and skill graph (KSG), and then we develop a basic and specific KSG
based on CN-DBpedia. The nodes are divided into entity and attribute nodes,
with entity nodes containing the agent, environment, and skill (DRL policy or
policy representation), and attribute nodes containing the entity description,
pre-train network, and offline dataset. KSG can search for different agents'
skills in various environments and provide transferable information for
acquiring new skills. This is the first study that we are aware of that looks
into dynamic KSG for skill retrieval and learning. Extensive experimental
results on new skill learning show that KSG boosts new skill learning
efficiency.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2022 02:47:46 GMT"
}
] | 1,663,113,600,000 | [
[
"Zhao",
"Feng",
""
],
[
"Zhang",
"Ziqi",
""
],
[
"Wang",
"Donglin",
""
]
] |
2209.05838 | Markus Iser | Tim Holzenkamp, Kevin Kuryshev, Thomas Oltmann, Lucas W\"aldele,
Johann Zuber, Tobias Heuer, Markus Iser | SATViz: Real-Time Visualization of Clausal Proofs | Presented at Pragmatics of SAT Workshop (no proceedings) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Visual layouts of graphs representing SAT instances can highlight the
community structure of SAT instances. The community structure of SAT instances
has been associated with both instance hardness and known clause quality
heuristics. Our tool SATViz visualizes CNF formulas using the variable
interaction graph and a force-directed layout algorithm. With SATViz, clause
proofs can be animated to continuously highlight variables that occur in a
moving window of recently learned clauses. If needed, SATViz can also create
new layouts of the variable interaction graph with the adjusted edge weights.
In this paper, we describe the structure and feature set of SATViz. We also
present some interesting visualizations created with SATViz.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2022 09:34:48 GMT"
}
] | 1,663,113,600,000 | [
[
"Holzenkamp",
"Tim",
""
],
[
"Kuryshev",
"Kevin",
""
],
[
"Oltmann",
"Thomas",
""
],
[
"Wäldele",
"Lucas",
""
],
[
"Zuber",
"Johann",
""
],
[
"Heuer",
"Tobias",
""
],
[
"Iser",
"Markus",
""
]
] |
2209.06120 | Neel Guha | Neel Guha, Daniel E. Ho, Julian Nyarko, Christopher R\'e | LegalBench: Prototyping a Collaborative Benchmark for Legal Reasoning | 13 pages, 7 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can foundation models be guided to execute tasks involving legal reasoning?
We believe that building a benchmark to answer this question will require
sustained collaborative efforts between the computer science and legal
communities. To that end, this short paper serves three purposes. First, we
describe how IRAC, a framework legal scholars use to distinguish different types
of legal reasoning, can guide the construction of a Foundation Model oriented
benchmark. Second, we present a seed set of 44 tasks built according to this
framework. We discuss initial findings, and highlight directions for new tasks.
Finally, inspired by the Open Science movement, we make a call for the legal and
computer science communities to join our efforts by contributing new tasks.
This work is ongoing, and our progress can be tracked here:
https://github.com/HazyResearch/legalbench.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2022 16:11:54 GMT"
}
] | 1,663,113,600,000 | [
[
"Guha",
"Neel",
""
],
[
"Ho",
"Daniel E.",
""
],
[
"Nyarko",
"Julian",
""
],
[
"Ré",
"Christopher",
""
]
] |
2209.06317 | Michael Hind | David Piorkowski, Michael Hind, John Richards | Quantitative AI Risk Assessments: Opportunities and Challenges | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although AI-based systems are increasingly being leveraged to provide value
to organizations, individuals, and society, significant attendant risks have
been identified. These risks have led to proposed regulations, litigation, and
general societal concerns.
As with any promising technology, organizations want to benefit from the
positive capabilities of AI technology while reducing the risks. The best way
to reduce risks is to implement comprehensive AI lifecycle governance where
policies and procedures are described and enforced during the design,
development, deployment, and monitoring of an AI system. While support for
comprehensive governance is beginning to emerge, organizations often need to
identify the risks of deploying an already-built model without knowledge of how
it was constructed or access to its original developers.
Such an assessment will quantitatively assess the risks of an existing model
in a manner analogous to how a home inspector might assess the energy
efficiency of an already-built home or a physician might assess overall patient
health based on a battery of tests. This paper explores the concept of a
quantitative AI Risk Assessment, exploring the opportunities, challenges, and
potential impacts of such an approach, and discussing how it might improve AI
regulations.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2022 21:47:25 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jan 2023 18:20:01 GMT"
}
] | 1,673,481,600,000 | [
[
"Piorkowski",
"David",
""
],
[
"Hind",
"Michael",
""
],
[
"Richards",
"John",
""
]
] |
2209.06569 | Mostafa Haghir Chehreghani | Mostafa Haghir Chehreghani | The Embeddings World and Artificial General Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | From early days, a key and controversial question inside the artificial
intelligence community was whether Artificial General Intelligence (AGI) is
achievable. AGI is the ability of machines and computer programs to achieve
human-level intelligence and do all tasks that a human being can. While there
exist a number of systems in the literature claiming they realize AGI, several
other researchers argue that it is impossible to achieve. In this paper, we
take a different view of the problem. First, we discuss that in order to
realize AGI, along with building intelligent machines and programs, an
intelligent world should also be constructed: one that is, on the one hand, an
accurate approximation of our world and, on the other hand, a world in which a
significant part of the reasoning of intelligent machines is already embedded. Then we
discuss that AGI is not a product or an algorithm but rather a continuous
process that will become more and more mature over time (like human
civilization and wisdom). Then, we argue that pre-trained embeddings play a key
role in building this intelligent world and as a result, realizing AGI. We
discuss how pre-trained embeddings facilitate achieving several characteristics
of human-level intelligence, such as embodiment, common sense knowledge,
unconscious knowledge and continuality of learning, by machines.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2022 11:56:30 GMT"
}
] | 1,663,200,000,000 | [
[
"Chehreghani",
"Mostafa Haghir",
""
]
] |
2209.07096 | Stas Tiomkin | Kyle Hollins Wray, Stas Tiomkin, Mykel J. Kochenderfer, Pieter Abbeel | Multi-Objective Policy Gradients with Topological Constraints | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multi-objective optimization models that encode ordered sequential
constraints provide a solution to model various challenging problems including
encoding preferences, modeling a curriculum, and enforcing measures of safety.
A recently developed theory of topological Markov decision processes (TMDPs)
captures this range of problems for the case of discrete states and actions. In
this work, we extend TMDPs towards continuous spaces and unknown transition
dynamics by formulating, proving, and implementing the policy gradient theorem
for TMDPs. This theoretical result enables the creation of TMDP learning
algorithms that use function approximators, and can generalize existing deep
reinforcement learning (DRL) approaches. Specifically, we present a new
algorithm for a policy gradient in TMDPs by a simple extension of the proximal
policy optimization (PPO) algorithm. We demonstrate this on a real-world
multiple-objective navigation problem with an arbitrary ordering of objectives
both in simulation and on a real robot.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2022 07:22:58 GMT"
}
] | 1,663,286,400,000 | [
[
"Wray",
"Kyle Hollins",
""
],
[
"Tiomkin",
"Stas",
""
],
[
"Kochenderfer",
"Mykel J.",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
2209.07175 | Ayush Kumar Varshney Mr. | Ayush K. Varshney and Vicen\c{c} Torra | Literature Review of the Recent Trends and Applications in various Fuzzy
Rule based systems | 49 pages, Accepted for publication in ijfs | Int. J. Fuzzy Syst. (2023) | 10.1007/s40815-023-01534-w | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A fuzzy rule based system (FRBS) is a rule-based system that uses linguistic
fuzzy variables as antecedents and consequents to represent human-understandable
knowledge. FRBSs have been applied to various applications and areas throughout
the soft computing literature. However, FRBSs suffer from many drawbacks such
as uncertainty representation, a high number of rules, interpretability loss,
high computational time for learning, etc. To overcome these issues with FRBSs,
there exist many extensions of FRBSs. This paper presents an overview and
literature review of recent trends on various types and prominent areas of
fuzzy systems (FRBSs) namely genetic fuzzy system (GFS), hierarchical fuzzy
system (HFS), neuro fuzzy system (NFS), evolving fuzzy system (eFS), FRBSs for
big data, FRBSs for imbalanced data, interpretability in FRBSs and FRBSs which
use cluster centroids as fuzzy rules. The review covers the years 2010-2021. This
paper also highlights important contributions, publication statistics and
current trends in the field. The paper also addresses several open research
areas which need further attention from the FRBSs research community.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2022 09:49:17 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 17:42:34 GMT"
}
] | 1,685,404,800,000 | [
[
"Varshney",
"Ayush K.",
""
],
[
"Torra",
"Vicenç",
""
]
] |
2209.07368 | Xuehui Yu | Xuehui Yu, Jingchi Jiang, Xinmiao Yu, Yi Guan, Xue Li | Causal Coupled Mechanisms: A Control Method with Cooperation and
Competition for Complex System | 8 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex systems are ubiquitous in the real world and tend to have complicated
and poorly understood dynamics. For their control issues, the challenge is to
guarantee accuracy, robustness, and generalization in such bloated and troubled
environments. Fortunately, a complex system can be divided into multiple
modular structures that human cognition appears to exploit. Inspired by this
cognition, a novel control method, Causal Coupled Mechanisms (CCMs), is
proposed that explores the cooperation in division and competition in
combination. Our method employs the theory of hierarchical reinforcement
learning (HRL), in which 1) the high-level policy with competitive awareness
divides the whole complex system into multiple functional mechanisms, and 2)
the low-level policy finishes the control task of each mechanism. Specifically
for cooperation, a cascade control module helps the series operation of CCMs,
and a forward coupled reasoning module is used to recover the coupling
information lost in the division process. On both synthetic systems and a
real-world biological regulatory system, the CCM method achieves robust and
state-of-the-art control results even with unpredictable random noise.
Moreover, generalization results show that reusing prepared specialized CCMs
helps to perform well in environments with different confounders and dynamics.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2022 15:32:16 GMT"
}
] | 1,663,286,400,000 | [
[
"Yu",
"Xuehui",
""
],
[
"Jiang",
"Jingchi",
""
],
[
"Yu",
"Xinmiao",
""
],
[
"Guan",
"Yi",
""
],
[
"Li",
"Xue",
""
]
] |
2209.07479 | Sven Hertling | Sven Hertling, Heiko Paulheim | Gollum: A Gold Standard for Large Scale Multi Source Knowledge Graph
Matching | accepted at AKBC 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The number of Knowledge Graphs (KGs) generated with automatic and manual
approaches is constantly growing. For an integrated view and usage, an
alignment between these KGs is necessary on the schema as well as instance
level. While there are approaches that try to tackle this multi source
knowledge graph matching problem, large gold standards are missing to evaluate
their effectiveness and scalability. We close this gap by presenting Gollum --
a gold standard for large-scale multi source knowledge graph matching with over
275,000 correspondences between 4,149 different KGs. They originate from
knowledge graphs derived by applying the DBpedia extraction framework to a
large wiki farm. Three variations of the gold standard are made available: (1)
a version with all correspondences for evaluating unsupervised matching
approaches, and two versions for evaluating supervised matching: (2) one where
each KG is contained both in the train and test set, and (3) one where each KG
is exclusively contained in the train or the test set.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2022 17:21:43 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 08:15:04 GMT"
}
] | 1,663,545,600,000 | [
[
"Hertling",
"Sven",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
2209.08271 | Zhicong Luo | Long Yu, Zhicong Luo, Huanyong Liu, Deng Lin, Hongzhu Li, Yafeng Deng | TripleRE: Knowledge Graph Embeddings via Tripled Relation Vectors | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Translation-based knowledge graph embedding has been one of the most
important branches for knowledge representation learning since TransE came out.
Although many translation-based approaches have achieved some progress in
recent years, the performance was still unsatisfactory. This paper proposes a
novel knowledge graph embedding method named TripleRE with two versions. The
first version of TripleRE creatively divides the relationship vector into three
parts. The second version takes advantage of the concept of residuals and
achieves better performance. In addition, attempts at using NodePiece to encode
entities achieved promising results in reducing the parameter size and solved
the scalability problem. Experiments show that our approach achieved
state-of-the-art performance on the large-scale knowledge graph dataset, and
competitive performance on other datasets.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2022 07:42:37 GMT"
}
] | 1,663,632,000,000 | [
[
"Yu",
"Long",
""
],
[
"Luo",
"Zhicong",
""
],
[
"Liu",
"Huanyong",
""
],
[
"Lin",
"Deng",
""
],
[
"Li",
"Hongzhu",
""
],
[
"Deng",
"Yafeng",
""
]
] |
2209.09066 | Gerhard Friedrich | Richard Comploi-Taupe and Gerhard Friedrich and Konstantin Schekotihin
and Antonius Weinzierl | Specifying and Exploiting Non-Monotonic Domain-Specific Declarative
Heuristics in Answer Set Programming | null | null | 10.1613/jair.1.14091 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Domain-specific heuristics are an essential technique for solving
combinatorial problems efficiently. Current approaches to integrate
domain-specific heuristics with Answer Set Programming (ASP) are unsatisfactory
when dealing with heuristics that are specified non-monotonically on the basis
of partial assignments. Such heuristics frequently occur in practice, for
example, when picking an item that has not yet been placed in bin packing.
Therefore, we present novel syntax and semantics for declarative specifications
of domain-specific heuristics in ASP. Our approach supports heuristic
statements that depend on the partial assignment maintained during solving,
which has not been possible before. We provide an implementation in ALPHA that
makes ALPHA the first lazy-grounding ASP system to support declaratively
specified domain-specific heuristics. Two practical example domains are used to
demonstrate the benefits of our proposal. Additionally, we use our approach to
implement informed search with A*, which is tackled within ASP for the first
time. A* is applied to two further search problems. The experiments confirm
that combining lazy-grounding ASP solving and our novel heuristics can be vital
for solving industrial-size problems.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2022 14:57:50 GMT"
}
] | 1,673,308,800,000 | [
[
"Comploi-Taupe",
"Richard",
""
],
[
"Friedrich",
"Gerhard",
""
],
[
"Schekotihin",
"Konstantin",
""
],
[
"Weinzierl",
"Antonius",
""
]
] |
2209.09491 | Curie Kim | Curie Kim, Yewon Hwang, and Jong-Hwan Kim | Deep Q-Network for AI Soccer | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning has shown an outstanding performance in the
applications of games, particularly in Atari games as well as Go. Based on
these successful examples, we attempt to apply one of the well-known
reinforcement learning algorithms, Deep Q-Network, to the AI Soccer game. AI
Soccer is a 5:5 robot soccer game where each participant develops an algorithm
that controls five robots in a team to defeat the opponent participant. Deep
Q-Network is designed to implement our original rewards, the state space, and
the action space to train each agent so that it can take proper actions in
different situations during the game. Our algorithm was able to successfully
train the agents, and its performance was preliminarily proven through the
mini-competition against 10 teams wishing to take part in the AI Soccer
international competition. The competition was organized by the AI World Cup
committee, in conjunction with the WCG 2019 Xi'an AI Masters. With our
algorithm, we advanced to the round of 16 in this
international competition with 130 teams from 39 countries.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2022 06:04:26 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Sep 2022 05:26:15 GMT"
}
] | 1,663,804,800,000 | [
[
"Kim",
"Curie",
""
],
[
"Hwang",
"Yewon",
""
],
[
"Kim",
"Jong-Hwan",
""
]
] |
2209.09535 | Joscha Gr\"uger | Joscha Gr\"uger, Tobias Geyer, Martin Kuhn, Stefan Braun, Ralph
Bergmann | Declarative Guideline Conformance Checking of Clinical Treatments: A
Case Study | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conformance checking is a process mining technique that allows verifying the
conformance of process instances to a given model. Thus, this technique is
predestined to be used in the medical context for the comparison of treatment
cases with clinical guidelines. However, medical processes are highly variable,
highly dynamic, and complex. This makes the use of imperative conformance
checking approaches in the medical domain difficult. Studies show that
declarative approaches can better address these characteristics. However, none
of the approaches has yet gained practical acceptance. Another challenge is
posed by alignments, which usually do not add any value from a medical point of view.
For this reason, we investigate in a case study the usability of the HL7
standard Arden Syntax for declarative, rule-based conformance checking and the
use of manually modeled alignments. Using the approach, it was possible to
check the conformance of treatment cases and create medically meaningful
alignments for large parts of a medical guideline.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2022 08:07:02 GMT"
}
] | 1,663,718,400,000 | [
[
"Grüger",
"Joscha",
""
],
[
"Geyer",
"Tobias",
""
],
[
"Kuhn",
"Martin",
""
],
[
"Braun",
"Stefan",
""
],
[
"Bergmann",
"Ralph",
""
]
] |
2209.09608 | Dieqiao Feng | Dieqiao Feng, Carla P. Gomes, Bart Selman | Graph Value Iteration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, deep Reinforcement Learning (RL) has been successful in
various combinatorial search domains, such as two-player games and scientific
discovery. However, directly applying deep RL in planning domains is still
challenging. One major difficulty is that without a human-crafted heuristic
function, reward signals remain zero unless the learning framework discovers
a solution plan. The search space becomes \emph{exponentially larger} as the
minimum length of plans grows, which is a serious limitation for planning
instances with a minimum plan length of hundreds to thousands of steps.
Previous learning frameworks that augment graph search with deep neural
networks and extra generated subgoals have achieved success in various
challenging planning domains. However, generating useful subgoals requires
extensive domain knowledge. We propose a domain-independent method that
augments graph search with graph value iteration to solve hard planning
instances that are out of reach for domain-specialized solvers. In particular,
instead of receiving learning signals only from discovered plans, our approach
also learns from failed search attempts where no goal state has been reached.
The graph value iteration component can exploit the graph structure of local
search space and provide more informative learning signals. We also show how we
use a curriculum strategy to smooth the learning process and perform a full
analysis of how graph value iteration scales and enables learning.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2022 10:45:03 GMT"
}
] | 1,663,718,400,000 | [
[
"Feng",
"Dieqiao",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Selman",
"Bart",
""
]
] |
2209.09618 | Oliver Niggemann | Maria Krantz, Alexander Windmann, Rene Heesch, Lukas Moddemann, Oliver
Niggemann | On a Uniform Causality Model for Industrial Automation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing complexity of Cyber-Physical Systems (CPS) makes industrial
automation challenging. Large amounts of data recorded by sensors need to be
processed to adequately perform tasks such as diagnosis in case of fault. A
promising approach to deal with this complexity is the concept of causality.
However, most research on causality has focused on inferring causal relations
between parts of an unknown system. Engineering uses causality in a
fundamentally different way: complex systems are constructed by combining
components with known, controllable behavior. As CPS are constructed by the
second approach, most data-based causality models are not suited for industrial
automation. To bridge this gap, a Uniform Causality Model for various
application areas of industrial automation is proposed, which will allow better
communication and better data usage across disciplines. The resulting model
describes the behavior of CPS mathematically and, as the model is evaluated on
the unique requirements of the application areas, it is shown that the Uniform
Causality Model can work as a basis for the application of new approaches in
industrial automation that focus on machine learning.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2022 11:23:51 GMT"
}
] | 1,663,718,400,000 | [
[
"Krantz",
"Maria",
""
],
[
"Windmann",
"Alexander",
""
],
[
"Heesch",
"Rene",
""
],
[
"Moddemann",
"Lukas",
""
],
[
"Niggemann",
"Oliver",
""
]
] |
2209.09819 | Nico Roos | Nico Roos | Efficient Model Based Diagnosis | null | Intelligent Systems Engineering 2 (1993) 107-118 | 10.1049/ise.1993.0011 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper an efficient model based diagnostic process is described for
systems whose components possess a causal relation between their inputs and
their outputs. In this diagnostic process, firstly, a set of focuses on likely
broken components is determined. Secondly, for each focus the most informative
probing point within the focus can be determined. Both these steps of the
diagnostic process have a worst case time complexity of ${\cal O}(n^2)$ where
$n$ is the number of components. If the connectivity of the components is low,
however, the diagnostic process shows a linear time complexity. It is also
shown how the diagnostic process described can be applied in dynamic systems
and systems containing loops. When diagnosing dynamic systems it is possible to
choose between detecting intermittent faults or improving the diagnostic
precision by assuming non-intermittency.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2022 16:07:19 GMT"
}
] | 1,663,718,400,000 | [
[
"Roos",
"Nico",
""
]
] |
2209.09838 | Nico Roos | Nico Roos | On resolving conflicts between arguments | null | Computational Intelligence 16:3 (2000) 469-497 | 10.1111/0824-7935.00120 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Argument systems are based on the idea that one can construct arguments for
propositions; i.e., structured reasons justifying the belief in a proposition.
Using defeasible rules, arguments need not be valid in all circumstances,
therefore, it might be possible to construct an argument for a proposition as
well as its negation. When arguments support conflicting propositions, one of
the arguments must be defeated, which raises the question of \emph{which
(sub-)arguments can be subject to defeat}?
In legal argumentation, meta-rules determine the valid arguments by
considering the last defeasible rule of each argument involved in a conflict.
Since it is easier to evaluate arguments using their last rules, \emph{can a
conflict be resolved by considering only the last defeasible rules of the
arguments involved}?
We propose a new argument system where, instead of deriving a defeat relation
between arguments, \emph{undercutting-arguments} for the defeat of defeasible
rules are constructed. This system allows us, (\textit{i}) to resolve conflicts
(a generalization of rebutting arguments) using only the last rules of the
arguments for inconsistencies, (\textit{ii}) to determine a set of valid
(undefeated) arguments in linear time using an algorithm based on a JTMS,
(\textit{iii}) to establish a relation with Default Logic, and (\textit{iv}) to
prove closure properties such as \emph{cumulativity}. We also propose an
extension of the argument system that enables \emph{reasoning by cases}.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2022 16:31:19 GMT"
}
] | 1,663,718,400,000 | [
[
"Roos",
"Nico",
""
]
] |
2209.10319 | EPTCS | Renato Acampora (University of Udine, Italy), Luca Geatti (Free
University of Bozen-Bolzano, Italy), Nicola Gigante (Free University of
Bozen-Bolzano, Italy), Angelo Montanari (University of Udine, Italy),
Valentino Picotti (University of Southern Denmark) | Controller Synthesis for Timeline-based Games | In Proceedings GandALF 2022, arXiv:2209.09333 | EPTCS 370, 2022, pp. 131-146 | 10.4204/EPTCS.370.9 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the timeline-based approach to planning, originally born in the space
sector, the evolution over time of a set of state variables (the timelines) is
governed by a set of temporal constraints. Traditional timeline-based planning
systems excel at the integration of planning with execution by handling
temporal uncertainty. In order to handle general nondeterminism as well, the
concept of timeline-based games has been recently introduced. It has been
proved that finding whether a winning strategy exists for such games is
2EXPTIME-complete. However, a concrete approach to synthesize controllers
implementing such strategies is missing. This paper fills this gap, outlining
an approach to controller synthesis for timeline-based games.
| [
{
"version": "v1",
"created": "Wed, 21 Sep 2022 12:45:34 GMT"
}
] | 1,663,804,800,000 | [
[
"Acampora",
"Renato",
"",
"University of Udine, Italy"
],
[
"Geatti",
"Luca",
"",
"Free\n University of Bozen-Bolzano, Italy"
],
[
"Gigante",
"Nicola",
"",
"Free University of\n Bozen-Bolzano, Italy"
],
[
"Montanari",
"Angelo",
"",
"University of Udine, Italy"
],
[
"Picotti",
"Valentino",
"",
"University of Southern Denmark"
]
] |
2209.10656 | Xiangtong Yao | Xiangtong Yao, Zhenshan Bing, Genghang Zhuang, Kejia Chen, Hongkuan
Zhou, Kai Huang and Alois Knoll | Learning from Symmetry: Meta-Reinforcement Learning with Symmetrical
Behaviors and Language Instructions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Meta-reinforcement learning (meta-RL) is a promising approach that enables
the agent to learn new tasks quickly. However, most meta-RL algorithms show
poor generalization in multi-task scenarios due to the insufficient task
information provided only by rewards. Language-conditioned meta-RL improves the
generalization capability by matching language instructions with the agent's
behaviors. Both behaviors and language instructions have symmetry, which
can speed up human learning of new knowledge. Thus, combining symmetry and
language instructions into meta-RL can help improve the algorithm's
generalization and learning efficiency. We propose a dual-MDP
meta-reinforcement learning method that enables learning new tasks efficiently
with symmetrical behaviors and language instructions. We evaluate our method in
multiple challenging manipulation tasks, and experimental results show that our
method can greatly improve the generalization and learning efficiency of
meta-reinforcement learning. Videos are available at
https://tumi6robot.wixsite.com/symmetry/.
| [
{
"version": "v1",
"created": "Wed, 21 Sep 2022 20:54:21 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 11:50:29 GMT"
}
] | 1,688,601,600,000 | [
[
"Yao",
"Xiangtong",
""
],
[
"Bing",
"Zhenshan",
""
],
[
"Zhuang",
"Genghang",
""
],
[
"Chen",
"Kejia",
""
],
[
"Zhou",
"Hongkuan",
""
],
[
"Huang",
"Kai",
""
],
[
"Knoll",
"Alois",
""
]
] |
2209.11067 | Dongzhuoran Zhou | Dongzhuoran Zhou, Baifan Zhou, Jieying Chen, Gong Cheng, Egor V.
Kostylev, Evgeny Kharlamov | Towards Ontology Reshaping for KG Generation with User-in-the-Loop:
Applied to Bosch Welding | null | null | 10.1145/3502223.3502243 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs (KG) are used in a wide range of applications. The
automation of KG generation is highly desirable due to the data volume and variety
in industry. One important approach to KG generation is to map the raw data
to a given KG schema, namely a domain ontology, and construct the entities and
properties according to the ontology. However, the automatic generation of such
ontology is demanding and existing solutions are often not satisfactory. An
important challenge is a trade-off between two principles of ontology
engineering: knowledge-orientation and data-orientation. The former one
prescribes that an ontology should model the general knowledge of a domain,
while the latter one emphasises reflecting the data specificities to ensure
good usability. We address this challenge by our method of ontology reshaping,
which automates the process of converting a given domain ontology to a smaller
ontology that serves as the KG schema. The domain ontology can be designed to
be knowledge-oriented and the KG schema covers the data specificities. In
addition, our approach allows the option of including user preferences in the
loop. We demonstrate our on-going research on ontology reshaping and present an
evaluation using real industrial data, with promising results.
| [
{
"version": "v1",
"created": "Thu, 22 Sep 2022 14:59:13 GMT"
}
] | 1,663,891,200,000 | [
[
"Zhou",
"Dongzhuoran",
""
],
[
"Zhou",
"Baifan",
""
],
[
"Chen",
"Jieying",
""
],
[
"Cheng",
"Gong",
""
],
[
"Kostylev",
"Egor V.",
""
],
[
"Kharlamov",
"Evgeny",
""
]
] |
2209.11591 | Vadim Indelman | Gilad Rotman, Vadim Indelman | involve-MI: Informative Planning with High-Dimensional Non-Parametric
Beliefs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | One of the most complex tasks of decision making and planning is to gather
information. This task becomes even more complex when the state is
high-dimensional and its belief cannot be expressed with a parametric
distribution. Although the state is high-dimensional, in many problems only a
small fraction of it might be involved in transitioning the state and
generating observations. We exploit this fact to calculate an
information-theoretic expected reward, mutual information (MI), over a much
lower-dimensional subset of the state, to improve efficiency and without
sacrificing accuracy. A similar approach was used in previous works, yet
specifically for Gaussian distributions, and we here extend it for general
distributions. Moreover, we apply the dimensionality reduction for cases in
which the new states are augmented to the previous, yet again without
sacrificing accuracy. We then continue by developing an estimator for the MI
which works in a Sequential Monte Carlo (SMC) manner, and avoids the
reconstruction of future belief's surfaces. Finally, we show how this work is
applied to the informative planning optimization problem. This work is then
evaluated in a simulation of an active SLAM problem, where the improvement in
both accuracy and timing is demonstrated.
| [
{
"version": "v1",
"created": "Fri, 23 Sep 2022 13:51:36 GMT"
}
] | 1,664,150,400,000 | [
[
"Rotman",
"Gilad",
""
],
[
"Indelman",
"Vadim",
""
]
] |
2209.11746 | Selene Baez Santamaria | Selene B\'aez Santamar\'ia, Piek Vossen, Thomas Baier | Evaluating Agent Interactions Through Episodic Knowledge Graphs | Accepted to 1st Workshop on Customized Chat Grounding Persona and
Knowledge, at COLING (2022) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a new method based on episodic Knowledge Graphs (eKGs) for
evaluating (multimodal) conversational agents in open domains. This graph is
generated by interpreting raw signals during conversation and is able to
capture the accumulation of knowledge over time. We apply structural and
semantic analysis of the resulting graphs and translate the properties into
qualitative measures. We compare these measures with existing automatic and
manual evaluation metrics commonly used for conversational agents. Our results
show that our Knowledge-Graph-based evaluation provides more qualitative
insights into interaction and the agent's behavior.
| [
{
"version": "v1",
"created": "Thu, 22 Sep 2022 12:42:09 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2022 11:34:26 GMT"
}
] | 1,664,236,800,000 | [
[
"Santamaría",
"Selene Báez",
""
],
[
"Vossen",
"Piek",
""
],
[
"Baier",
"Thomas",
""
]
] |
2209.11764 | Will Bridewell | Will Bridewell | Taking the Intentional Stance Seriously, or "Intending" to Improve
Cognitive Systems | 13 pages, 1 figure, 2 tables | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Finding claims that researchers have made considerable progress in artificial
intelligence over the last several decades is easy. However, our everyday
interactions with cognitive systems (e.g., Siri, Alexa, DALL-E) quickly move
from intriguing to frustrating. One cause of those frustrations rests in a
mismatch between the expectations we have due to our inherent,
folk-psychological theories and the real limitations we experience with
existing computer programs. The software does not understand that people have
goals, beliefs about how to achieve those goals, and intentions to act
accordingly. One way to align cognitive systems with our expectations is to
imbue them with mental states that mirror those we use to predict and explain
human behavior. This paper discusses these concerns and illustrates the
challenge of following this route by analyzing the mental state 'intention.'
That analysis is joined with high-level methodological suggestions that support
progress in this endeavor.
| [
{
"version": "v1",
"created": "Wed, 21 Sep 2022 13:38:23 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Nov 2022 13:54:03 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2022 18:17:56 GMT"
}
] | 1,667,952,000,000 | [
[
"Bridewell",
"Will",
""
]
] |
2209.12093 | Tim Franzmeyer | Tim Franzmeyer, Philip H. S. Torr, Jo\~ao F. Henriques | Learn what matters: cross-domain imitation learning with task-relevant
embeddings | NeurIPS 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study how an autonomous agent learns to perform a task from demonstrations
in a different domain, such as a different environment or different agent. Such
cross-domain imitation learning is required to, for example, train an
artificial agent from demonstrations of a human expert. We propose a scalable
framework that enables cross-domain imitation learning without access to
additional demonstrations or further domain knowledge. We jointly train the
learner agent's policy and learn a mapping between the learner and expert
domains with adversarial training. We effect this by using a mutual information
criterion to find an embedding of the expert's state space that contains
task-relevant information and is invariant to domain specifics. This step
significantly simplifies estimating the mapping between the learner and expert
domains and hence facilitates end-to-end learning. We demonstrate successful
transfer of policies between considerably different domains, without extra
supervision such as additional demonstrations, and in situations where other
methods fail.
| [
{
"version": "v1",
"created": "Sat, 24 Sep 2022 21:56:58 GMT"
}
] | 1,664,236,800,000 | [
[
"Franzmeyer",
"Tim",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Henriques",
"João F.",
""
]
] |
2209.12398 | Kenneth Odoh E | Kenneth Odoh | Real-time Anomaly Detection for Multivariate Data Streams | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a real-time multivariate anomaly detection algorithm for data
streams based on the Probabilistic Exponentially Weighted Moving Average
(PEWMA). Our formulation is resilient to (abrupt transient, abrupt
distributional, and gradual distributional) shifts in the data. The novel
anomaly detection routines utilize an incremental online algorithm to handle
streams. Furthermore, our proposed anomaly detection algorithm works in an
unsupervised manner eliminating the need for labeled examples. Our algorithm
performs well and is resilient in the face of concept drifts.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2022 03:40:37 GMT"
}
] | 1,664,236,800,000 | [
[
"Odoh",
"Kenneth",
""
]
] |
2209.12423 | Lina Bariah | Lina Bariah and Merouane Debbah | The Interplay of AI and Digital Twin: Bridging the Gap between
Data-Driven and Model-Driven Approaches | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The evolution of network virtualization and native artificial intelligence
(AI) paradigms have conceptualized the vision of future wireless networks as a
comprehensive entity operating in whole over a digital platform, with smart
interaction with the physical domain, paving the way for the blooming of the
Digital Twin (DT) concept. The recent interest in DT networks is fueled by
the emergence of novel wireless technologies and use-cases, that exacerbate the
level of complexity to orchestrate the network and to manage its resources.
Driven by AI, the key principle of the DT is to create a virtual twin for the
physical entities and network dynamics, where the virtual twin will be
leveraged to generate synthetic data and offer an on-demand platform for AI
model training. Despite the common understanding that AI is the seed for DT, we
anticipate that the DT and AI will be enablers for each other, in a way that
overcomes their limitations and complements each other's benefits. In this article,
we dig into the fundamentals of DT, where we reveal the role of DT in unifying
model-driven and data-driven approaches, and explore the opportunities offered
by DT in order to achieve the optimistic vision of 6G networks. We further
unfold the essential role of the theoretical underpinnings in unlocking further
opportunities by AI, and hence, we unveil their pivotal impact on the
realization of reliable, efficient, and low-latency DT.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2022 05:12:58 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2023 13:00:07 GMT"
}
] | 1,680,134,400,000 | [
[
"Bariah",
"Lina",
""
],
[
"Debbah",
"Merouane",
""
]
] |
2209.12619 | Paolo Burelli | Paolo Burelli | Predicting Customer Lifetime Value in Free-to-Play Games | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As game companies increasingly embrace a service-oriented business model, the
need for predictive models of players becomes more pressing. Multiple
activities, such as user acquisition, live game operations or game design need
to be supported with information about the choices made by the players and the
choices they could make in the future. This is especially true in the context
of free-to-play games, where the absence of a pay wall and the erratic nature
of the players' playing and spending behavior make predictions about the
revenue and allocation of budget and resources extremely challenging. In this
chapter we will present an overview of customer lifetime value modeling across
different fields, we will introduce the challenges specific to free-to-play
games across different platforms and genres and we will discuss the
state-of-the-art solutions with practical examples and references to existing
implementations.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2022 15:02:14 GMT"
}
] | 1,664,236,800,000 | [
[
"Burelli",
"Paolo",
""
]
] |
2209.12623 | Kirill Krinkin | Kirill Krinkin and Yulia Shichkina | Cognitive Architecture for Co-Evolutionary Hybrid Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper questions the feasibility of a strong (general) data-centric
artificial intelligence (AI). The disadvantages of this type of intelligence
are discussed. As an alternative, the concept of co-evolutionary hybrid
intelligence is proposed. It is based on the cognitive interoperability of man
and machine. An analysis of existing approaches to the construction of
cognitive architectures is given. An architecture that seamlessly incorporates a
human into the loop of intelligent problem solving is considered. The article
is organized as follows. The first part contains a critique of data-centric
intelligent systems. The reasons why it is impossible to create a strong
artificial intelligence based on this type of intelligence are indicated. The
second part briefly presents the concept of co-evolutionary hybrid intelligence
and shows its advantages. The third part gives an overview and analysis of
existing cognitive architectures. It is concluded that many do not consider
humans part of the intelligent data processing process. The next part discusses
the cognitive architecture for co-evolutionary hybrid intelligence, providing
integration with humans. It finishes with general conclusions about the
feasibility of developing intelligent systems with humans in the
problem-solving loop.
| [
{
"version": "v1",
"created": "Mon, 5 Sep 2022 08:26:16 GMT"
}
] | 1,664,236,800,000 | [
[
"Krinkin",
"Kirill",
""
],
[
"Shichkina",
"Yulia",
""
]
] |
2209.13002 | Dongjie Wang | Dongjie Wang, Kunpeng Liu, Yanyong Huang, Leilei Sun, Bowen Du, and
Yanjie Fu | Automated Urban Planning aware Spatial Hierarchies and Human
Instructions | Needs to improve and polish | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional urban planning demands urban experts to spend considerable time
and effort producing an optimal urban plan under many architectural
constraints. The remarkable imaginative ability of deep generative learning
provides hope for renovating urban planning. While automated urban planners
have been examined, they are constrained because of the following: 1)
neglecting human requirements in urban planning; 2) omitting spatial
hierarchies in urban planning, and 3) lacking numerous urban plan data samples.
To overcome these limitations, we propose a novel, deep, human-instructed urban
planner. In the preliminary work, we formulate it into an encoder-decoder
paradigm. The encoder is to learn the information distribution of surrounding
contexts, human instructions, and land-use configuration. The decoder is to
reconstruct the land-use configuration and the associated urban functional
zones. The reconstruction procedure will capture the spatial hierarchies
between functional zones and spatial grids. Meanwhile, we introduce a
variational Gaussian mechanism to mitigate the data sparsity issue. Even though
early work has led to good results, the performance of generation is still
unstable because the way spatial hierarchies are captured may lead to unclear
optimization directions. In this journal version, we propose a cascading deep
generative framework based on generative adversarial networks (GANs) to solve
this problem, inspired by the workflow of urban experts. In particular, the
purpose of the first GAN is to build urban functional zones based on
information from human instructions and surrounding contexts. The second GAN
will produce the land-use configuration based on the functional zones that have
been constructed. Additionally, we provide a conditioning augmentation module
to augment data samples. Finally, we conduct extensive experiments to validate
the efficacy of our work.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2022 20:37:02 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 02:07:20 GMT"
}
] | 1,666,656,000,000 | [
[
"Wang",
"Dongjie",
""
],
[
"Liu",
"Kunpeng",
""
],
[
"Huang",
"Yanyong",
""
],
[
"Sun",
"Leilei",
""
],
[
"Du",
"Bowen",
""
],
[
"Fu",
"Yanjie",
""
]
] |
2209.13129 | Matthew Olson | Matthew L. Olson | Deep Generative Multimedia Children's Literature | AAAI 2023 Workshop on Creative AI Across Modalities | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artistic work leveraging Machine Learning techniques is an increasingly
popular endeavour for those with a creative lean. However, most work is done in
a single domain: text, images, music, etc. In this work, I design a system for
a machine learning created multimedia experience, specifically in the genre of
children's literature. I detail the process of exclusively using publicly
available pretrained deep neural network based models, present multiple
examples of the work my system creates, and explore the problems associated
with this area of creative work.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2022 03:23:11 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2022 05:59:02 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2022 18:18:41 GMT"
},
{
"version": "v4",
"created": "Tue, 10 Jan 2023 19:59:31 GMT"
}
] | 1,673,481,600,000 | [
[
"Olson",
"Matthew L.",
""
]
] |
2209.13160 | Dylan Asmar | Dylan M. Asmar and Mykel J. Kochenderfer | Collaborative Decision Making Using Action Suggestions | Code is available at https://github.com/sisl/action_suggestions.
Accepted to NeurIPS 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The level of autonomy is increasing in systems spanning multiple domains, but
these systems still experience failures. One way to mitigate the risk of
failures is to integrate human oversight of the autonomous systems and rely on
the human to take control when the autonomy fails. In this work, we formulate a
method of collaborative decision making through action suggestions that
improves action selection without taking control of the system. Our approach
uses each suggestion efficiently by incorporating the implicit information
shared through suggestions to modify the agent's belief and achieves better
performance with fewer suggestions than naively following the suggested
actions. We assume collaborative agents share the same objective and
communicate through valid actions. By assuming the suggested action is
dependent only on the state, we can incorporate the suggested action as an
independent observation of the environment. The assumption of a collaborative
environment enables us to use the agent's policy to estimate the distribution
over action suggestions. We propose two methods that use suggested actions and
demonstrate the approach through simulated experiments. The proposed
methodology results in increased performance while also being robust to
suboptimal suggestions.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2022 05:16:41 GMT"
}
] | 1,664,323,200,000 | [
[
"Asmar",
"Dylan M.",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2209.13501 | Wensheng Gan | Chunkai Zhang, Maohua Lyu, Wensheng Gan, and Philip S. Yu | Totally-ordered Sequential Rules for Utility Maximization | Preprint. 4 figures, 8 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High utility sequential pattern mining (HUSPM) is a significant and valuable
activity in knowledge discovery and data analytics with many real-world
applications. In some cases, HUSPM cannot provide an excellent measure to
predict what will happen. High utility sequential rule mining (HUSRM) discovers
high utility and high confidence sequential rules, allowing it to solve the
problem in HUSPM. All existing HUSRM algorithms aim to find high-utility
partially-ordered sequential rules (HUSRs), which are not consistent with
reality and may generate fake HUSRs. Therefore, in this paper, we formulate the
problem of high utility totally-ordered sequential rule mining and propose two
novel algorithms, called TotalSR and TotalSR+, which aim to identify all high
utility totally-ordered sequential rules (HTSRs). TotalSR creates a utility
table that can efficiently calculate antecedent support and a utility prefix
sum list that can compute the remaining utility in O(1) time for a sequence. We
also introduce a left-first expansion strategy that can utilize the
anti-monotonic property to use a confidence pruning strategy. TotalSR can also
drastically reduce the search space with the help of utility upper bounds
pruning strategies, avoiding much more meaningless computation. In addition,
TotalSR+ uses an auxiliary antecedent record table to more efficiently discover
HTSRs. Finally, there are numerous experimental results on both real and
synthetic datasets demonstrating that TotalSR is significantly more efficient
than algorithms with fewer pruning strategies, and TotalSR+ is significantly
more efficient than TotalSR in terms of running time and scalability.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2022 16:17:58 GMT"
}
] | 1,664,323,200,000 | [
[
"Zhang",
"Chunkai",
""
],
[
"Lyu",
"Maohua",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Yu",
"Philip S.",
""
]
] |
2209.13710 | Pascal Hitzler | Cara Widmer, Md Kamruzzaman Sarker, Srikanth Nadella, Joshua Fiechter,
Ion Juvina, Brandon Minnery, Pascal Hitzler, Joshua Schwartz, Michael Raymer | Towards Human-Compatible XAI: Explaining Data Differentials with Concept
Induction over Background Knowledge | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Concept induction, which is based on formal logical reasoning over
description logics, has been used in ontology engineering in order to create
ontology (TBox) axioms from the base data (ABox) graph. In this paper, we show
that it can also be used to explain data differentials, for example in the
context of Explainable AI (XAI), and we show that it can in fact be done in a
way that is meaningful to a human observer. Our approach utilizes a large class
hierarchy, curated from the Wikipedia category hierarchy, as background
knowledge.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2022 21:51:27 GMT"
}
] | 1,664,409,600,000 | [
[
"Widmer",
"Cara",
""
],
[
"Sarker",
"Md Kamruzzaman",
""
],
[
"Nadella",
"Srikanth",
""
],
[
"Fiechter",
"Joshua",
""
],
[
"Juvina",
"Ion",
""
],
[
"Minnery",
"Brandon",
""
],
[
"Hitzler",
"Pascal",
""
],
[
"Schwartz",
"Joshua",
""
],
[
"Raymer",
"Michael",
""
]
] |
2209.13763 | Guo Dongjin | Dongjin Guo, Xiaoming Su, Jiatai Wang, Limin Liu, Zhiyong Pei, Zhiwei
Xu | Clustering-Induced Generative Incomplete Image-Text Clustering (CIGIT-C) | 13 pages,12 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The target of image-text clustering (ITC) is to find correct clusters by
integrating complementary and consistent information of multi-modalities for
these heterogeneous samples. However, the majority of current studies analyse
ITC on the ideal premise that the samples in every modality are complete. This
presumption, however, is not always valid in real-world situations. The missing
data issue degenerates the image-text feature learning performance and will
finally affect the generalization abilities in ITC tasks. Although a series of
methods have been proposed to address this incomplete image text clustering
issue (IITC), the following problems still exist: 1) most existing methods
hardly consider the distinct gap between heterogeneous feature domains. 2) For
missing data, the representations generated by existing methods are rarely
guaranteed to suit clustering tasks. 3) Existing methods do not tap into the
latent connections both inter and intra modalities. In this paper, we propose a
Clustering-Induced Generative Incomplete Image-Text Clustering (CIGIT-C) network
to address the challenges above. More specifically, we first use
modality-specific encoders to map original features to more distinctive
subspaces. The latent connections between intra and inter-modalities are
thoroughly explored by using the adversarial generating network to produce one
modality conditional on the other modality. Finally, we update the
corresponding modality-specific encoders using two KL divergence losses.
Experimental results on public image-text datasets demonstrate that the
proposed method outperforms existing approaches and is more effective on the IITC task.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2022 01:19:52 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2022 08:50:36 GMT"
}
] | 1,669,852,800,000 | [
[
"Guo",
"Dongjin",
""
],
[
"Su",
"Xiaoming",
""
],
[
"Wang",
"Jiatai",
""
],
[
"Liu",
"Limin",
""
],
[
"Pei",
"Zhiyong",
""
],
[
"Xu",
"Zhiwei",
""
]
] |
2209.13873 | Mu Yuan | Mu Yuan, Lan Zhang, Fengxiang He, Xueting Tong, Miao-Hui Song,
Zhengyuan Xu, Xiang-Yang Li | InFi: End-to-End Learning to Filter Input for Resource-Efficiency in
Mobile-Centric Inference | IEEE Transactions on Mobile Computing (TMC) 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile-centric AI applications have high requirements for resource-efficiency
of model inference. Input filtering is a promising approach to eliminate the
redundancy so as to reduce the cost of inference. Previous efforts have
tailored effective solutions for many applications, but left two essential
questions unanswered: (1) theoretical filterability of an inference workload to
guide the application of input filtering techniques, thereby avoiding the
trial-and-error cost for resource-constrained mobile applications; (2) robust
discriminability of feature embedding to allow input filtering to be widely
effective for diverse inference tasks and input content. To answer them, we
first formalize the input filtering problem and theoretically compare the
hypothesis complexity of inference models and input filters to understand the
optimization potential. Then we propose the first end-to-end learnable input
filtering framework that covers most state-of-the-art methods and surpasses
them in feature embedding with robust discriminability. We design and implement
InFi that supports six input modalities and multiple mobile-centric
deployments. Comprehensive evaluations confirm our theoretical results and show
that InFi outperforms strong baselines in applicability, accuracy, and
efficiency. InFi achieves 8.5x throughput and saves 95% bandwidth, while keeping
over 90% accuracy, for a video analytics application on mobile platforms.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2022 07:16:15 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 02:08:12 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 03:17:14 GMT"
}
] | 1,686,182,400,000 | [
[
"Yuan",
"Mu",
""
],
[
"Zhang",
"Lan",
""
],
[
"He",
"Fengxiang",
""
],
[
"Tong",
"Xueting",
""
],
[
"Song",
"Miao-Hui",
""
],
[
"Xu",
"Zhengyuan",
""
],
[
"Li",
"Xiang-Yang",
""
]
] |
2209.13883 | Mu Yuan | Mu Yuan, Lan Zhang, Zimu Zheng, Yi-Nan Zhang, Xiang-Yang Li | MLink: Linking Black-Box Models from Multiple Domains for Collaborative
Inference | Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cost efficiency of model inference is critical to real-world machine
learning (ML) applications, especially for delay-sensitive tasks and
resource-limited devices. A typical dilemma is: in order to provide complex
intelligent services (e.g. smart city), we need inference results of multiple
ML models, but the cost budget (e.g. GPU memory) is not enough to run all of
them. In this work, we study underlying relationships among black-box ML models
and propose a novel learning task: model linking, which aims to bridge the
knowledge of different black-box models by learning mappings (dubbed model
links) between their output spaces. We propose the design of model links which
supports linking heterogeneous black-box ML models. Also, in order to address
the distribution discrepancy challenge, we present adaptation and aggregation
methods of model links. Based on our proposed model links, we developed a
scheduling algorithm, named MLink. Through collaborative multi-model inference
enabled by model links, MLink can improve the accuracy of obtained inference
results under the cost budget. We evaluated MLink on a multi-modal dataset with
seven different ML models and two real-world video analytics systems with six
ML models and 3,264 hours of video. Experimental results show that our proposed
model links can be effectively built among various black-box models. Under the
budget of GPU memory, MLink can save 66.7% inference computations while
preserving 94% inference accuracy, which outperforms multi-task learning, deep
reinforcement learning-based scheduler and frame filtering baselines.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2022 07:29:47 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 02:14:07 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 03:15:06 GMT"
}
] | 1,686,182,400,000 | [
[
"Yuan",
"Mu",
""
],
[
"Zhang",
"Lan",
""
],
[
"Zheng",
"Zimu",
""
],
[
"Zhang",
"Yi-Nan",
""
],
[
"Li",
"Xiang-Yang",
""
]
] |
2209.14252 | Cunxi Yu | Yingjie Li, Ruiyang Chen, Weilu Gao, Cunxi Yu | Physics-aware Differentiable Discrete Codesign for Diffractive Optical
Neural Networks | International Conference on Computer-Aided Design (ICCAD'2022) To
appear | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffractive optical neural networks (DONNs) have attracted lots of attention
as they bring significant advantages in terms of power efficiency, parallelism,
and computational speed compared with conventional deep neural networks (DNNs),
which have intrinsic limitations when implemented on digital platforms.
However, inversely mapping algorithm-trained physical model parameters onto
real-world optical devices with discrete values is a non-trivial task as
existing optical devices have non-unified discrete levels and non-monotonic
properties. This work proposes a novel device-to-system hardware-software
codesign framework, which enables efficient physics-aware training of DONNs
w.r.t. arbitrary experimentally measured optical devices across layers.
Specifically, Gumbel-Softmax is employed to enable differentiable discrete
mapping from real-world device parameters into the forward function of DONNs,
where the physical parameters in DONNs can be trained by simply minimizing the
loss function of the ML task. The results have demonstrated that our proposed
framework offers significant advantages over conventional quantization-based
methods, especially with low-precision optical devices. Finally, the proposed
algorithm is fully verified with physical experimental optical systems in
low-precision settings.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2022 17:13:28 GMT"
}
] | 1,664,409,600,000 | [
[
"Li",
"Yingjie",
""
],
[
"Chen",
"Ruiyang",
""
],
[
"Gao",
"Weilu",
""
],
[
"Yu",
"Cunxi",
""
]
] |
2209.15067 | Paulo Shakarian | Paulo Shakarian, Gerardo I. Simari, Devon Callahan | Reasoning about Complex Networks: A Logic Programming Approach | arXiv admin note: substantial text overlap with arXiv:1301.0302 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning about complex networks has in recent years become an important
topic of study due to its many applications: the adoption of commercial
products, spread of disease, the diffusion of an idea, etc. In this paper, we
present the MANCaLog language, a formalism based on logic programming that
satisfies a set of desiderata proposed in previous work as recommendations for
the development of approaches to reasoning in complex networks. To the best of
our knowledge, this is the first formalism that satisfies all such criteria. We
first focus on algorithms for finding minimal models (on which multi-attribute
analysis can be done), and then on how this formalism can be applied in certain
real world scenarios. Towards this end, we study the problem of deciding group
membership in social networks: given a social network and a set of groups where
group membership of only some of the individuals in the network is known, we
wish to determine a degree of membership for the remaining group-individual
pairs. We develop a prototype implementation that we use to obtain experimental
results on two real world datasets, including a current social network of
criminal gangs in a major U.S.\ city. We then show how the assignment of degree
of membership to nodes in this case allows for a better understanding of the
criminal gang problem when combined with other social network mining techniques
-- including detection of sub-groups and identification of core group members
-- which would not be possible without further identification of additional
group members.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 19:20:24 GMT"
}
] | 1,664,755,200,000 | [
[
"Shakarian",
"Paulo",
""
],
[
"Simari",
"Gerardo I.",
""
],
[
"Callahan",
"Devon",
""
]
] |
2209.15104 | Quoc Hung Ngo | Quoc Hung Ngo, Tahar Kechadi, Nhien-An Le-Khac | OAK4XAI: Model towards Out-Of-Box eXplainable Artificial Intelligence
for Digital Agriculture | AI-2022 Forty-second SGAI International Conference on Artificial
Intelligence | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent machine learning approaches have been effective in Artificial
Intelligence (AI) applications. They produce robust results with a high level
of accuracy. However, most of these techniques do not provide
human-understandable explanations for supporting their results and decisions.
They usually act as black boxes, and it is not easy to understand how decisions
have been made. Explainable Artificial Intelligence (XAI), which has received
much interest recently, tries to provide human-understandable explanations for
decision-making and trained AI models. For instance, in digital agriculture,
related domains often present peculiar input features with no link to
background knowledge. The application of the data mining process on
agricultural data leads to results (knowledge), which are difficult to explain.
In this paper, we propose a knowledge map model and an ontology design as an
XAI framework (OAK4XAI) to deal with this issue. The framework does not only
consider the data analysis part of the process, but it takes into account the
semantics aspect of the domain knowledge via an ontology and a knowledge map
model, provided as modules of the framework. Many ongoing XAI studies aim to
provide accurate and verbalizable accounts for how given feature values
contribute to model decisions. The proposed approach, however, focuses on
providing consistent information and definitions of concepts, algorithms, and
values involved in the data mining models. We built an Agriculture Computing
Ontology (AgriComO) to explain the knowledge mined in agriculture. AgriComO has
a well-designed structure and includes a wide range of concepts and
transformations suitable for agriculture and computing domains.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 21:20:25 GMT"
}
] | 1,664,755,200,000 | [
[
"Ngo",
"Quoc Hung",
""
],
[
"Kechadi",
"Tahar",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] |
2209.15111 | Sander Beckers | Sander Beckers, Hana Chockler, Joseph Y. Halpern | Quantifying Harm | 17 pages, under submission | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In a companion paper (Beckers et al. 2022), we defined a qualitative notion
of harm: either harm is caused, or it is not. For practical applications, we
often need to quantify harm; for example, we may want to choose the least
harmful of a set of possible interventions. We first present a quantitative
definition of harm in a deterministic context involving a single individual,
then we consider the issues involved in dealing with uncertainty regarding the
context and going from a notion of harm for a single individual to a notion of
"societal harm", which involves aggregating the harm to individuals. We show
that the "obvious" way of doing this (just taking the expected harm for an
individual and then summing the expected harm over all individuals) can lead to
counterintuitive or inappropriate answers, and discuss alternatives, drawing on
work from the decision-theory literature.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 21:48:38 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 12:32:57 GMT"
}
] | 1,665,100,800,000 | [
[
"Beckers",
"Sander",
""
],
[
"Chockler",
"Hana",
""
],
[
"Halpern",
"Joseph Y.",
""
]
] |
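The Quantifying Harm entry above critiques the "obvious" aggregation of taking each individual's expected harm over uncertain contexts and summing across individuals. The sketch below only spells out that naive aggregation, i.e. the baseline the paper argues against; harm values and probabilities are made-up illustrations.

```python
# Naive aggregation discussed in the abstract: per-individual expected harm,
# summed over individuals. All numbers below are illustrative.

def expected_harm(harm_by_context, context_probs):
    """E[harm] for one individual given harm per context and context probabilities."""
    return sum(p * harm_by_context[c] for c, p in context_probs.items())

def naive_societal_harm(individuals, context_probs):
    """Sum of per-individual expected harms (the aggregation the paper questions)."""
    return sum(expected_harm(h, context_probs) for h in individuals)

context_probs = {"c1": 0.7, "c2": 0.3}
individuals = [{"c1": 0.0, "c2": 10.0}, {"c1": 1.0, "c2": 1.0}]
print(naive_societal_harm(individuals, context_probs))  # 3.0 + 1.0 = 4.0
```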
2209.15133 | Hongyu Guo | Hongyu Guo, Kun Xie and Mehdi Keyvan-Ekbatani | Modeling driver's evasive behavior during safety-critical lane
changes:Two-dimensional time-to-collision and deep reinforcement learning | null | null | 10.1016/j.aap.2023.107063 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lane changes are complex driving behaviors and frequently involve
safety-critical situations. This study aims to develop a lane-change-related
evasive behavior model, which can facilitate the development of safety-aware
traffic simulations and predictive collision avoidance systems. Large-scale
connected vehicle data from the Safety Pilot Model Deployment (SPMD) program
were used for this study. A new surrogate safety measure, two-dimensional
time-to-collision (2D-TTC), was proposed to identify the safety-critical
situations during lane changes. The validity of 2D-TTC was confirmed by showing
a high correlation between the detected conflict risks and the archived
crashes. A deep deterministic policy gradient (DDPG) algorithm, which could
learn the sequential decision-making process over continuous action spaces, was
used to model the evasive behaviors in the identified safety-critical
situations. The results showed the superiority of the proposed model in
replicating both the longitudinal and lateral evasive behaviors.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 23:23:38 GMT"
}
] | 1,680,739,200,000 | [
[
"Guo",
"Hongyu",
""
],
[
"Xie",
"Kun",
""
],
[
"Keyvan-Ekbatani",
"Mehdi",
""
]
] |
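The entry above introduces a two-dimensional time-to-collision (2D-TTC) surrogate safety measure. The code below is not the paper's definition; it is only a generic constant-velocity time-to-collision between two circular agents, included to make the kind of quantity involved concrete. All interfaces are assumptions.

```python
import math

# Generic constant-velocity 2D time-to-collision between two circular agents.
# This is a common textbook construction, not the paper's specific 2D-TTC.

def ttc_2d(p1, v1, p2, v2, radius_sum):
    """Smallest t >= 0 at which the two circles touch, or math.inf if never."""
    px, py = p2[0] - p1[0], p2[1] - p1[1]     # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]     # relative velocity
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius_sum ** 2
    if c <= 0:
        return 0.0                            # already in contact
    if a == 0:
        return math.inf                       # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return math.inf                       # trajectories never meet
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else math.inf

print(ttc_2d((0, 0), (10, 0), (50, 0), (0, 0), radius_sum=2.0))  # 4.8
```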
2209.15137 | Gideon Vos | Gideon Vos, Kelly Trinh, Zoltan Sarnyai, Mostafa Rahimi Azghadi | Generalizable machine learning for stress monitoring from wearable
devices: A systematic literature review | https://www.sciencedirect.com/science/article/pii/S1386505623000436 | International Journal of Medical Informatics Volume 173, May 2023,
105026 | 10.1016/j.ijmedinf.2023.105026 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Introduction. The stress response has both subjective, psychological and
objectively measurable, biological components. Both of them can be expressed
differently from person to person, complicating the development of a generic
stress measurement model. This is further compounded by the lack of large,
labeled datasets that can be utilized to build machine learning models for
accurately detecting periods and levels of stress. The aim of this review is to
provide an overview of the current state of stress detection and monitoring
using wearable devices, and where applicable, machine learning techniques
utilized.
Methods. This study reviewed published works contributing and/or using
datasets designed for detecting stress and their associated machine learning
methods, with a systematic review and meta-analysis of those that utilized
wearable sensor data as stress biomarkers. The electronic databases of Google
Scholar, Crossref, DOAJ and PubMed were searched for relevant articles and a
total of 24 articles were identified and included in the final analysis. The
reviewed works were synthesized into three categories of publicly available
stress datasets, machine learning, and future research directions.
Results. A wide variety of study-specific test and measurement protocols were
noted in the literature. A number of public datasets were identified that are
labeled for stress detection. In addition, we discuss that previous works show
shortcomings in areas such as their labeling protocols, lack of statistical
power, validity of stress biomarkers, and generalization ability.
Conclusion. Generalization of existing machine learning models still require
further study, and research in this area will continue to provide improvements
as newer and more substantial datasets become available for study.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 23:40:38 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 02:44:04 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Mar 2023 07:47:11 GMT"
}
] | 1,681,776,000,000 | [
[
"Vos",
"Gideon",
""
],
[
"Trinh",
"Kelly",
""
],
[
"Sarnyai",
"Zoltan",
""
],
[
"Azghadi",
"Mostafa Rahimi",
""
]
] |
2209.15274 | Alexandre Reiffers-Masson | Alexandre Reiffers-Masson (IMT Atlantique - INFO, Lab-STICC_MATHNET),
Isabel Amigo (IMT Atlantique - INFO, Lab-STICC_MATHNET) | Online Multi-Agent Decentralized Byzantine-robust Gradient Estimation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an iterative scheme for distributed
Byzantine-resilient estimation of a gradient associated with a black-box model.
Our algorithm is based on simultaneous perturbation, secure state estimation
and two-timescale stochastic approximations. We also show the performance of
our algorithm through numerical experiments.
| [
{
"version": "v1",
"created": "Fri, 30 Sep 2022 07:29:49 GMT"
}
] | 1,664,755,200,000 | [
[
"Reiffers-Masson",
"Alexandre",
"",
"IMT Atlantique - INFO, Lab-STICC_MATHNET"
],
[
"Amigo",
"Isabel",
"",
"IMT Atlantique - INFO, Lab-STICC_MATHNET"
]
] |
2210.00216 | Tristan Cazenave | Tristan Cazenave | Nested Search versus Limited Discrepancy Search | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Limited Discrepancy Search (LDS) is a popular algorithm to search a state
space with a heuristic to order the possible actions. Nested Search (NS) is
another algorithm to search a state space with the same heuristic. NS spends
more time on the move associated to the best heuristic playout while LDS spends
more time on the best heuristic move. They both use similar times for the same
level of search. We advocate in this paper that it is often better to follow
the best heuristic playout as in NS than to follow the heuristic as in LDS.
| [
{
"version": "v1",
"created": "Sat, 1 Oct 2022 07:57:07 GMT"
}
] | 1,664,841,600,000 | [
[
"Cazenave",
"Tristan",
""
]
] |
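The Nested Search versus Limited Discrepancy Search entry compares two ways of spending search effort under a move-ordering heuristic. For reference, a compact sketch of one common LDS variant is given below (children assumed pre-sorted by the heuristic; the `children` and `is_goal` callbacks are assumed); it is illustrative, not the paper's implementation.

```python
# Limited Discrepancy Search: depth-first search that departs from the
# heuristically best child at most `discrepancies` times along a path.
# One common variant (used here) charges 1 discrepancy for any non-best child.

def lds(state, discrepancies, children, is_goal):
    if is_goal(state):
        return state
    for rank, child in enumerate(children(state)):
        cost = 0 if rank == 0 else 1
        if cost > discrepancies:
            break                               # only the best child is allowed now
        found = lds(child, discrepancies - cost, children, is_goal)
        if found is not None:
            return found
    return None

# Typical driver: retry with increasing discrepancy budgets k = 0, 1, 2, ...
# until lds(root, k, children, is_goal) returns a solution.
```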
2210.00283 | Luigi Bellomarini | Luigi Bellomarini, Eleonora Laurenza, Emanuel Sallinger, Evgeny
Sherkhonov | Swift Markov Logic for Probabilistic Reasoning on Knowledge Graphs | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We provide a framework for probabilistic reasoning in Vadalog-based Knowledge
Graphs (KGs), satisfying the requirements of ontological reasoning: full
recursion, powerful existential quantification, expression of inductive
definitions. Vadalog is a Knowledge Representation and Reasoning (KRR) language
based on Warded Datalog+/-, a logical core language of existential rules, with
a good balance between computational complexity and expressive power. Handling
uncertainty is essential for reasoning with KGs. Yet Vadalog and Warded
Datalog+/- are not covered by the existing probabilistic logic programming and
statistical relational learning approaches for several reasons, including
insufficient support for recursion with existential quantification, and the
impossibility to express inductive definitions. In this work, we introduce Soft
Vadalog, a probabilistic extension to Vadalog, satisfying these desiderata. A
Soft Vadalog program induces what we call a Probabilistic Knowledge Graph
(PKG), which consists of a probability distribution on a network of chase
instances, structures obtained by grounding the rules over a database using the
chase procedure. We exploit PKGs for probabilistic marginal inference. We
discuss the theory and present MCMC-chase, a Monte Carlo method to use Soft
Vadalog in practice. We apply our framework to solve data management and
industrial problems, and experimentally evaluate it in the Vadalog system.
Under consideration in Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Sat, 1 Oct 2022 13:57:21 GMT"
}
] | 1,664,841,600,000 | [
[
"Bellomarini",
"Luigi",
""
],
[
"Laurenza",
"Eleonora",
""
],
[
"Sallinger",
"Emanuel",
""
],
[
"Sherkhonov",
"Evgeny",
""
]
] |
2210.00315 | Trevor Bench-Capon | Trevor Bench-Capon and Katie Atkinson | Using Argumentation Schemes to Model Legal Reasoning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present argumentation schemes to model reasoning with legal cases. We
provide schemes for each of the three stages that take place after the facts
are established: factor ascription, issue resolution and outcome determination.
The schemes are illustrated with examples from a specific legal domain, US
Trade Secrets law, and the wider applicability of these schemes is discussed.
| [
{
"version": "v1",
"created": "Sat, 1 Oct 2022 16:38:28 GMT"
}
] | 1,664,841,600,000 | [
[
"Bench-Capon",
"Trevor",
""
],
[
"Atkinson",
"Katie",
""
]
] |
2210.00852 | Anahita Jamshidnejad | Anahita Jamshidnejad | A note on the potentials of probabilistic and fuzzy logic | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper mainly focuses on (1) a generalized treatment of fuzzy sets of
type $n$, where $n$ is an integer larger than or equal to $1$, with an example,
mathematical discussions, and real-life interpretation of the given
mathematical concepts; (2) the potentials and links between fuzzy logic and
probability logic that have not been discussed in one document in the literature;
(3) representation of real-life random and fuzzy uncertainties and ambiguities
that arise in data-driven real-life problems, due to uncertain mathematical and
vague verbal terms in datasets.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 18:36:43 GMT"
}
] | 1,664,841,600,000 | [
[
"Jamshidnejad",
"Anahita",
""
]
] |
2210.01344 | Peter Baumgartner | Peter Baumgartner, Daniel Smith, Mashud Rana, Reena Kapoor, Elena
Tartaglia, Andreas Schutt, Ashfaqur Rahman, John Taylor, Simon Dunstall | Movement Analytics: Current Status, Application to Manufacturing, and
Future Prospects from an AI Perspective | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Data-driven decision making is becoming an integral part of manufacturing
companies. Data is collected and commonly used to improve efficiency and
produce high quality items for the customers. IoT-based and other forms of
object tracking are an emerging tool for collecting movement data of
objects/entities (e.g. human workers, moving vehicles, trolleys etc.) over
space and time. Movement data can provide valuable insights like process
bottlenecks, resource utilization, effective working time etc. that can be used
for decision making and improving efficiency.
Turning movement data into valuable information for industrial management and
decision making requires analysis methods. We refer to this process as movement
analytics. The purpose of this document is to review the current state of work
for movement analytics both in manufacturing and more broadly.
We survey relevant work from both a theoretical perspective and an
application perspective. From the theoretical perspective, we put an emphasis
on useful methods from two research areas: machine learning, and logic-based
knowledge representation. We also review their combinations in view of movement
analytics, and we discuss promising areas for future development and
application. Furthermore, we touch on constraint optimization.
From an application perspective, we review applications of these methods to
movement analytics in a general sense and across various industries. We also
describe currently available commercial off-the-shelf products for tracking in
manufacturing, and we overview main concepts of digital twins and their
applications.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2022 03:27:17 GMT"
}
] | 1,664,928,000,000 | [
[
"Baumgartner",
"Peter",
""
],
[
"Smith",
"Daniel",
""
],
[
"Rana",
"Mashud",
""
],
[
"Kapoor",
"Reena",
""
],
[
"Tartaglia",
"Elena",
""
],
[
"Schutt",
"Andreas",
""
],
[
"Rahman",
"Ashfaqur",
""
],
[
"Taylor",
"John",
""
],
[
"Dunstall",
"Simon",
""
]
] |
2210.01484 | Alexander Semenov | Alexander Semenov, Konstantin Chukharev, Egor Tarasov, Daniil
Chivilikhin and Viktor Kondratiev | Estimating the hardness of SAT encodings for Logical Equivalence
Checking of Boolean circuits | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we investigate how to estimate the hardness of Boolean
satisfiability (SAT) encodings for the Logical Equivalence Checking problem
(LEC). Meaningful estimates of hardness are important in cases when a
conventional SAT solver cannot solve a SAT instance in a reasonable time. We
show that the hardness of SAT encodings for LEC instances can be estimated
\textit{w.r.t.} some SAT partitioning. We also demonstrate the dependence of
the accuracy of the resulting estimates on the probabilistic characteristics of
a specially defined random variable associated with the considered
partitioning. The paper proposes several methods for constructing
partitionings, which, when used in practice, allow one to estimate the hardness
of SAT encodings for LEC with good accuracy. In the experimental part we
propose a class of scalable LEC tests that give extremely complex instances
with a relatively small input size $n$ of the considered circuits. For example,
for $n = 40$, none of the state-of-the-art SAT solvers can cope with the
considered tests in a reasonable time. However, these tests can be solved in
parallel using the proposed partitioning methods.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2022 09:19:13 GMT"
}
] | 1,664,928,000,000 | [
[
"Semenov",
"Alexander",
""
],
[
"Chukharev",
"Konstantin",
""
],
[
"Tarasov",
"Egor",
""
],
[
"Chivilikhin",
"Daniil",
""
],
[
"Kondratiev",
"Viktor",
""
]
] |
2210.01634 | Felix Sosa | Felix A. Sosa, Tomer Ullman | Type theory in human-like learning and inference | 5 pages, 0 figures, accepted into Beyond Bayes ICML '22 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can generate reasonable answers to novel queries (Schulz, 2012): if I
asked you what kind of food you want to eat for lunch, you would respond with a
food, not a time. The thought that one would respond "After 4pm" to "What would
you like to eat" is either a joke or a mistake, and seriously entertaining it
as a lunch option would likely never happen in the first place. While
understanding how people come up with new ideas, thoughts, explanations, and
hypotheses that obey the basic constraints of a novel search space is of
central importance to cognitive science, there is no agreed-on formal model for
this kind of reasoning. We propose that a core component of any such reasoning
system is a type theory: a formal imposition of structure on the kinds of
computations an agent can perform, and how they're performed. We motivate this
proposal with three empirical observations: adaptive constraints on learning
and inference (i.e. generating reasonable hypotheses), how people draw
distinctions between improbability and impossibility, and people's ability to
reason about things at varying levels of abstraction.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2022 14:31:08 GMT"
}
] | 1,664,928,000,000 | [
[
"Sosa",
"Felix A.",
""
],
[
"Ullman",
"Tomer",
""
]
] |
2210.01766 | Areg Karapetyan Dr. | Majid Khonji, Rashid Alyassi, Wolfgang Merkt, Areg Karapetyan, Xin
Huang, Sungkweon Hong, Jorge Dias, Brian Williams | Multi-Agent Chance-Constrained Stochastic Shortest Path with Application
to Risk-Aware Intelligent Intersection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In transportation networks, where traffic lights have traditionally been used
for vehicle coordination, intersections act as natural bottlenecks. A
formidable challenge for existing automated intersections lies in detecting and
reasoning about uncertainty from the operating environment and human-driven
vehicles. In this paper, we propose a risk-aware intelligent intersection
system for autonomous vehicles (AVs) as well as human-driven vehicles (HVs). We
cast the problem as a novel class of Multi-agent Chance-Constrained Stochastic
Shortest Path (MCC-SSP) problems and devise an exact Integer Linear Programming
(ILP) formulation that is scalable in the number of agents' interaction points
(e.g., potential collision points at the intersection). In particular, when the
number of agents within an interaction point is small, which is often the case
in intersections, the ILP has a polynomial number of variables and constraints.
To further improve the running time performance, we show that the collision
risk computation can be performed offline. Additionally, a trajectory
optimization workflow is provided to generate risk-aware trajectories for any
given intersection. The proposed framework is implemented in CARLA simulator
and evaluated under a fully autonomous intersection with AVs only as well as in
a hybrid setup with a signalized intersection for HVs and an intelligent scheme
for AVs. As verified via simulations, the featured approach improves
intersection's efficiency by up to $200\%$ while also conforming to the
specified tunable risk threshold.
| [
{
"version": "v1",
"created": "Mon, 3 Oct 2022 06:49:23 GMT"
}
] | 1,664,928,000,000 | [
[
"Khonji",
"Majid",
""
],
[
"Alyassi",
"Rashid",
""
],
[
"Merkt",
"Wolfgang",
""
],
[
"Karapetyan",
"Areg",
""
],
[
"Huang",
"Xin",
""
],
[
"Hong",
"Sungkweon",
""
],
[
"Dias",
"Jorge",
""
],
[
"Williams",
"Brian",
""
]
] |
2210.02769 | Jakob Stenseke | Jakob Stenseke | Artificial virtuous agents in a multiagent tragedy of the commons | 18 pages, 5 figures, 3 tables. AI & SOCIETY (2022) | null | 10.1007/s00146-022-01569-x | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Although virtue ethics has repeatedly been proposed as a suitable framework
for the development of artificial moral agents (AMAs), it has been proven
difficult to approach from a computational perspective. In this work, we
present the first technical implementation of artificial virtuous agents (AVAs)
in moral simulations. First, we review previous conceptual and technical work
in artificial virtue ethics and describe a functionalistic path to AVAs based
on dispositional virtues, bottom-up learning, and top-down eudaimonic reward.
We then provide the details of a technical implementation in a moral simulation
based on a tragedy of the commons scenario. The experimental results show how
the AVAs learn to tackle cooperation problems while exhibiting core features of
their theoretical counterpart, including moral character, dispositional
virtues, learning from experience, and the pursuit of eudaimonia. Ultimately,
we argue that virtue ethics provides a compelling path toward morally excellent
machines and that our work provides an important starting point for such
endeavors.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2022 09:12:41 GMT"
}
] | 1,665,100,800,000 | [
[
"Stenseke",
"Jakob",
""
]
] |
2210.02807 | C. Maria Keet | Frances Gillis-Webber and C. Maria Keet | A Review of Multilingualism in and for Ontologies | 22 pages, 10 figures, 8 tables; soon to be submitted to an
international journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Multilingual Semantic Web has been in focus for over a decade.
Multilingualism in Linked Data and RDF has shown substantial adoption, but this
is unclear for ontologies since the last review 15 years ago. One of the design
goals for OWL was internationalisation, with the aim that an ontology is usable
across languages and cultures. Much research to improve on multilingual
ontologies has taken place in the meantime, and presumably multilingual linked
data could use multilingual ontologies. Therefore, this review seeks to (i)
elucidate and compare the modelling options for multilingual ontologies, (ii)
examine extant ontologies for their multilingualism, and (iii) evaluate
ontology editors for their ability to manage a multilingual ontology. Nine
different principal approaches for modelling multilinguality in ontologies were
identified, which fall into either of the following approaches: using
multilingual labels, linguistic models, or a mapping-based approach. They are
compared on design by means of an ad hoc visualisation mode of modelling
multilingual information for ontologies, shortcomings, and what issues they aim
to solve. For the ontologies, we extracted production-level and accessible
ontologies from BioPortal and the LOV repositories, which had, at best, 6.77%
and 15.74% multilingual ontologies, respectively, where most of them have only
partial translations and they all use a labels-based approach only. Based on a
set of nine tool requirements for managing multilingual ontologies, the
assessment of seven relevant ontology editors showed that there are significant
gaps in tooling support, with VocBench 3 coming closest to meeting them all. This
stock-taking may function as a new baseline and motivate new research
directions for multilingual ontologies.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2022 10:35:07 GMT"
}
] | 1,665,100,800,000 | [
[
"Gillis-Webber",
"Frances",
""
],
[
"Keet",
"C. Maria",
""
]
] |
2210.03455 | Mudit Verma | Mudit Verma, Ayush Kharkwal, Subbarao Kambhampati | Advice Conformance Verification by Reinforcement Learning agents for
Human-in-the-Loop | Accepted at IROS-RLCONFORM 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human-in-the-loop (HiL) reinforcement learning is gaining traction in domains
with large action and state spaces, and sparse rewards by allowing the agent to
take advice from HiL. Beyond advice accommodation, a sequential decision-making
agent must be able to express the extent to which it was able to utilize the
human advice. Subsequently, the agent should provide a means for the HiL to
inspect parts of advice that it had to reject in favor of the overall
environment objective. We introduce the problem of Advice-Conformance
Verification which requires reinforcement learning (RL) agents to provide
assurances to the human in the loop regarding how much of their advice is being
conformed to. We then propose a Tree-based lingua-franca to support this
communication, called a Preference Tree. We study two cases of good and bad
advice scenarios in MuJoCo's Humanoid environment. Through our experiments, we
show that our method can provide an interpretable means of solving the
Advice-Conformance Verification problem by conveying whether or not the agent
is using the human's advice. Finally, we present a human-user study with 20
participants that validates our method.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2022 10:56:28 GMT"
}
] | 1,665,360,000,000 | [
[
"Verma",
"Mudit",
""
],
[
"Kharkwal",
"Ayush",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2210.03918 | Jitao Xu | Jitao Xu, Hongbo Li, and Minghao Yin | Finding and Exploring Promising Search Space for the 0-1
Multidimensional Knapsack Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The 0-1 Multidimensional Knapsack Problem (MKP) is a classical NP-hard
combinatorial optimization problem with many engineering applications. In this
paper, we propose a novel algorithm combining evolutionary computation with the
exact algorithm to solve the 0-1 MKP. It maintains a set of solutions and
utilizes the information from the population to extract good partial
assignments. To find high-quality solutions, an exact algorithm is applied to
explore the promising search space specified by the good partial assignments.
The new solutions are used to update the population. Thus, the good partial
assignments evolve towards a better direction with the improvement of the
population. Extensive experimentation with commonly used benchmark sets shows
that our algorithm outperforms the state-of-the-art heuristic algorithms, TPTEA
and DQPSO, as well as the commercial solver CPlex. It finds better solutions
than the existing algorithms and provides new lower bounds for 10 large and
hard instances.
| [
{
"version": "v1",
"created": "Sat, 8 Oct 2022 05:11:47 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Oct 2023 07:40:32 GMT"
},
{
"version": "v3",
"created": "Mon, 27 May 2024 03:19:04 GMT"
}
] | 1,716,854,400,000 | [
[
"Xu",
"Jitao",
""
],
[
"Li",
"Hongbo",
""
],
[
"Yin",
"Minghao",
""
]
] |
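The 0-1 Multidimensional Knapsack entry above combines evolutionary search with an exact solver over promising partial assignments. As a small concrete reference for the problem itself, the helper below checks feasibility and profit of a 0-1 assignment; the data layout is an illustrative assumption, unrelated to the paper's algorithm.

```python
# 0-1 MKP basics: an assignment is feasible if every knapsack dimension's
# capacity is respected; its value is the total profit of selected items.

def evaluate_mkp(assignment, profits, weights, capacities):
    """Return (feasible, total_profit) for a 0-1 assignment vector."""
    for d, cap in enumerate(capacities):
        used = sum(w * x for w, x in zip(weights[d], assignment))
        if used > cap:
            return False, 0
    return True, sum(p * x for p, x in zip(profits, assignment))

profits    = [10, 7, 4]
weights    = [[3, 2, 1],     # resource consumption, dimension 0
              [4, 1, 2]]     # resource consumption, dimension 1
capacities = [4, 5]
print(evaluate_mkp([0, 1, 1], profits, weights, capacities))   # (True, 11)
```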
2210.03994 | Yuxia Geng | Yuxia Geng, Jiaoyan Chen, Jeff Z. Pan, Mingyang Chen, Song Jiang, Wen
Zhang, Huajun Chen | Relational Message Passing for Fully Inductive Knowledge Graph
Completion | Accepted by ICDE 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In knowledge graph completion (KGC), predicting triples involving emerging
entities and/or relations, which are unseen when the KG embeddings are learned,
has become a critical challenge. Subgraph reasoning with message passing is a
promising and popular solution. Some recent methods have achieved good
performance, but they (i) usually can only predict triples involving unseen
entities alone, failing to address more realistic fully inductive situations
with both unseen entities and unseen relations, and (ii) often conduct message
passing over the entities with the relation patterns not fully utilized. In
this study, we propose a new method named RMPI which uses a novel Relational
Message Passing network for fully Inductive KGC. It passes messages directly
between relations to make full use of the relation patterns for subgraph
reasoning with new techniques on graph transformation, graph pruning,
relation-aware neighborhood attention, addressing empty subgraphs, etc., and
can utilize the relation semantics defined in the ontological schema of KG.
Extensive evaluation on multiple benchmarks has shown the effectiveness of
techniques involved in RMPI and its better performance compared with the
existing methods that support fully inductive KGC. RMPI is also comparable to
the state-of-the-art partially inductive KGC methods with very promising
results achieved. Our codes and data are available at
https://github.com/zjukg/RMPI.
| [
{
"version": "v1",
"created": "Sat, 8 Oct 2022 10:35:52 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Dec 2022 10:48:23 GMT"
}
] | 1,672,617,600,000 | [
[
"Geng",
"Yuxia",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Pan",
"Jeff Z.",
""
],
[
"Chen",
"Mingyang",
""
],
[
"Jiang",
"Song",
""
],
[
"Zhang",
"Wen",
""
],
[
"Chen",
"Huajun",
""
]
] |
2210.04537 | Romain Gautron | Romain Gautron (Cirad, CIAT), Dorian Baudry (CNRS), Myriam Adam (UMR
AGAP, Cirad), Gatien N Falconnier (Cirad, CIMMYT), Marc Corbeels (Cirad,
IITA) | Towards an efficient and risk aware strategy for guiding farmers in
identifying best crop management | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identification of best performing fertilizer practices among a set of
contrasting practices with field trials is challenging as crop losses are
costly for farmers. To identify best management practices, an ''intuitive
strategy'' would be to set multi-year field trials with equal proportion of
each practice to test. Our objective was to provide an identification strategy
using a bandit algorithm that was better at minimizing farmers' losses
occurring during the identification, compared with the ''intuitive strategy''.
We used a modification of the Decision Support Systems for Agro-Technological
Transfer (DSSAT) crop model to mimic field trial responses, with a case-study
in Southern Mali. We compared fertilizer practices using a risk-aware measure,
the Conditional Value-at-Risk (CVaR), and a novel agronomic metric, the Yield
Excess (YE). YE accounts for both grain yield and agronomic nitrogen use
efficiency. The bandit-algorithm performed better than the intuitive strategy:
it increased, in most cases, farmers' protection against worst outcomes. This
study is a methodological step which opens up new horizons for risk-aware
ensemble identification of the performance of contrasting crop management
practices in real conditions.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2022 10:11:10 GMT"
}
] | 1,665,446,400,000 | [
[
"Gautron",
"Romain",
"",
"Cirad, CIAT"
],
[
"Baudry",
"Dorian",
"",
"CNRS"
],
[
"Adam",
"Myriam",
"",
"UMR\n AGAP, Cirad"
],
[
"Falconnier",
"Gatien N",
"",
"Cirad, CIMMYT"
],
[
"Corbeels",
"Marc",
"",
"Cirad,\n IITA"
]
] |
2210.05050 | Omar Costilla Reyes | Jennifer J. Sun, Megan Tjandrasuwita, Atharva Sehgal, Armando
Solar-Lezama, Swarat Chaudhuri, Yisong Yue, Omar Costilla-Reyes | Neurosymbolic Programming for Science | Neural Information Processing Systems 2022 - AI for science workshop | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neurosymbolic Programming (NP) techniques have the potential to accelerate
scientific discovery. These models combine neural and symbolic components to
learn complex patterns and representations from data, using high-level concepts
or known constraints. NP techniques can interface with symbolic domain
knowledge from scientists, such as prior knowledge and experimental context, to
produce interpretable outputs. We identify opportunities and challenges between
current NP models and scientific workflows, with real-world examples from
behavior analysis in science, with the aim of enabling the use of NP broadly for workflows
across the natural and social sciences.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2022 23:46:41 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 15:21:32 GMT"
}
] | 1,667,865,600,000 | [
[
"Sun",
"Jennifer J.",
""
],
[
"Tjandrasuwita",
"Megan",
""
],
[
"Sehgal",
"Atharva",
""
],
[
"Solar-Lezama",
"Armando",
""
],
[
"Chaudhuri",
"Swarat",
""
],
[
"Yue",
"Yisong",
""
],
[
"Costilla-Reyes",
"Omar",
""
]
] |
2210.05327 | Hana Chockler | Sander Beckers, Hana Chockler, Joseph Y. Halpern | A Causal Analysis of Harm | Accepted at NeurIPS 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As autonomous systems rapidly become ubiquitous, there is a growing need for
a legal and regulatory framework to address when and how such a system harms
someone. There have been several attempts within the philosophy literature to
define harm, but none of them has proven capable of dealing with the many
examples that have been presented, leading some to suggest that the notion of
harm should be abandoned and "replaced by more well-behaved notions". As harm
is generally something that is caused, most of these definitions have involved
causality at some level. Yet surprisingly, none of them makes use of causal
models and the definitions of actual causality that they can express. In this
paper we formally define a qualitative notion of harm that uses causal models
and is based on a well-known definition of actual causality (Halpern, 2016).
The key novelty of our definition is that it is based on contrastive causation
and uses a default utility to which the utility of actual outcomes is compared.
We show that our definition is able to handle the examples from the literature,
and illustrate its importance for reasoning about situations involving
autonomous systems.
| [
{
"version": "v1",
"created": "Tue, 11 Oct 2022 10:36:24 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Jan 2023 13:26:30 GMT"
}
] | 1,674,172,800,000 | [
[
"Beckers",
"Sander",
""
],
[
"Chockler",
"Hana",
""
],
[
"Halpern",
"Joseph Y.",
""
]
] |
2210.06877 | Xulong Zhang | Aolan Sun, Xulong Zhang, Tiandong Ling, Jianzong Wang, Ning Cheng,
Jing Xiao | Pre-Avatar: An Automatic Presentation Generation Framework Leveraging
Talking Avatar | Accepted by ICTAI2022. The 34th IEEE International Conference on
Tools with Artificial Intelligence (ICTAI) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the beginning of the COVID-19 pandemic, remote conferencing and
school-teaching have become important tools. The previous applications aim to
save the commuting cost with real-time interactions. However, our application
is going to lower the production and reproduction costs when preparing the
communication materials. This paper proposes a system called Pre-Avatar,
generating a presentation video with a talking face of a target speaker with 1
front-face photo and a 3-minute voice recording. Technically, the system
consists of three main modules, user experience interface (UEI), talking face
module and few-shot text-to-speech (TTS) module. The system firstly clones the
target speaker's voice, and then generates the speech, and finally generates an
avatar with appropriate lip and head movements. Under any scenario, users only
need to replace slides with different notes to generate another new video. The
demo has been released here and will be published as free software for use.
| [
{
"version": "v1",
"created": "Thu, 13 Oct 2022 10:02:46 GMT"
}
] | 1,665,705,600,000 | [
[
"Sun",
"Aolan",
""
],
[
"Zhang",
"Xulong",
""
],
[
"Ling",
"Tiandong",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
]
] |
2210.08007 | Ahmet Orun | Ahmet Orun | Knowledge acquisition via interactive Distributed Cognitive skill
Modules | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The human's cognitive capacity for problem solving is always limited to
his/her educational background, skills, experiences, etc. Hence, it is often
insufficient for solving extraordinary problems, especially when there
is a time restriction. Nowadays such personal cognitive limitations are
overcome to some extent by computational utilities (e.g. program packages,
internet, etc.) where each one provides a specific background skill to the
individual to solve a particular problem. Nevertheless these models are all
based on already available conventional tools or knowledge and are unable to solve
spontaneous, unique problems, unlike human procedural cognitive skills. But
unfortunately such low-level skills cannot be modelled and stored in a
conventional way like classical models and knowledge. This work aims to
introduce an early stage of a modular approach to procedural skill acquisition
and storage via distributed cognitive skill modules, which provide a unique
opportunity to extend the limits of their exploitation.
| [
{
"version": "v1",
"created": "Thu, 13 Oct 2022 01:41:11 GMT"
}
] | 1,666,051,200,000 | [
[
"Orun",
"Ahmet",
""
]
] |
2210.08153 | Jin Zhang | Jin Zhang, Siyuan Li, Chongjie Zhang | CUP: Critic-Guided Policy Reuse | null | NeurIPS 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The ability to reuse previous policies is an important aspect of human
intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning
(DRL) agent needs to decide when to reuse and which source policies to reuse.
Previous methods solve this problem by introducing extra components to the
underlying algorithm, such as hierarchical high-level policies over source
policies, or estimations of source policies' value functions on the target
task. However, training these components induces either optimization
non-stationarity or heavy sampling cost, significantly impairing the
effectiveness of transfer. To tackle this problem, we propose a novel policy
reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training
any extra components and efficiently reuses source policies. CUP utilizes the
critic, a common component in actor-critic methods, to evaluate and choose
source policies. At each state, CUP chooses the source policy that has the
largest one-step improvement over the current target policy, and forms a
guidance policy. The guidance policy is theoretically guaranteed to be a
monotonic improvement over the current target policy. Then the target policy is
regularized to imitate the guidance policy to perform efficient policy search.
Empirical results demonstrate that CUP achieves efficient transfer and
significantly outperforms baseline algorithms.
| [
{
"version": "v1",
"created": "Sat, 15 Oct 2022 00:53:03 GMT"
}
] | 1,666,051,200,000 | [
[
"Zhang",
"Jin",
""
],
[
"Li",
"Siyuan",
""
],
[
"Zhang",
"Chongjie",
""
]
] |
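The CUP entry above chooses, at each state, the source policy whose proposed action the critic values most, falling back to the target policy otherwise. A minimal sketch of that selection step is shown below; `critic_q` and the policy callables are assumed illustrative interfaces, not the paper's actual API.

```python
# Critic-guided choice among source policies at a single state.

def guidance_action(state, target_policy, source_policies, critic_q):
    """Action of the best source policy if the critic prefers it,
    otherwise the target policy's own action."""
    best_action = target_policy(state)
    best_value = critic_q(state, best_action)
    for policy in source_policies:
        action = policy(state)
        value = critic_q(state, action)
        if value > best_value:
            best_value, best_action = value, action
    return best_action
```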
2210.08203 | Ang Li | Ang Li, Song Jiang, Yizhou Sun, Judea Pearl | Unit Selection: Learning Benefit Function from Finite Population Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The unit selection problem is to identify a group of individuals who are most
likely to exhibit a desired mode of behavior, for example, selecting
individuals who would respond one way if incentivized and a different way if
not. The unit selection problem consists of evaluation and search subproblems.
Li and Pearl defined the "benefit function" to evaluate the average payoff of
selecting a certain individual with given characteristics. The search
subproblem is then to design an algorithm to identify the characteristics that
maximize the above benefit function. The hardness of the search subproblem
arises due to the large number of characteristics available for each individual
and the sparsity of the data available in each cell of characteristics. In this
paper, we present a machine learning framework that uses the bounds of the
benefit function that are estimable from the finite population data to learn
the bounds of the benefit function for each cell of characteristics. Therefore,
we could easily obtain the characteristics that maximize the benefit function.
| [
{
"version": "v1",
"created": "Sat, 15 Oct 2022 05:48:01 GMT"
}
] | 1,666,051,200,000 | [
[
"Li",
"Ang",
""
],
[
"Jiang",
"Song",
""
],
[
"Sun",
"Yizhou",
""
],
[
"Pearl",
"Judea",
""
]
] |
2210.08263 | Sheel Shah | Sheel Shah, Shubham Gupta | Reinforcement Learning for ConnectX | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | ConnectX is a two-player game that generalizes the popular game Connect 4.
The objective is to get X coins across a row, column, or diagonal of an M x N
board. The first player to do so wins the game. The parameters (M, N, X) are
allowed to change in each game, making ConnectX a novel and challenging
problem. In this paper, we present our work on the implementation and
modification of various reinforcement learning algorithms to play ConnectX.
| [
{
"version": "v1",
"created": "Sat, 15 Oct 2022 11:38:19 GMT"
}
] | 1,666,051,200,000 | [
[
"Shah",
"Sheel",
""
],
[
"Gupta",
"Shubham",
""
]
] |
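The ConnectX entry above generalizes Connect 4 to X-in-a-row on an M x N board. For concreteness, a small win-check utility under an assumed board encoding (0 = empty, positive integers for players) is sketched below; it is illustrative and not tied to the paper's agents.

```python
# Does `player` have `x` aligned coins (row, column, or diagonal) on the board?

def has_connect_x(board, player, x):
    rows, cols = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                end_r, end_c = r + (x - 1) * dr, c + (x - 1) * dc
                if 0 <= end_r < rows and 0 <= end_c < cols and all(
                    board[r + k * dr][c + k * dc] == player for k in range(x)
                ):
                    return True
    return False

board = [[0, 0, 0, 0],
         [0, 1, 1, 1],
         [2, 2, 2, 0]]
print(has_connect_x(board, 1, 3), has_connect_x(board, 2, 4))  # True False
```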
2210.08445 | Hsu-Chieh Hu | Hsu-Chieh Hu, Joseph Zhou, Gregory J. Barlow, Stephen F. Smith | Connection-Based Scheduling for Real-Time Intersection Control | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a heuristic scheduling algorithm for real-time adaptive traffic
signal control to reduce traffic congestion. This algorithm adopts a lane-based
model that estimates the arrival time of all vehicles approaching an
intersection through different lanes, and then computes a schedule (i.e., a
signal timing plan) that minimizes the cumulative delay incurred by all
approaching vehicles. State space, pruning checks and an admissible heuristic
for A* search are described and shown to be capable of generating an
intersection schedule in real-time (i.e., every second). Due to the
effectiveness of the heuristics, the proposed approach outperforms a less
expressive Dynamic Programming approach and previous A*-based approaches in
run-time performance, both in simulated test environments and actual field
tests.
| [
{
"version": "v1",
"created": "Sun, 16 Oct 2022 04:37:03 GMT"
}
] | 1,666,051,200,000 | [
[
"Hu",
"Hsu-Chieh",
""
],
[
"Zhou",
"Joseph",
""
],
[
"Barlow",
"Gregory J.",
""
],
[
"Smith",
"Stephen F.",
""
]
] |
2210.08608 | Hao Yan | Jiayu Huang, Yutian Pang, Yongming Liu, Hao Yan | Posterior Regularized Bayesian Neural Network Incorporating Soft and
Hard Knowledge Constraints | Accepted in Knowledge-Based System | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neural Networks (NNs) have been widely {used in supervised learning} due to
their ability to model complex nonlinear patterns, often presented in
high-dimensional data such as images and text. However, traditional NNs often
lack the ability for uncertainty quantification. Bayesian NNs (BNNs) could help
measure the uncertainty by considering the distributions of the NN model
parameters. Besides, domain knowledge is commonly available and could improve
the performance of BNNs if it can be appropriately incorporated. In this work,
we propose a novel Posterior-Regularized Bayesian Neural Network (PR-BNN) model
by incorporating different types of knowledge constraints, such as the soft and
hard constraints, as a posterior regularization term. Furthermore, we propose
to combine the augmented Lagrangian method and the existing BNN solvers for
efficient inference. The experiments in simulation and two case studies about
aviation landing prediction and solar energy output prediction have shown the
benefits of the knowledge constraints and the performance improvement of the proposed model
over traditional BNNs without the constraints.
| [
{
"version": "v1",
"created": "Sun, 16 Oct 2022 18:58:50 GMT"
}
] | 1,666,051,200,000 | [
[
"Huang",
"Jiayu",
""
],
[
"Pang",
"Yutian",
""
],
[
"Liu",
"Yongming",
""
],
[
"Yan",
"Hao",
""
]
] |
2210.08713 | Xiaohui Song | Xiaohui Song, Longtao Huang, Hui Xue, Songlin Hu | Supervised Prototypical Contrastive Learning for Emotion Recognition in
Conversation | Accepted by EMNLP 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Capturing emotions within a conversation plays an essential role in modern
dialogue systems. However, the weak correlation between emotions and semantics
brings many challenges to emotion recognition in conversation (ERC). Even for
semantically similar utterances, the emotion may vary drastically depending on
contexts or speakers. In this paper, we propose a Supervised Prototypical
Contrastive Learning (SPCL) loss for the ERC task. Leveraging the Prototypical
Network, the SPCL targets at solving the imbalanced classification problem
through contrastive learning and does not require a large batch size.
Meanwhile, we design a difficulty measure function based on the distance
between classes and introduce curriculum learning to alleviate the impact of
extreme samples. We achieve state-of-the-art results on three widely used
benchmarks. Further, we conduct analytical experiments to demonstrate the
effectiveness of our proposed SPCL and curriculum learning strategy. We release
the code at https://github.com/caskcsg/SPCL.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 03:08:23 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 08:52:55 GMT"
}
] | 1,666,224,000,000 | [
[
"Song",
"Xiaohui",
""
],
[
"Huang",
"Longtao",
""
],
[
"Xue",
"Hui",
""
],
[
"Hu",
"Songlin",
""
]
] |
2210.08809 | Jingwei Yi | Jingwei Yi, Fangzhao Wu, Chuhan Wu, Xiaolong Huang, Binxing Jiao,
Guangzhong Sun, Xing Xie | Effective and Efficient Query-aware Snippet Extraction for Web Search | Accepted by EMNLP2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Query-aware webpage snippet extraction is widely used in search engines to
help users better understand the content of the returned webpages before
clicking. Although important, it is very rarely studied. In this paper, we
propose an effective query-aware webpage snippet extraction method named
DeepQSE, aiming to select a few sentences which can best summarize the webpage
content in the context of the input query. DeepQSE first learns query-aware
sentence representations for each sentence to capture the fine-grained
relevance between query and sentence, and then learns document-aware
query-sentence relevance representations for snippet extraction. Since the
query and each sentence are jointly modeled in DeepQSE, its online inference
may be slow. Thus, we further propose an efficient version of DeepQSE, named
Efficient-DeepQSE, which can significantly improve the inference speed of
DeepQSE without affecting its performance. The core idea of Efficient-DeepQSE
is to decompose the query-aware snippet extraction task into two stages, i.e.,
a coarse-grained candidate sentence selection stage where sentence
representations can be cached, and a fine-grained relevance modeling stage.
Experiments on two real-world datasets validate the effectiveness and
efficiency of our methods.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 07:46:17 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Oct 2022 10:32:59 GMT"
}
] | 1,666,915,200,000 | [
[
"Yi",
"Jingwei",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Wu",
"Chuhan",
""
],
[
"Huang",
"Xiaolong",
""
],
[
"Jiao",
"Binxing",
""
],
[
"Sun",
"Guangzhong",
""
],
[
"Xie",
"Xing",
""
]
] |
2210.08874 | Ang Li | Ang Li, Judea Pearl | Probabilities of Causation: Role of Observational Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Probabilities of causation play a crucial role in modern decision-making.
Pearl defined three binary probabilities of causation, the probability of
necessity and sufficiency (PNS), the probability of sufficiency (PS), and the
probability of necessity (PN). These probabilities were then bounded by Tian
and Pearl using a combination of experimental and observational data. However,
observational data are not always available in practice; in such a case, Tian
and Pearl's Theorem provided valid but less effective bounds using pure
experimental data. In this paper, we discuss the conditions under which
observational data are worth considering to improve the quality of the bounds.
More specifically, we define the expected improvement of the bounds by assuming
the observational distributions are uniformly distributed on their feasible
interval. We further apply the proposed theorems to the unit selection
problem defined by Li and Pearl.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 09:10:11 GMT"
}
] | 1,666,051,200,000 | [
[
"Li",
"Ang",
""
],
[
"Pearl",
"Judea",
""
]
] |
2210.08906 | Andrea Tocchetti | Andrea Tocchetti, Lorenzo Corti, Agathe Balayn, Mireia Yurrita, Philip
Lippmann, Marco Brambilla, and Jie Yang | A.I. Robustness: a Human-Centered Perspective on Technological
Challenges and Opportunities | Under Review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite the impressive performance of Artificial Intelligence (AI) systems,
their robustness remains elusive and constitutes a key issue that impedes
large-scale adoption. Robustness has been studied in many domains of AI, yet
with different interpretations across domains and contexts. In this work, we
systematically survey the recent progress to provide a reconciled terminology
of concepts around AI robustness. We introduce three taxonomies to organize and
describe the literature both from a fundamental and applied point of view: 1)
robustness by methods and approaches in different phases of the machine
learning pipeline; 2) robustness for specific model architectures, tasks, and
systems; and in addition, 3) robustness assessment methodologies and insights,
particularly the trade-offs with other trustworthiness properties. Finally, we
identify and discuss research gaps and opportunities and give an outlook on the
field. We highlight the central role of humans in evaluating and enhancing AI
robustness, considering the necessary knowledge humans can provide, and discuss
the need for better understanding practices and developing supportive tools in
the future.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 10:00:51 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 07:37:47 GMT"
}
] | 1,666,224,000,000 | [
[
"Tocchetti",
"Andrea",
""
],
[
"Corti",
"Lorenzo",
""
],
[
"Balayn",
"Agathe",
""
],
[
"Yurrita",
"Mireia",
""
],
[
"Lippmann",
"Philip",
""
],
[
"Brambilla",
"Marco",
""
],
[
"Yang",
"Jie",
""
]
] |
2210.08956 | Pan Li | Pan Li, Peizhuo Lv, Shenchen Zhu, Ruigang Liang, Kai Chen, | A Novel Membership Inference Attack against Dynamic Neural Networks by
Utilizing Policy Networks Information | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unlike traditional static deep neural networks (DNNs), dynamic neural
networks (NNs) adjust their structures or parameters to different inputs to
guarantee accuracy and computational efficiency, and they have recently become
an emerging research area in deep learning. Although traditional static
DNNs are vulnerable to the membership inference attack (MIA), which aims to
infer whether a particular point was used to train the model, little is known
about how such an attack performs on the dynamic NNs. In this paper, we propose
a novel MI attack against dynamic NNs, leveraging the unique policy networks
mechanism of dynamic NNs to increase the effectiveness of membership inference.
We conducted extensive experiments using two dynamic NNs, i.e., GaterNet,
BlockDrop, on four mainstream image classification tasks, i.e., CIFAR-10,
CIFAR-100, STL-10, and GTSRB. The evaluation results demonstrate that the
control-flow information can significantly promote the MIA. Based on
backbone-finetuning and information-fusion, our method achieves better results
than the baseline attack and the traditional attack using intermediate
information.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 11:51:02 GMT"
}
] | 1,666,051,200,000 | [
[
"Li",
"Pan",
""
],
[
"Lv",
"Peizhuo",
""
],
[
"Zhu",
"Shenchen",
""
],
[
"Liang",
"Ruigang",
""
],
[
"Chen",
"Kai",
""
]
] |
2210.08994 | Seng-Beng Ho | Seng-Beng Ho, Zhaoxia Wang, Boon-Kiat Quek, Erik Cambria | Knowledge Representation for Conceptual, Motivational, and Affective
Processes in Natural Language Communication | 8 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural language communication is an intricate and complex process. The
speaker usually begins with an intention and motivation of what is to be
communicated, and what effects are expected from the communication, while
taking into consideration the listener's mental model to concoct an appropriate
sentence. The listener likewise has to interpret what the speaker means, and
respond accordingly, also with the speaker's mental state in mind. To do this
successfully, conceptual, motivational, and affective processes have to be
represented appropriately to drive the language generation and understanding
processes. Language processing has succeeded well with the big data approach in
applications such as chatbots and machine translation. However, in human-robot
collaborative social communication and in using natural language for delivering
precise instructions to robots, a deeper representation of the conceptual,
motivational, and affective processes is needed. This paper capitalizes on the
UGALRS (Unified General Autonomous and Language Reasoning System) framework and
the CD+ (Conceptual Representation Plus) representational scheme to illustrate
how social communication through language is supported by a knowledge
representational scheme that handles conceptual, motivational, and affective
processes in a deep and general way. Though a small set of concepts,
motivations, and emotions is treated in this paper, its main contribution is in
articulating a general framework of knowledge representation and processing to
link these aspects together in serving the purpose of natural language
communication for an intelligent system.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2022 01:37:50 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Oct 2022 07:08:26 GMT"
}
] | 1,666,310,400,000 | [
[
"Ho",
"Seng-Beng",
""
],
[
"Wang",
"Zhaoxia",
""
],
[
"Quek",
"Boon-Kiat",
""
],
[
"Cambria",
"Erik",
""
]
] |
2210.08998 | Richard Freedman | Richard G. Freedman, Joseph B. Mueller, Jack Ladwig, Steven Johnston,
David McDonald, Helen Wauck, Ruta Wheelock, Hayley Borck | A Symbolic Representation of Human Posture for Interpretable Learning
and Reasoning | Accepted for presentation at the AAAI 2022 Fall Symposium Series, in
the symposium for Artificial Intelligence for Human-Robot Interaction | null | null | AIHRI/2022/6066 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robots that interact with humans in a physical space or application need to
think about the person's posture, which typically comes from visual sensors
like cameras and infra-red. Artificial intelligence and machine learning
algorithms use information from these sensors either directly or after some
level of symbolic abstraction, and the latter usually partitions the range of
observed values to discretize the continuous signal data. Although these
representations have been effective in a variety of algorithms with respect to
accuracy and task completion, the underlying models are rarely interpretable,
which also makes their outputs more difficult to explain to people who request
them. Instead of focusing on the possible sensor values that are familiar to a
machine, we introduce a qualitative spatial reasoning approach that describes
the human posture in terms that are more familiar to people. This paper
explores the derivation of our symbolic representation at two levels of detail
and its preliminary use as features for interpretable activity recognition.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2022 12:22:13 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 03:11:44 GMT"
}
] | 1,666,656,000,000 | [
[
"Freedman",
"Richard G.",
""
],
[
"Mueller",
"Joseph B.",
""
],
[
"Ladwig",
"Jack",
""
],
[
"Johnston",
"Steven",
""
],
[
"McDonald",
"David",
""
],
[
"Wauck",
"Helen",
""
],
[
"Wheelock",
"Ruta",
""
],
[
"Borck",
"Hayley",
""
]
] |
2210.09708 | Zixuan Li | Zixuan Li, Zhongni Hou, Saiping Guan, Xiaolong Jin, Weihua Peng, Long
Bai, Yajuan Lyu, Wei Li, Jiafeng Guo, Xueqi Cheng | HiSMatch: Historical Structure Matching based Temporal Knowledge Graph
Reasoning | Full paper of EMNLP 2022 Findings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Temporal Knowledge Graph (TKG) is a sequence of KGs with respective
timestamps, which adopts quadruples in the form of (\emph{subject},
\emph{relation}, \emph{object}, \emph{timestamp}) to describe dynamic facts.
TKG reasoning has facilitated many real-world applications via answering such
queries as (\emph{query entity}, \emph{query relation}, \emph{?}, \emph{future
timestamp}) about the future. This is actually a matching task between a query and
candidate entities based on their historical structures, which reflect
behavioral trends of the entities at different timestamps. In addition, recent
KGs provide background knowledge of all the entities, which is also helpful for
the matching. Thus, in this paper, we propose the \textbf{Hi}storical
\textbf{S}tructure \textbf{Match}ing (\textbf{HiSMatch}) model. It applies two
structure encoders to capture the semantic information contained in the
historical structures of the query and candidate entities. Besides, it adopts
another encoder to integrate the background knowledge into the model. TKG
reasoning experiments on six benchmark datasets demonstrate the significant
improvement of the proposed HiSMatch model, with up to 5.6\% performance
improvement in MRR, compared to the state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2022 09:39:26 GMT"
}
] | 1,666,137,600,000 | [
[
"Li",
"Zixuan",
""
],
[
"Hou",
"Zhongni",
""
],
[
"Guan",
"Saiping",
""
],
[
"Jin",
"Xiaolong",
""
],
[
"Peng",
"Weihua",
""
],
[
"Bai",
"Long",
""
],
[
"Lyu",
"Yajuan",
""
],
[
"Li",
"Wei",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
2210.09880 | Yudong Xu | Yudong Xu, Elias B. Khalil, Scott Sanner | Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus | 9 pages, 5 figures, to be published in AAAI-23 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Abstraction and Reasoning Corpus (ARC) aims at benchmarking the
performance of general artificial intelligence algorithms. The ARC's focus on
broad generalization and few-shot learning has made it difficult to solve using
pure machine learning. A more promising approach has been to perform program
synthesis within an appropriately designed Domain Specific Language (DSL).
However, these too have seen limited success. We propose Abstract Reasoning
with Graph Abstractions (ARGA), a new object-centric framework that first
represents images using graphs and then performs a search for a correct program
in a DSL that is based on the abstracted graph space. The complexity of this
combinatorial search is tamed through the use of constraint acquisition, state
hashing, and Tabu search. An extensive set of experiments demonstrates the
promise of ARGA in tackling some of the complicated object-centric tasks of the
ARC rather efficiently, producing programs that are correct and easy to
understand.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2022 14:13:43 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2022 00:54:33 GMT"
}
] | 1,670,198,400,000 | [
[
"Xu",
"Yudong",
""
],
[
"Khalil",
"Elias B.",
""
],
[
"Sanner",
"Scott",
""
]
] |
2210.09992 | Chun-Kit Ngan | Chun-Kit Ngan, Alexander Brodsky | Optimal Event Monitoring through Internet Mashup over Multivariate Time
Series | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We propose a Web-Mashup Application Service Framework for Multivariate Time
Series Analytics (MTSA) that supports the services of model definitions,
querying, parameter learning, model evaluations, data monitoring, decision
recommendations, and web portals. This framework maintains the advantage of
combining the strengths of both the domain-knowledge-based and the
formal-learning-based approaches and is designed for a more general class of
problems over multivariate time series. More specifically, we identify a
general-hybrid-based model, MTSA-Parameter Estimation, to solve this class of
problems in which the objective function is maximized or minimized from the
optimal decision parameters regardless of particular time points. This model
also allows domain experts to include multiple types of constraints, e.g.,
global constraints and monitoring constraints. We further extend the MTSA data
model and query language to support this class of problems for the services of
learning, monitoring, and recommendation. At the end, we conduct an
experimental case study for a university campus microgrid as a practical
example to demonstrate our proposed framework, models, and language.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2022 16:56:17 GMT"
}
] | 1,666,137,600,000 | [
[
"Ngan",
"Chun-Kit",
""
],
[
"Brodsky",
"Alexander",
""
]
] |
2210.10374 | Jiang Zetian | Zetian Jiang, Jiaxin Lu, Tianzhe Wang, Junchi Yan | Learning Universe Model for Partial Matching Networks over Multiple
Graphs | 17 pages, 16 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the general setting for partial matching of two or multiple
graphs, in the sense that not necessarily all the nodes in one graph can find
their correspondences in another graph and vice versa. We take a universe
matching perspective to this ubiquitous problem, whereby each node is either
matched into an anchor in a virtual universe graph or regarded as an outlier.
Such a universe matching scheme enjoys a few important merits, which have not
been adopted in existing learning-based graph matching (GM) literature. First,
the subtle logic for inlier matching and outlier detection can be clearly
modeled, which is otherwise less convenient to handle in the pairwise matching
scheme. Second, it enables end-to-end learning especially for universe level
affinity metric learning for inliers matching, and loss design for gathering
outliers together. Third, the resulting matching model can easily handle newly
arriving graphs under online matching, or even graphs coming from different
categories of the training set. To the best of our knowledge, this is the first
deep
learning network that can cope with two-graph matching, multiple-graph
matching, online matching, and mixture graph matching simultaneously. Extensive
experimental results show the state-of-the-art performance of our method in
these settings.
| [
{
"version": "v1",
"created": "Wed, 19 Oct 2022 08:34:00 GMT"
}
] | 1,666,224,000,000 | [
[
"Jiang",
"Zetian",
""
],
[
"Lu",
"Jiaxin",
""
],
[
"Wang",
"Tianzhe",
""
],
[
"Yan",
"Junchi",
""
]
] |
2210.10903 | Rashid Mehmood PhD | Istiak Ahmad, Fahad AlQurashi, Rashid Mehmood | Machine and Deep Learning Methods with Manual and Automatic Labelling
for News Classification in Bangla Language | 29 pages, 30 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Research in Natural Language Processing (NLP) has increasingly become
important due to applications such as text classification, text mining,
sentiment analysis, POS tagging, named entity recognition, textual entailment,
and many others. This paper introduces several machine and deep learning
methods with manual and automatic labelling for news classification in the
Bangla language. We implemented several machine (ML) and deep learning (DL)
algorithms. The ML algorithms are Logistic Regression (LR), Stochastic Gradient
Descent (SGD), Support Vector Machine (SVM), Random Forest (RF), and K-Nearest
Neighbour (KNN), used with Bag of Words (BoW), Term Frequency-Inverse Document
Frequency (TF-IDF), and Doc2Vec embedding models. The DL algorithms are Long
Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), Gated Recurrent Unit
(GRU), and Convolutional Neural Network (CNN), used with Word2vec, Glove, and
FastText word embedding models. We develop automatic labelling methods using
Latent Dirichlet Allocation (LDA) and investigate the performance of
single-label and multi-label article classification methods. To investigate
performance, we developed from scratch Potrika, the largest and the most
extensive dataset for news classification in the Bangla language, comprising
185.51 million words and 12.57 million sentences contained in 664,880 news
articles in eight distinct categories, curated from six popular online news
portals in Bangladesh for the period 2014-2020. GRU and Fasttext with 91.83%
achieve the highest accuracy for manually-labelled data. For the automatic
labelling case, KNN and Doc2Vec at 57.72% and 75% achieve the highest accuracy
for single-label and multi-label data, respectively. The methods developed in
this paper are expected to advance research in Bangla and other languages.
| [
{
"version": "v1",
"created": "Wed, 19 Oct 2022 21:53:49 GMT"
}
] | 1,666,310,400,000 | [
[
"Ahmad",
"Istiak",
""
],
[
"AlQurashi",
"Fahad",
""
],
[
"Mehmood",
"Rashid",
""
]
] |
2210.11151 | V\'ictor Guti\'errez-Basulto | Zhiwei Hu, V\'ictor Guti\'errez-Basulto, Zhiliang Xiang, Ru Li, Jeff
Z. Pan | Transformer-based Entity Typing in Knowledge Graphs | Paper accepted at EMNLP 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We investigate the knowledge graph entity typing task which aims at inferring
plausible entity types. In this paper, we propose a novel Transformer-based
Entity Typing (TET) approach, effectively encoding the content of neighbors of
an entity. More precisely, TET is composed of three different mechanisms: a
local transformer inferring missing types of an entity by independently
encoding the information provided by each of its neighbors; a global
transformer aggregating the information of all neighbors of an entity into a
single long sequence to reason about more complex entity types; and a context
transformer integrating neighbors' content based on their contribution to the
type inference through information exchange between neighbor pairs.
Furthermore, TET uses information about class membership of types to
semantically strengthen the representation of an entity. Experiments on two
real-world datasets demonstrate the superior performance of TET compared to the
state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2022 10:40:25 GMT"
}
] | 1,666,310,400,000 | [
[
"Hu",
"Zhiwei",
""
],
[
"Gutiérrez-Basulto",
"Víctor",
""
],
[
"Xiang",
"Zhiliang",
""
],
[
"Li",
"Ru",
""
],
[
"Pan",
"Jeff Z.",
""
]
] |
2210.11174 | Md. Nurul Muttakin | Md Nurul Muttakin, Md Iqbal Hossain, Md Saidur Rahman | Overlapping Community Detection using Dynamic Dilated Aggregation in
Deep Residual GCN | Will resubmit later | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Overlapping community detection is a key problem in graph mining. Some
research has considered applying graph convolutional networks (GCN) to tackle
the problem. However, it is still challenging to incorporate deep graph
convolutional networks in the case of general irregular graphs. In this study,
we design a deep dynamic residual graph convolutional network (DynaResGCN)
based on our novel dynamic dilated aggregation mechanisms and a unified
end-to-end encoder-decoder-based framework to detect overlapping communities in
networks. The deep DynaResGCN model is used as the encoder, whereas we
incorporate the Bernoulli-Poisson (BP) model as the decoder. We then
apply our overlapping community detection framework to a research topics
dataset without having ground truth, a set of networks from Facebook having a
reliable (hand-labeled) ground truth, and in a set of very large co-authorship
networks having empirical (not hand-labeled) ground truth. Our experimentation
on these datasets shows significantly superior performance over many
state-of-the-art methods for the detection of overlapping communities in
networks.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2022 11:22:58 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 13:21:47 GMT"
}
] | 1,695,686,400,000 | [
[
"Muttakin",
"Md Nurul",
""
],
[
"Hossain",
"Md Iqbal",
""
],
[
"Rahman",
"Md Saidur",
""
]
] |
2210.11194 | Qian-Wei Wang | Qian-Wei Wang, Bowen Zhao, Mingyan Zhu, Tianxiang Li, Zimo Liu,
Shu-Tao Xia | Controller-Guided Partial Label Consistency Regularization with
Unlabeled Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partial label learning (PLL) learns from training examples each associated
with multiple candidate labels, among which only one is valid. In recent years,
benefiting from the strong capability of dealing with ambiguous supervision and
the impetus of modern data augmentation methods, consistency
regularization-based PLL methods have achieved a series of successes and become
mainstream. However, as the partial annotation becomes insufficient, their
performances drop significantly. In this paper, we leverage easily accessible
unlabeled examples to facilitate the partial label consistency regularization.
In addition to a partial supervised loss, our method performs a
controller-guided consistency regularization at both the label-level and
representation-level with the help of unlabeled data. To minimize the
disadvantages of insufficient capabilities of the initial supervised model, we
use the controller to estimate the confidence of each current prediction to
guide the subsequent consistency regularization. Furthermore, we dynamically
adjust the confidence thresholds so that the number of samples of each class
participating in consistency regularization remains roughly equal to alleviate
the problem of class-imbalance. Experiments show that our method achieves
satisfactory performances in more practical situations, and its modules can be
applied to existing PLL methods to enhance their capabilities.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2022 12:15:13 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Dec 2023 11:53:26 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Dec 2023 13:23:02 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Feb 2024 13:51:07 GMT"
}
] | 1,709,078,400,000 | [
[
"Wang",
"Qian-Wei",
""
],
[
"Zhao",
"Bowen",
""
],
[
"Zhu",
"Mingyan",
""
],
[
"Li",
"Tianxiang",
""
],
[
"Liu",
"Zimo",
""
],
[
"Xia",
"Shu-Tao",
""
]
] |
2210.11217 | Xinghan Liu | Xinghan Liu, Emiliano Lorini, Antonino Rotolo, Giovanni Sartor | Modelling and Explaining Legal Case-based Reasoners through Classifiers | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper brings together two lines of research: factor-based models of
case-based reasoning (CBR) and the logical specification of classifiers.
Logical approaches to classifiers capture the connection between features and
outcomes in classifier systems. Factor-based reasoning is a popular approach to
reasoning by precedent in AI & Law. Horty (2011) has developed the factor-based
models of precedent into a theory of precedential constraint. In this paper we
combine the modal logic approach (binary-input classifier logic, BCL) to classifiers
and their explanations given by Liu & Lorini (2021) with Horty's account of
factor-based CBR, since both a classifier and CBR map sets of features to
decisions or classifications. We reformulate case bases of Horty in the
language of BCL, and give several representation results. Furthermore, we show
how notions of CBR, e.g. reason, preference between reasons, can be analyzed by
notions of classifier system.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2022 12:51:12 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 09:25:43 GMT"
}
] | 1,670,544,000,000 | [
[
"Liu",
"Xinghan",
""
],
[
"Lorini",
"Emiliano",
""
],
[
"Rotolo",
"Antonino",
""
],
[
"Sartor",
"Giovanni",
""
]
] |
2210.11298 | Zhuo Chen | Zhuo Chen, Wen Zhang, Yufeng Huang, Mingyang Chen, Yuxia Geng, Hongtao
Yu, Zhen Bi, Yichi Zhang, Zhen Yao, Wenting Song, Xinliang Wu, Yi Yang,
Mingyi Chen, Zhaoyang Lian, Yingying Li, Lei Cheng, Huajun Chen | Tele-Knowledge Pre-training for Fault Analysis | ICDE 2023 https://github.com/hackerchenzhuo/KTeleBERT | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we share our experience on tele-knowledge pre-training for
fault analysis, a crucial task in telecommunication applications that requires
a wide range of knowledge normally found in both machine log data and product
documents. To organize this knowledge from experts uniformly, we propose to
create a Tele-KG (tele-knowledge graph). Using this valuable data, we further
propose a tele-domain language pre-training model TeleBERT and its
knowledge-enhanced version, a tele-knowledge re-training model KTeleBERT, which
includes effective prompt hints, adaptive numerical data encoding, and two
knowledge injection paradigms. Concretely, our proposal includes two stages:
first, pre-training TeleBERT on 20 million tele-related corpora, and then
re-training it on 1 million causal and machine-related corpora to obtain
KTeleBERT. Our evaluation on multiple tasks related to fault analysis in
tele-applications, including root-cause analysis, event association prediction,
and fault chain tracing, shows that pre-training a language model with
tele-domain data is beneficial for downstream tasks. Moreover, the KTeleBERT
re-training further improves the performance of task models, highlighting the
effectiveness of incorporating diverse tele-knowledge into the model.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2022 14:31:48 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 13:31:52 GMT"
}
] | 1,676,851,200,000 | [
[
"Chen",
"Zhuo",
""
],
[
"Zhang",
"Wen",
""
],
[
"Huang",
"Yufeng",
""
],
[
"Chen",
"Mingyang",
""
],
[
"Geng",
"Yuxia",
""
],
[
"Yu",
"Hongtao",
""
],
[
"Bi",
"Zhen",
""
],
[
"Zhang",
"Yichi",
""
],
[
"Yao",
"Zhen",
""
],
[
"Song",
"Wenting",
""
],
[
"Wu",
"Xinliang",
""
],
[
"Yang",
"Yi",
""
],
[
"Chen",
"Mingyi",
""
],
[
"Lian",
"Zhaoyang",
""
],
[
"Li",
"Yingying",
""
],
[
"Cheng",
"Lei",
""
],
[
"Chen",
"Huajun",
""
]
] |
2210.11846 | Jasmina Gajcin | Jasmina Gajcin and Ivana Dusparic | Redefining Counterfactual Explanations for Reinforcement Learning:
Overview, Challenges and Opportunities | 32 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While AI algorithms have shown remarkable success in various fields, their
lack of transparency hinders their application to real-life tasks. Although
explanations targeted at non-experts are necessary for user trust and human-AI
collaboration, the majority of explanation methods for AI are focused on
developers and expert users. Counterfactual explanations are local explanations
that offer users advice on what can be changed in the input for the output of
the black-box model to change. Counterfactuals are user-friendly and provide
actionable advice for achieving the desired output from the AI system. While
extensively researched in supervised learning, there are few methods applying
them to reinforcement learning (RL). In this work, we explore the reasons for
the underrepresentation of a powerful explanation method in RL. We start by
reviewing the current work in counterfactual explanations in supervised
learning. Additionally, we explore the differences between counterfactual
explanations in supervised learning and RL and identify the main challenges
that prevent the adoption of methods from supervised learning in reinforcement
learning.
Finally, we redefine counterfactuals for RL and propose research directions for
implementing counterfactuals in RL.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2022 09:50:53 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Feb 2024 15:28:52 GMT"
}
] | 1,707,696,000,000 | [
[
"Gajcin",
"Jasmina",
""
],
[
"Dusparic",
"Ivana",
""
]
] |
2210.12026 | Fabian Neuhaus | Fabian Neuhaus and Janna Hastings | Ontology Development is Consensus Creation, Not (Merely) Representation | null | Applied Ontology, vol. 17, no. 4, pp. 495-513, 2022 | 10.3233/AO-220273 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology development methodologies emphasise knowledge gathering from domain
experts and documentary resources, and knowledge representation using an
ontology language such as OWL or FOL. However, working ontologists are often
surprised by how challenging and slow it can be to develop ontologies. Here,
with a particular emphasis on the sorts of ontologies that are content-heavy
and intended to be shared across a community of users (reference ontologies),
we propose that a significant and heretofore under-emphasised contributor of
challenges during ontology development is the need to create, or bring about,
consensus in the face of disagreement. For this reason reference ontology
development cannot be automated, at least within the limitations of existing AI
approaches. Further, for the same reason ontologists are required to have
specific social-negotiating skills which are currently lacking in most
technical curricula.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2022 15:16:28 GMT"
}
] | 1,673,395,200,000 | [
[
"Neuhaus",
"Fabian",
""
],
[
"Hastings",
"Janna",
""
]
] |
2210.12080 | Gyunam Park | Gyunam Park and Wil. M. P. van der Aalst | Monitoring Constraints in Business Processes Using Object-Centric
Constraint Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint monitoring aims to monitor the violation of constraints in
business processes, e.g., an invoice should be cleared within 48 hours after
the corresponding goods receipt, by analyzing event data. Existing techniques
for constraint monitoring assume that a single case notion exists in a business
process, e.g., a patient in a healthcare process, and each event is associated
with the case notion. However, in reality, business processes are
object-centric, i.e., multiple case notions (objects) exist, and an event may
be associated with multiple objects. For instance, an Order-To-Cash (O2C)
process involves order, item, delivery, etc., and they interact when executing
an event, e.g., packing multiple items together for a delivery. The existing
techniques produce misleading insights when applied to such object-centric
business processes. In this work, we propose an approach to monitoring
constraints in object-centric business processes. To this end, we introduce
Object-Centric Constraint Graphs (OCCGs) to represent constraints that consider
the interaction of objects. Next, we evaluate the constraints represented by
OCCGs by analyzing Object-Centric Event Logs (OCELs) that store the interaction
of different objects in events. We have implemented a web application to
support the proposed approach and conducted two case studies using a real-life
SAP ERP system.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2022 16:11:29 GMT"
}
] | 1,666,569,600,000 | [
[
"Park",
"Gyunam",
""
],
[
"van der Aalst",
"Wil. M. P.",
""
]
] |
2210.12114 | Minal Suresh Patil | Minal Suresh Patil | Modelling Control Arguments via Cooperation Logic in Unforeseen
Scenarios | Thinking Fast and Slow in AI - AAAI 2022 Fall Symposium Series | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The intent of control argumentation frameworks is to specifically model
strategic scenarios from the perspective of an agent by extending the standard
model of argumentation framework in a way that takes unquantified uncertainty
regarding arguments and attacks into account. They do not, however, adequately
account for coalition formation and interactions among a set of agents in an
uncertain environment. To address this challenge, we propose a formalism of a
multi-agent scenario via cooperation logic and investigate agents' strategies
and actions in a dynamic environment.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2022 17:14:41 GMT"
}
] | 1,666,569,600,000 | [
[
"Patil",
"Minal Suresh",
""
]
] |
2210.12324 | Hisashi Kashima | Hisashi Kashima, Satoshi Oyama, Hiromi Arai, and Junichiro Mori | Trustworthy Human Computation: A Survey | 35 pages, 2 figures, 9 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human computation is an approach to solving problems that prove difficult
using AI only, and involves the cooperation of many humans. Because human
computation requires close engagement with both "human populations as users"
and "human populations as driving forces," establishing mutual trust between AI
and humans is an important issue to further the development of human
computation. This survey lays the groundwork for the realization of trustworthy
human computation. First, the trustworthiness of human computation as computing
systems, that is, trust offered by humans to AI, is examined using the RAS
(Reliability, Availability, and Serviceability) analogy, which defines measures
of trustworthiness in conventional computer systems. Next, the social
trustworthiness provided by human computation systems to users or participants
is discussed from the perspective of AI ethics, including fairness, privacy,
and transparency. Then, we consider human--AI collaboration based on two-way
trust, in which humans and AI build mutual trust and accomplish difficult tasks
through reciprocal collaboration. Finally, future challenges and research
directions for realizing trustworthy human computation are discussed.
| [
{
"version": "v1",
"created": "Sat, 22 Oct 2022 01:30:50 GMT"
}
] | 1,666,656,000,000 | [
[
"Kashima",
"Hisashi",
""
],
[
"Oyama",
"Satoshi",
""
],
[
"Arai",
"Hiromi",
""
],
[
"Mori",
"Junichiro",
""
]
] |
2210.12373 | Caesar Wu | Caesar Wu, Kotagiri Ramamohanarao, Rui Zhang, Pascal Bouvry | Strategic Decisions Survey, Taxonomy, and Future Directions from
Artificial Intelligence Perspective | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Strategic Decision-Making is always challenging because it is inherently
uncertain, ambiguous, risky, and complex. It is the art of possibility. We
develop a systematic taxonomy of decision-making frames that consists of 6
bases, 18 categorical, and 54 frames. We aim to lay out the computational
foundation that is possible to capture a comprehensive landscape view of a
strategic problem. Compared with traditional models, it covers irrational,
non-rational and rational frames c dealing with certainty, uncertainty,
complexity, ambiguity, chaos, and ignorance.
| [
{
"version": "v1",
"created": "Sat, 22 Oct 2022 07:01:10 GMT"
}
] | 1,666,656,000,000 | [
[
"Wu",
"Caesar",
""
],
[
"Ramamohanarao",
"Kotagiri",
""
],
[
"Zhang",
"Rui",
""
],
[
"Bouvry",
"Pascal",
""
]
] |
2210.12556 | Sigurdur Adalgeirsson | Sigurdur Orn Adalgeirsson, Cynthia Breazeal | B$^3$RTDP: A Belief Branch and Bound Real-Time Dynamic Programming
Approach to Solving POMDPs | Originally authored in 2014-2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially Observable Markov Decision Processes (POMDPs) offer a promising
world representation for autonomous agents, as they can model both transitional
and perceptual uncertainties. Calculating the optimal solution to POMDP
problems can be computationally expensive as they require reasoning over the
(possibly infinite) space of beliefs. Several approaches have been proposed to
overcome this difficulty, such as discretizing the belief space, point-based
belief sampling, and Monte Carlo tree search. The Real-Time Dynamic Programming
approach of the RTDP-Bel algorithm approximates the value function by storing
it in a hashtable with discretized belief keys. We propose an extension to the
RTDP-Bel algorithm which we call Belief Branch and Bound RTDP (B$^3$RTDP). Our
algorithm uses a bounded value function representation and takes advantage of
this in two novel ways: a search-bounding technique based on action selection
convergence probabilities, and a method for leveraging early action convergence
called the \textit{Convergence Frontier}. Lastly, we empirically demonstrate
that B$^3$RTDP can achieve greater returns in less time than the
state-of-the-art SARSOP solver on known POMDP problems.
| [
{
"version": "v1",
"created": "Sat, 22 Oct 2022 21:42:59 GMT"
}
] | 1,666,656,000,000 | [
[
"Adalgeirsson",
"Sigurdur Orn",
""
],
[
"Breazeal",
"Cynthia",
""
]
] |
2210.12896 | Shijie Han | Shijie Han, Siyuan Li, Bo An, Wei Zhao, Peng Liu | Classifying Ambiguous Identities in Hidden-Role Stochastic Games with
Multi-Agent Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent reinforcement learning (MARL) is a prevalent learning paradigm
for solving stochastic games. In most MARL studies, agents in a game are
defined as teammates or enemies beforehand, and the relationships among the
agents remain fixed throughout the game. However, in real-world problems, the
agent relationships are commonly unknown in advance or dynamically changing.
Many multi-party interactions start off by asking: who is on my team? This
question arises whether it is the first day at the stock exchange or the
kindergarten. Therefore, training policies for such situations in the face of
imperfect information and ambiguous identities is an important problem that
needs to be addressed. In this work, we develop a novel identity detection
reinforcement learning (IDRL) framework that allows an agent to dynamically
infer the identities of nearby agents and select an appropriate policy to
accomplish the task. In the IDRL framework, a relation network is constructed
to deduce the identities of other agents by observing the behaviors of the
agents. A danger network is optimized to estimate the risk of false-positive
identifications. Beyond that, we propose an intrinsic reward that balances the
need to maximize external rewards and accurate identification. After
identifying the cooperation-competition pattern among the agents, IDRL applies
one of the off-the-shelf MARL methods to learn the policy. To evaluate the
proposed method, we conduct experiments on the Red-10 card-shedding game, and
the results show that IDRL achieves superior performance over other
state-of-the-art MARL methods. Impressively, the relation network identifies
the identities of agents on par with top human players; the
danger network reasonably avoids the risk of imperfect identification. The code
to reproduce all the reported results is available online at
https://github.com/MR-BENjie/IDRL.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2022 00:54:59 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 11:37:38 GMT"
}
] | 1,678,147,200,000 | [
[
"Han",
"Shijie",
""
],
[
"Li",
"Siyuan",
""
],
[
"An",
"Bo",
""
],
[
"Zhao",
"Wei",
""
],
[
"Liu",
"Peng",
""
]
] |
2210.13207 | Dalton Lunga | Dalton Lunga, Yingjie Hu, Shawn Newsam, Song Gao, Bruno Martins, Lexie
Yang, Xueqing Deng | GeoAI at ACM SIGSPATIAL: The New Frontier of Geospatial Artificial
Intelligence Research | 12 pages, 1 figure, 1 table | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Geospatial Artificial Intelligence (GeoAI) is an interdisciplinary field
enjoying tremendous adoption. However, the efficient design and implementation
of GeoAI systems face many open challenges. This is mainly due to
non-standardized approaches to artificial intelligence tool development,
inadequate platforms, and a lack of multidisciplinary engagements, which all
motivate domain experts to seek a shared stage with scientists and engineers to
solve problems of significant impact on society. Since its inception in 2017,
the GeoAI series of workshops has been co-located with the Association for
Computing Machinery International Conference on Advances in Geographic
Information Systems. The workshop series has fostered a nexus for
geoscientists, computer scientists, engineers, entrepreneurs, and
decision-makers, from academia, industry, and government to engage in
artificial intelligence, spatiotemporal data computing, and geospatial data
science research, motivated by various challenges. In this article, we revisit
and discuss the state of GeoAI open research directions, the recent
developments, and an emerging agenda calling for a continued cross-disciplinary
community engagement.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2022 18:02:17 GMT"
}
] | 1,666,656,000,000 | [
[
"Lunga",
"Dalton",
""
],
[
"Hu",
"Yingjie",
""
],
[
"Newsam",
"Shawn",
""
],
[
"Gao",
"Song",
""
],
[
"Martins",
"Bruno",
""
],
[
"Yang",
"Lexie",
""
],
[
"Deng",
"Xueqing",
""
]
] |
2210.14640 | Aurelien Delage | Aur\'elien Delage, Olivier Buffet, Jilles S. Dibangoye, Abdallah
Saffidine | HSVI can solve zero-sum Partially Observable Stochastic Games | 42 pages, 2 algorithms. arXiv admin note: substantial text overlap
with arXiv:2110.14529 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | State-of-the-art methods for solving 2-player zero-sum imperfect information
games rely on linear programming or regret minimization, though not on dynamic
programming (DP) or heuristic search (HS), while the latter are often at the
core of state-of-the-art solvers for other sequential decision-making problems.
In partially observable or collaborative settings (e.g., POMDPs and Dec-
POMDPs), DP and HS require introducing an appropriate statistic that induces a
fully observable problem as well as bounding (convex) approximators of the
optimal value function. This approach has succeeded in some subclasses of
2-player zero-sum partially observable stochastic games (zs-POSGs) as well,
but how to apply it in the general case still remains an open question. We
answer it by (i) rigorously defining an equivalent game to work with, (ii)
proving mathematical properties of the optimal value function that allow
deriving bounds that come with solution strategies, (iii) proposing for the
first time an HSVI-like solver that provably converges to an $\epsilon$-optimal
solution in finite time, and (iv) empirically analyzing it. This opens the door
to a novel family of promising approaches complementing those relying on linear
programming or iterative methods.
| [
{
"version": "v1",
"created": "Wed, 26 Oct 2022 11:41:57 GMT"
}
] | 1,666,828,800,000 | [
[
"Delage",
"Aurélien",
""
],
[
"Buffet",
"Olivier",
""
],
[
"Dibangoye",
"Jilles S.",
""
],
[
"Saffidine",
"Abdallah",
""
]
] |
2210.15096 | Utkarsh Soni | Utkarsh Soni, Nupur Thakur, Sarath Sreedharan, Lin Guan, Mudit Verma,
Matthew Marquez, Subbarao Kambhampati | Towards customizable reinforcement learning agents: Enabling preference
specification through online vocabulary expansion | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There is a growing interest in developing automated agents that can work
alongside humans. In addition to completing the assigned task, such an agent
will undoubtedly be expected to behave in a manner that is preferred by the
human. This requires the human to communicate their preferences to the agent.
To achieve this, the current approaches either require the users to specify the
reward function or interactively learn the preference from queries that
ask the user to compare behavior. The former approach can be challenging if the
internal representation used by the agent is inscrutable to the human while the
latter is unnecessarily cumbersome for the user if their preference can be
specified more easily in symbolic terms. In this work, we propose PRESCA
(PREference Specification through Concept Acquisition), a system that allows
users to specify their preferences in terms of concepts that they understand.
PRESCA maintains a set of such concepts in a shared vocabulary. If the relevant
concept is not in the shared vocabulary, then it is learned. To make learning a
new concept more feedback efficient, PRESCA leverages causal associations
between the target concept and concepts that are already known. In addition, we
use a novel data augmentation approach to further reduce required feedback. We
evaluate PRESCA by using it on a Minecraft environment and show that it can
effectively align the agent with the user's preference.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2022 00:54:14 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 20:10:24 GMT"
}
] | 1,675,296,000,000 | [
[
"Soni",
"Utkarsh",
""
],
[
"Thakur",
"Nupur",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Guan",
"Lin",
""
],
[
"Verma",
"Mudit",
""
],
[
"Marquez",
"Matthew",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2210.15236 | Federico Cabitza | Federico Cabitza and Matteo Cameli and Andrea Campagner and Chiara
Natali and Luca Ronzio | Painting the black box white: experimental findings from applying XAI to
an ECG reading setting | 15 pages, 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The shift from symbolic AI systems to black-box, sub-symbolic, and
statistical ones has motivated a rapid increase in the interest toward
explainable AI (XAI), i.e. approaches to make black-box AI systems explainable
to human decision makers with the aim of making these systems more acceptable
and more usable tools and supports. However, we make the point that, rather
than always making black boxes transparent, these approaches are at risk of
\emph{painting the black boxes white}, thus failing to provide a level of
transparency that would increase the system's usability and comprehensibility;
or, even, at risk of generating new errors, in what we termed the
\emph{white-box paradox}. To address these usability-related issues, in this
work we focus on the cognitive dimension of users' perception of explanations
and XAI systems. To this aim, we designed and conducted a questionnaire-based
experiment by which we involved 44 cardiology residents and specialists in an
AI-supported ECG reading task. In doing so, we investigated different research
questions concerning the relationship between users' characteristics (e.g.
expertise) and their perception of AI and XAI systems, including their trust,
the perceived explanations' quality and their tendency to defer the decision
process to automation (i.e. technology dominance), as well as the mutual
relationships among these different dimensions. Our findings provide a
contribution to the evaluation of AI-based support systems from a Human-AI
interaction-oriented perspective and lay the ground for further investigation
of XAI and its effects on decision making and user experience.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2022 07:47:50 GMT"
}
] | 1,666,915,200,000 | [
[
"Cabitza",
"Federico",
""
],
[
"Cameli",
"Matteo",
""
],
[
"Campagner",
"Andrea",
""
],
[
"Natali",
"Chiara",
""
],
[
"Ronzio",
"Luca",
""
]
] |
2210.15507 | Mieczys{\l}aw K{\l}opotek | Mieczys{\l}aw A. K{\l}opotek and Robert A. K{\l}opotek | How To Overcome Richness Axiom Fallacy | 18 pages, 3 figures, 3 tables, an extended version of ISMIS2022 paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper points out the serious problems implied by the richness axiom in
Kleinberg's axiomatic system and suggests resolutions. The richness axiom
induces a learnability problem in general and leads to conflicts with the
consistency axiom. As a resolution, learnability constraints and the usage of
centric consistency or restriction of the domain of considered clusterings to
super-ball-clusterings are proposed.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2022 14:39:48 GMT"
}
] | 1,666,915,200,000 | [
[
"Kłopotek",
"Mieczysław A.",
""
],
[
"Kłopotek",
"Robert A.",
""
]
] |
2210.15637 | Wensheng Gan | Lili Chen, Wensheng Gan, Chien-Ming Chen | Towards Correlated Sequential Rules | Preprint. 7 figures, 6 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of high-utility sequential pattern mining (HUSPM) is to efficiently
discover profitable or useful sequential patterns in a large number of
sequences. However, simply being aware of utility-eligible patterns is
insufficient for making predictions. To compensate for this deficiency,
high-utility sequential rule mining (HUSRM) is designed to explore the
confidence or probability of predicting the occurrence of consequence
sequential patterns based on the appearance of premise sequential patterns. It
has numerous applications, such as product recommendation and weather
prediction. However, the existing algorithm, known as HUSRM, is limited to
extracting all eligible rules while neglecting the correlation between the
generated sequential rules. To address this issue, we propose a novel algorithm
called correlated high-utility sequential rule miner (CoUSR) to integrate the
concept of correlation into HUSRM. The proposed algorithm requires not only
that each rule be correlated but also that the patterns in the antecedent and
consequent of the high-utility sequential rule be correlated. The algorithm
adopts a utility-list structure to avoid multiple database scans. Additionally,
several pruning strategies are used to improve the algorithm's efficiency and
performance. Based on several real-world datasets, subsequent experiments
demonstrated that CoUSR is effective and efficient in terms of operation time
and memory consumption.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2022 17:27:23 GMT"
}
] | 1,666,915,200,000 | [
[
"Chen",
"Lili",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Chen",
"Chien-Ming",
""
]
] |
2210.15767 | Michael Littman | Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan
Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles
Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie
Shah, Steven Sloman, Shannon Vallor, Toby Walsh | Gathering Strength, Gathering Storms: The One Hundred Year Study on
Artificial Intelligence (AI100) 2021 Study Panel Report | 82 pages,
https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-study | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In September 2021, the "One Hundred Year Study on Artificial Intelligence"
project (AI100) issued the second report of its planned long-term periodic
assessment of artificial intelligence (AI) and its impact on society. It was
written by a panel of 17 study authors, each of whom is deeply rooted in AI
research, chaired by Michael Littman of Brown University. The report, entitled
"Gathering Strength, Gathering Storms," answers a set of 14 questions probing
critical areas of AI development addressing the major risks and dangers of AI,
its effects on society, its public perception and the future of the field. The
report concludes that AI has made a major leap from the lab to people's lives
in recent years, which increases the urgency to understand its potential
negative effects. The questions were developed by the AI100 Standing Committee,
chaired by Peter Stone of the University of Texas at Austin, consisting of a
group of AI leaders with expertise in computer science, sociology, ethics,
economics, and other disciplines.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2022 21:00:36 GMT"
}
] | 1,667,174,400,000 | [
[
"Littman",
"Michael L.",
""
],
[
"Ajunwa",
"Ifeoma",
""
],
[
"Berger",
"Guy",
""
],
[
"Boutilier",
"Craig",
""
],
[
"Currie",
"Morgan",
""
],
[
"Doshi-Velez",
"Finale",
""
],
[
"Hadfield",
"Gillian",
""
],
[
"Horowitz",
"Michael C.",
""
],
[
"Isbell",
"Charles",
""
],
[
"Kitano",
"Hiroaki",
""
],
[
"Levy",
"Karen",
""
],
[
"Lyons",
"Terah",
""
],
[
"Mitchell",
"Melanie",
""
],
[
"Shah",
"Julie",
""
],
[
"Sloman",
"Steven",
""
],
[
"Vallor",
"Shannon",
""
],
[
"Walsh",
"Toby",
""
]
] |