id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2112.10433 | Dongfang Li | Junying Chen, Dongfang Li, Qingcai Chen, Wenxiu Zhou, Xin Liu | Diaformer: Automatic Diagnosis via Symptoms Sequence Generation | AAAI 2022; The first two authors contributed equally to this paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic diagnosis has attracted increasing attention but remains
challenging due to multi-step reasoning. Recent works usually address it with
reinforcement learning methods. However, these methods show low efficiency and
require task-specific reward functions. Since the conversation between
doctor and patient allows the doctor to probe for symptoms and make a diagnosis, the
diagnosis process can naturally be seen as the generation of a sequence
including symptoms and diagnoses. Inspired by this, we reformulate automatic
diagnosis as a symptoms Sequence Generation (SG) task and propose a simple but
effective automatic Diagnosis model based on Transformer (Diaformer). We
first design the symptom attention framework to learn the generation of
symptom inquiries and disease diagnoses. To alleviate the discrepancy between
sequential generation and the unordered nature of implicit symptoms, we further design
three orderless training mechanisms. Experiments on three public datasets show
that our model outperforms baselines on disease diagnosis by 1%, 6% and 11.5%
with the highest training efficiency. Detailed analysis of symptom inquiry
prediction demonstrates the potential of applying symptom sequence
generation to automatic diagnosis.
| [
{
"version": "v1",
"created": "Mon, 20 Dec 2021 10:26:59 GMT"
}
] | 1,640,044,800,000 | [
[
"Chen",
"Junying",
""
],
[
"Li",
"Dongfang",
""
],
[
"Chen",
"Qingcai",
""
],
[
"Zhou",
"Wenxiu",
""
],
[
"Liu",
"Xin",
""
]
] |
2112.10892 | Thierry Petit | Thierry Petit and Randy J. Zauhar | A Constraint Programming Approach to Weighted Isomorphic Mapping of
Fragment-based Shape Signatures | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fragment-based shape signature techniques have proven to be powerful tools
for computer-aided drug design. They allow scientists to search for target
molecules with some similarity to a known active compound. They do not require
reference to the full underlying chemical structure, which is essential to deal
with chemical databases containing millions of compounds. However, finding the
optimal match of a part of the fragmented compound can be time-consuming. In
this paper, we use constraint programming to solve this specific problem. It
involves finding a weighted assignment of fragments subject to connectivity
constraints. Our experiments demonstrate the practical relevance of our
approach and open new perspectives, including generating multiple, diverse
solutions. Our approach constitutes an original use of a constraint solver in a
real-time setting, where propagation allows us to avoid enumerating weighted
paths. The model must remain robust to the addition of constraints that make some
instances intractable. This particular context requires unusual
criteria for the choice of the model: lightweight, standard propagation
algorithms and data structures without prohibitive constant costs. The objective is
not to design new, complex algorithms to solve difficult instances.
| [
{
"version": "v1",
"created": "Mon, 20 Dec 2021 22:35:36 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jan 2022 16:58:59 GMT"
}
] | 1,641,340,800,000 | [
[
"Petit",
"Thierry",
""
],
[
"Zauhar",
"Randy J.",
""
]
] |
2112.11023 | Chen Jie | Jie Chen and Lifen Jiang and Chunmei Ma and Huazhi Sun | Robust Recommendation with Implicit Feedback via Eliminating the Effects
of Unexpected Behaviors | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the implicit feedback recommendation, incorporating short-term preference
into recommender systems has attracted increasing attention in recent years.
However, unexpected behaviors in historical interactions, such as clicking some
items by accident, do not accurately reflect users' inherent preferences. Existing
studies fail to model the effects of these unexpected behaviors and thus achieve
inferior recommendation performance. In this paper, we propose a
Multi-Preferences Model (MPM) to eliminate the effects of unexpected behaviors.
MPM first extracts the users' instant preferences from their recent historical
interactions by a fine-grained preference module. Then an unexpected-behaviors
detector is trained to judge whether these instant preferences are biased by
unexpected behaviors. We also integrate the user's general preference into MPM.
Finally, an output module eliminates the effects of unexpected
behaviors and integrates all the information to make a final recommendation. We
conduct extensive experiments on two datasets, one from the movie domain and one from e-retailing,
demonstrating significant improvements of our model over state-of-the-art
methods. The experimental results show that MPM achieves a substantial improvement in
HR@10 and NDCG@10, with relative increases of 3.643% and 4.107% on average compared with
the AttRec model. We publish our code at
https://github.com/chenjie04/MPM/.
| [
{
"version": "v1",
"created": "Tue, 21 Dec 2021 07:29:23 GMT"
}
] | 1,640,131,200,000 | [
[
"Chen",
"Jie",
""
],
[
"Jiang",
"Lifen",
""
],
[
"Ma",
"Chunmei",
""
],
[
"Sun",
"Huazhi",
""
]
] |
2112.11701 | Rui Zhao | Rui Zhao, Jinming Song, Yufeng Yuan, Hu Haifeng, Yang Gao, Yi Wu,
Zhongqian Sun, Yang Wei | Maximum Entropy Population-Based Training for Zero-Shot Human-AI
Coordination | Accepted by NeurIPS Cooperative AI Workshop, 2021, link:
https://www.cooperativeai.com/workshop/neurips-2021#Workshop-Papers. Under
review at a conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of training a Reinforcement Learning (RL) agent that is
collaborative with humans without using any human data. Although such agents
can be obtained through self-play training, they can suffer significantly from
distributional shift when paired with unencountered partners, such as humans.
To mitigate this distributional shift, we propose Maximum Entropy
Population-based training (MEP). In MEP, agents in the population are trained
with our derived Population Entropy bonus to promote both pairwise diversity
between agents and individual diversity of agents themselves, and a common best
agent is trained by pairing with agents in this diversified population via
prioritized sampling. The prioritization is dynamically adjusted based on the
training progress. We demonstrate the effectiveness of our method MEP, with
comparison to Self-Play PPO (SP), Population-Based Training (PBT), Trajectory
Diversity (TrajeDi), and Fictitious Co-Play (FCP) in the Overcooked game
environment, with partners being human proxy models and real humans. A
supplementary video showing experimental results is available at
https://youtu.be/Xh-FKD0AAKE.
| [
{
"version": "v1",
"created": "Wed, 22 Dec 2021 07:19:36 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 06:43:58 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Jun 2022 05:15:20 GMT"
}
] | 1,656,374,400,000 | [
[
"Zhao",
"Rui",
""
],
[
"Song",
"Jinming",
""
],
[
"Yuan",
"Yufeng",
""
],
[
"Haifeng",
"Hu",
""
],
[
"Gao",
"Yang",
""
],
[
"Wu",
"Yi",
""
],
[
"Sun",
"Zhongqian",
""
],
[
"Wei",
"Yang",
""
]
] |
2112.11937 | Aizaz Sharif | Aizaz Sharif, Dusica Marijan | Adversarial Deep Reinforcement Learning for Improving the Robustness of
Multi-agent Autonomous Driving Policies | null | null | 10.1109/APSEC57359.2022.00018 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous cars are well known for being vulnerable to adversarial attacks
that can compromise the safety of the car and pose danger to other road users.
To effectively defend against adversaries, it is necessary not only to test
autonomous cars to find driving errors but also to improve the robustness of the
cars to these errors. To this end, in this paper, we propose a two-step
methodology for autonomous cars that consists of (i) finding failure states in
autonomous cars by training the adversarial driving agent, and (ii) improving
the robustness of autonomous cars by retraining them with effective adversarial
inputs. Our methodology supports testing autonomous cars in a multi-agent
environment, where we train and compare adversarial car policies with two custom
reward functions to test the driving control decisions of autonomous cars. We
run experiments in a vision-based high-fidelity urban driving simulated
environment. Our results show that adversarial testing can be used for finding
erroneous autonomous driving behavior, followed by adversarial training for
improving the robustness of deep reinforcement learning-based autonomous
driving policies. We demonstrate that the autonomous cars retrained using the
effective adversarial inputs noticeably increase the performance of their
driving policies in terms of reduced collision and off-road steering errors.
| [
{
"version": "v1",
"created": "Wed, 22 Dec 2021 15:00:16 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2022 10:16:54 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Feb 2023 14:11:43 GMT"
}
] | 1,677,024,000,000 | [
[
"Sharif",
"Aizaz",
""
],
[
"Marijan",
"Dusica",
""
]
] |
2112.12754 | Ronald Brachman | Ronald J. Brachman (Jacobs Technion-Cornell Institute and Cornell
University), Hector J. Levesque (University of Toronto) | Toward a New Science of Common Sense | Initial version published in Proceedings of AAAI-22, the Thirty-Sixth
AAAI Conference on Artificial Intelligence. Original version extended
slightly to include acknowledgement of more recent work, including new
references, and to clarify remarks in a few paragraphs | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Common sense has always been of interest in Artificial Intelligence, but has
rarely taken center stage. Despite its mention in one of John McCarthy's
earliest papers and years of work by dedicated researchers, arguably no AI
system with a serious amount of general common sense has ever emerged. Why is
that? What's missing? Examples of AI systems' failures of common sense abound,
and they point to AI's frequent focus on expertise as the cause. Those
attempting to break the resulting brittleness barrier, even in the context of
modern deep learning, have tended to invest their energy in large numbers of
small bits of commonsense knowledge. While important, all the commonsense
knowledge fragments in the world don't add up to a system that actually
demonstrates common sense in a human-like way. We advocate examining common
sense from a broader perspective than in the past. Common sense should be
considered in the context of a full cognitive system with history, goals,
desires, and drives, not just in isolated circumscribed examples. A fresh look
is needed: common sense is worthy of its own dedicated scientific exploration.
| [
{
"version": "v1",
"created": "Thu, 23 Dec 2021 18:17:47 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Feb 2022 23:34:00 GMT"
}
] | 1,644,278,400,000 | [
[
"Brachman",
"Ronald J.",
"",
"Jacobs Technion-Cornell Institute and Cornell\n University"
],
[
"Levesque",
"Hector J.",
"",
"University of Toronto"
]
] |
2112.12768 | Bikram Bhuyan Mr | Bikram Pratim Bhuyan, Ravi Tomar, Maanak Gupta and Amar Ramdane-Cherif | An Ontological Knowledge Representation for Smart Agriculture | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In order to provide the agricultural industry with the infrastructure it
needs to take advantage of advanced technologies such as big data, the cloud,
and the Internet of Things (IoT), smart farming is a management concept that
focuses on providing the infrastructure necessary to track, monitor, automate,
and analyse operations. Representing the knowledge extracted from the collected
primary data is of utmost importance. An agricultural ontology framework for
smart agriculture systems is presented in this study. The knowledge graph is
represented as a lattice to capture and perform reasoning on spatio-temporal
agricultural data.
| [
{
"version": "v1",
"created": "Tue, 21 Dec 2021 14:58:04 GMT"
}
] | 1,640,304,000,000 | [
[
"Bhuyan",
"Bikram Pratim",
""
],
[
"Tomar",
"Ravi",
""
],
[
"Gupta",
"Maanak",
""
],
[
"Ramdane-Cherif",
"Amar",
""
]
] |
2112.12876 | Denghui Zhang | Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong Lin, Hui Xiong | Learning to Walk with Dual Agents for Knowledge Graph Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph walking based on reinforcement learning (RL) has shown great success in
navigating an agent to automatically complete various reasoning tasks over an
incomplete knowledge graph (KG) by exploring multi-hop relational paths.
However, existing multi-hop reasoning approaches only work well on short
reasoning paths and tend to miss the target entity with the increasing path
length. This is undesirable for many reasoning tasks in real-world scenarios,
where short paths connecting the source and target entities are not available
in incomplete KGs, and thus the reasoning performances drop drastically unless
the agent is able to seek out more clues from longer paths. To address the
above challenge, in this paper, we propose a dual-agent reinforcement learning
framework, which trains two agents (GIANT and DWARF) to walk over a KG jointly
and search for the answer collaboratively. Our approach tackles the reasoning
challenge in long paths by assigning one of the agents (GIANT) to search on
cluster-level paths quickly and provide stage-wise hints for the other agent
(DWARF). Finally, experimental results on several KG reasoning benchmarks show
that our approach can search answers more accurately and efficiently, and
outperforms existing RL-based methods for long path queries by a large margin.
| [
{
"version": "v1",
"created": "Thu, 23 Dec 2021 23:03:24 GMT"
}
] | 1,640,649,600,000 | [
[
"Zhang",
"Denghui",
""
],
[
"Yuan",
"Zixuan",
""
],
[
"Liu",
"Hao",
""
],
[
"Lin",
"Xiaodong",
""
],
[
"Xiong",
"Hui",
""
]
] |
2112.13477 | Joao Leite | Jo\~ao Leite, Martin Slota | A Brief History of Updates of Answer-Set Programs | To appear in Theory and Practice of Logic Programming (TPLP) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Over the last couple of decades, there has been a considerable effort devoted
to the problem of updating logic programs under the stable model semantics
(a.k.a. answer-set programs) or, in other words, the problem of characterising
the result of bringing up-to-date a logic program when the world it describes
changes. Whereas the state-of-the-art approaches are guided by the same basic
intuitions and aspirations as belief updates in the context of classical logic,
they build upon fundamentally different principles and methods, which have
prevented a unifying framework that could embrace both belief and rule updates.
In this paper, we will overview some of the main approaches and results related
to answer-set programming updates, while pointing out some of the main
challenges that research in this topic has faced.
| [
{
"version": "v1",
"created": "Mon, 27 Dec 2021 01:46:33 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Feb 2022 04:31:38 GMT"
}
] | 1,645,488,000,000 | [
[
"Leite",
"João",
""
],
[
"Slota",
"Martin",
""
]
] |
2112.14243 | Adrian Haret | Adrian Haret, Johannes P. Wallner | An AGM Approach to Revising Preferences | Presented at the NMR 2021 workshop | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We look at preference change arising out of an interaction between two
elements: the first is an initial preference ranking encoding a pre-existing
attitude; the second element is new preference information signaling input from
an authoritative source, which may come into conflict with the initial
preference. The aim is to adjust the initial preference and bring it in line
with the new preference, without having to give up more information than
necessary. We model this process using the formal machinery of belief change,
along the lines of the well-known AGM approach. We propose a set of fundamental
rationality postulates, and derive the main results of the paper: a set of
representation theorems showing that preference change according to these
postulates can be rationalized as a choice function guided by a ranking on the
comparisons in the initial preference order. We conclude by presenting
operators satisfying our proposed postulates. Our approach thus allows us to
situate preference revision within the larger family of belief change
operators.
| [
{
"version": "v1",
"created": "Tue, 28 Dec 2021 18:12:57 GMT"
}
] | 1,640,822,400,000 | [
[
"Haret",
"Adrian",
""
],
[
"Wallner",
"Johannes P.",
""
]
] |
2112.14476 | Alessandro Antonucci | Claudio Bonesana and Francesca Mangili and Alessandro Antonucci | ADAPQUEST: A Software for Web-Based Adaptive Questionnaires based on
Bayesian Networks | Presented at the IJCAI 2021 Workshop on Artificial Intelligence for
Education | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce ADAPQUEST, a software tool written in Java for the development
of adaptive questionnaires based on Bayesian networks. Adaptiveness is intended
here as the dynamical choice of the question sequence on the basis of an
evolving model of the skill level of the test taker. Bayesian networks offer a
flexible and highly interpretable framework to describe such a testing process,
especially when coping with multiple skills. ADAPQUEST embeds dedicated
elicitation strategies to simplify the elicitation of the questionnaire
parameters. An application of this tool for the diagnosis of mental disorders
is also discussed together with some implementation details.
| [
{
"version": "v1",
"created": "Wed, 29 Dec 2021 09:50:44 GMT"
}
] | 1,640,822,400,000 | [
[
"Bonesana",
"Claudio",
""
],
[
"Mangili",
"Francesca",
""
],
[
"Antonucci",
"Alessandro",
""
]
] |
2112.14480 | Luciano Serafini | Luciano Serafini, Raul Barbosa, Jasmin Grosinger, Luca Iocchi,
Christian Napoli, Salvatore Rinzivillo, Jacques Robin, Alessandro Saffiotti,
Teresa Scantamburlo, Peter Schueller, Paolo Traverso, Javier Vazquez-Salceda | On some Foundational Aspects of Human-Centered Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The burgeoning of AI has prompted recommendations that AI techniques should
be "human-centered". However, there is no clear definition of what is meant by
Human Centered Artificial Intelligence, or for short, HCAI. This paper aims to
improve this situation by addressing some foundational aspects of HCAI. To do
so, we introduce the term HCAI agent to refer to any physical or software
computational agent that is equipped with AI components and interacts and/or
collaborates with humans. This article identifies five main conceptual
components that participate in an HCAI agent: Observations, Requirements,
Actions, Explanations and Models. We see the notion of HCAI agent, together
with its components and functions, as a way to bridge the technical and
non-technical discussions on human-centered AI. In this paper, we focus our
analysis on scenarios consisting of a single agent operating in dynamic
environments in the presence of humans.
| [
{
"version": "v1",
"created": "Wed, 29 Dec 2021 09:58:59 GMT"
}
] | 1,640,822,400,000 | [
[
"Serafini",
"Luciano",
""
],
[
"Barbosa",
"Raul",
""
],
[
"Grosinger",
"Jasmin",
""
],
[
"Iocchi",
"Luca",
""
],
[
"Napoli",
"Christian",
""
],
[
"Rinzivillo",
"Salvatore",
""
],
[
"Robin",
"Jacques",
""
],
[
"Saffiotti",
"Alessandro",
""
],
[
"Scantamburlo",
"Teresa",
""
],
[
"Schueller",
"Peter",
""
],
[
"Traverso",
"Paolo",
""
],
[
"Vazquez-Salceda",
"Javier",
""
]
] |
2112.14624 | Jamie Duell | Jamie Duell, Monika Seisenberger, Gert Aarts, Shangming Zhou and Xiuyi
Fan | Towards a Shapley Value Graph Framework for Medical peer-influence | Preliminary work - to be expanded and amended | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | eXplainable Artificial Intelligence (XAI) is a sub-field of Artificial
Intelligence (AI) that is at the forefront of AI research. In XAI, feature
attribution methods produce explanations in the form of feature importance.
People often use feature importance as guidance for intervention. However, a
limitation of existing feature attribution methods is that there is a lack of
explanation towards the consequence of intervention. In other words, although
contribution towards a certain prediction is highlighted by feature attribution
methods, the relation between features and the consequence of intervention is
not studied. The aim of this paper is to introduce a new framework, called the
peer-influence framework, which looks deeper into explanations by using a graph
representation of feature-to-feature interactions to improve the
interpretability of black-box Machine Learning models and inform intervention.
| [
{
"version": "v1",
"created": "Wed, 29 Dec 2021 16:24:50 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 11:45:05 GMT"
}
] | 1,644,364,800,000 | [
[
"Duell",
"Jamie",
""
],
[
"Seisenberger",
"Monika",
""
],
[
"Aarts",
"Gert",
""
],
[
"Zhou",
"Shangming",
""
],
[
"Fan",
"Xiuyi",
""
]
] |
2112.15221 | Tong Mu | Tong Mu, Georgios Theocharous, David Arbour, Emma Brunskill | Constraint Sampling Reinforcement Learning: Incorporating Expertise For
Faster Learning | null | AAAI2022 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reinforcement learning (RL) algorithms are often difficult to deploy
in complex human-facing applications as they may learn slowly and have poor
early performance. To address this, we introduce a practical algorithm for
incorporating human insight to speed learning. Our algorithm, Constraint
Sampling Reinforcement Learning (CSRL), incorporates prior domain knowledge as
constraints/restrictions on the RL policy. It takes in multiple potential
policy constraints to maintain robustness to misspecification of individual
constraints while leveraging helpful ones to learn quickly. Given a base RL
learning algorithm (e.g., UCRL, DQN, Rainbow), we propose an upper confidence with
elimination scheme that leverages the relationship between the constraints, and
their observed performance, to adaptively switch among them. We instantiate our
algorithm with DQN-type algorithms and UCRL as base algorithms, and evaluate
our algorithm in four environments, including three simulators based on real
data: recommendations, educational activity sequencing, and HIV treatment
sequencing. In all cases, CSRL learns a good policy faster than baselines.
| [
{
"version": "v1",
"created": "Thu, 30 Dec 2021 22:02:42 GMT"
}
] | 1,641,168,000,000 | [
[
"Mu",
"Tong",
""
],
[
"Theocharous",
"Georgios",
""
],
[
"Arbour",
"David",
""
],
[
"Brunskill",
"Emma",
""
]
] |
2112.15360 | Madhav Agarwal | Madhav Agarwal and Siddhant Bansal | Making AI 'Smart': Bridging AI and Cognitive Science | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The last two decades have seen tremendous advances in Artificial
Intelligence. The exponential growth in terms of computation capabilities has
given us hope of developing human-like robots. The question is: are we there
yet? Maybe not. With the integration of cognitive science, the 'artificial'
characteristic of Artificial Intelligence (AI) might soon be replaced with
'smart'. This will help develop more powerful AI systems and simultaneously
give us a better understanding of how the human brain works. We discuss the
various possibilities and challenges of bridging these two fields and how they
can benefit each other. We argue that the possibility of AI taking over human
civilization is low as developing such an advanced system requires a better
understanding of the human brain first.
| [
{
"version": "v1",
"created": "Fri, 31 Dec 2021 09:30:44 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 08:15:27 GMT"
}
] | 1,643,760,000,000 | [
[
"Agarwal",
"Madhav",
""
],
[
"Bansal",
"Siddhant",
""
]
] |
2112.15422 | Peter Vamplew | Peter Vamplew, Benjamin J. Smith, Johan Kallstrom, Gabriel Ramos,
Roxana Radulescu, Diederik M. Roijers, Conor F. Hayes, Fredrik Heintz,
Patrick Mannion, Pieter J.K. Libin, Richard Dazeley, Cameron Foale | Scalar reward is not enough: A response to Silver, Singh, Precup and
Sutton (2021) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent paper "Reward is Enough" by Silver, Singh, Precup and Sutton
posits that the concept of reward maximisation is sufficient to underpin all
intelligence, both natural and artificial. We contest the underlying assumption
of Silver et al. that such reward can be scalar-valued. In this paper we
explain why scalar rewards are insufficient to account for some aspects of both
biological and computational intelligence, and argue in favour of explicitly
multi-objective models of reward maximisation. Furthermore, we contend that
even if scalar reward functions can trigger intelligent behaviour in specific
cases, it is still undesirable to use this approach for the development of
artificial general intelligence due to unacceptable risks of unsafe or
unethical behaviour.
| [
{
"version": "v1",
"created": "Thu, 25 Nov 2021 00:58:23 GMT"
}
] | 1,641,168,000,000 | [
[
"Vamplew",
"Peter",
""
],
[
"Smith",
"Benjamin J.",
""
],
[
"Kallstrom",
"Johan",
""
],
[
"Ramos",
"Gabriel",
""
],
[
"Radulescu",
"Roxana",
""
],
[
"Roijers",
"Diederik M.",
""
],
[
"Hayes",
"Conor F.",
""
],
[
"Heintz",
"Fredrik",
""
],
[
"Mannion",
"Patrick",
""
],
[
"Libin",
"Pieter J. K.",
""
],
[
"Dazeley",
"Richard",
""
],
[
"Foale",
"Cameron",
""
]
] |
2112.15424 | Denis Kleyko | Denis Kleyko, Dmitri A. Rachkovskij, Evgeny Osipov, Abbas Rahimi | A Survey on Hyperdimensional Computing aka Vector Symbolic
Architectures, Part II: Applications, Cognitive Models, and Challenges | 37 pages | ACM Computing Surveys (2023), vol. 55, no. 9 | 10.1145/3558000 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is Part II of the two-part comprehensive survey devoted to a computing
framework most commonly known under the names Hyperdimensional Computing and
Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of
computational models that use high-dimensional distributed representations and
rely on the algebraic properties of their key operations to incorporate the
advantages of structured symbolic representations and vector distributed
representations. Holographic Reduced Representations is an influential HDC/VSA
model that is well-known in the machine learning domain and often used to refer
to the whole family. However, for the sake of consistency, we use HDC/VSA to
refer to the field. Part I of this survey covered foundational aspects of the
field, such as the historical context leading to the development of HDC/VSA,
key elements of any HDC/VSA model, known HDC/VSA models, and the transformation
of input data of various types into high-dimensional vectors suitable for
HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in
cognitive computing and architectures, as well as directions for future work.
Most of the applications lie within the Machine Learning/Artificial
Intelligence domain; however, we also cover other applications to provide a
complete picture. The survey is written to be useful for both newcomers and
practitioners.
| [
{
"version": "v1",
"created": "Fri, 12 Nov 2021 18:21:44 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jan 2022 18:00:18 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Aug 2023 14:48:02 GMT"
}
] | 1,690,934,400,000 | [
[
"Kleyko",
"Denis",
""
],
[
"Rachkovskij",
"Dmitri A.",
""
],
[
"Osipov",
"Evgeny",
""
],
[
"Rahimi",
"Abbas",
""
]
] |
2201.00180 | Mohammadhossein Ghahramani | Mohammadhossein Ghahramani, Mengchu Zhou, Anna Molter, Francesco Pilla | IoT-based Route Recommendation for an Intelligent Waste Management
System | 11 | null | 10.1109/JIOT.2021.3132126 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Internet of Things (IoT) is a paradigm characterized by a network of
embedded sensors and services. These sensors are incorporated to collect
various information, track physical conditions, e.g., waste bins' status, and
exchange data with different centralized platforms. The need for such sensors
is increasing; however, proliferation of technologies comes with various
challenges. For example, how can IoT and its associated data be used to enhance
waste management? In smart cities, an efficient waste management system is
crucial. Artificial Intelligence (AI) and IoT-enabled approaches can empower
cities to manage the waste collection. This work proposes an intelligent
approach to route recommendation in an IoT-enabled waste management system
given spatial constraints. It performs a thorough analysis based on AI-based
methods and compares their corresponding results. Our solution is based on a
multiple-level decision-making process in which bins' status and coordinates
are taken into account to address the routing problem. Such AI-based models can
help engineers design a sustainable infrastructure system.
| [
{
"version": "v1",
"created": "Sat, 1 Jan 2022 12:36:22 GMT"
}
] | 1,641,254,400,000 | [
[
"Ghahramani",
"Mohammadhossein",
""
],
[
"Zhou",
"Mengchu",
""
],
[
"Molter",
"Anna",
""
],
[
"Pilla",
"Francesco",
""
]
] |
2201.01027 | Zhou Shufen Zhou Shufen | Benting Wan, Shufen Zhou | An integrating CRITIC-WASPAS group decision making method under
interval-valued q-rung orthogonal fuzzy environment | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper provides a new tool for multi-attribute multi-objective group
decision-making with unknown weights and attributes' weights. An
interval-valued generalized orthogonal fuzzy group decision-making method is
proposed based on the Yager operator and CRITIC-WASPAS method with unknown
weights. The method integrates Yager operator, CRITIC, WASPAS, and interval
value generalized orthogonal fuzzy group. Its merits lie in allowing
decision-makers greater freedom, avoiding bias due to decision-makers' weight,
and yielding accurate evaluation. The research includes: expanding the interval
value generalized distance measurement method for comparison and application of
similarity measurement and decision-making methods; developing a new scoring
function for comparing the size of interval value generalized orthogonal fuzzy
numbers, and furthering existing research. The proposed interval-valued Yager
weighted average operator (IVq-ROFYWA) and Yager weighted geometric average
operator (IVq-ROFYWG) are used for information aggregation. The CRITIC-WASPAS
combines the advantages of CRITIC and WASPAS, which not only works for single
decisions but also serves as the basis of group decisions. The in-depth study
of the decision-maker's weight matrix overcomes the shortcomings of taking the
decision as a whole, and weighs the decision-maker's information aggregation.
Finally, the group decision algorithm is used for hypertension risk management.
The results are consistent with decision-makers' opinions. Practice and case
analysis have proved the effectiveness of the method proposed in this paper. At
the same time, it is compared with other operators and decision-making methods,
which confirms that the method is effective and feasible.
| [
{
"version": "v1",
"created": "Tue, 4 Jan 2022 08:11:28 GMT"
}
] | 1,641,340,800,000 | [
[
"Wan",
"Benting",
""
],
[
"Zhou",
"Shufen",
""
]
] |
2201.03472 | Peter Nightingale | Peter Nightingale | Savile Row Manual | arXiv admin note: substantial text overlap with arXiv:1601.02865 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We describe the constraint modelling tool Savile Row, its input language and
its main features. Savile Row translates a solver-independent constraint
modelling language to the input languages for various solvers including
constraint, SAT, and SMT solvers. After a brief introduction, the manual
describes the Essence Prime language, which is the input language of Savile
Row. Then we describe the functions of the tool, its main features and options
and how to install and use it.
| [
{
"version": "v1",
"created": "Fri, 12 Nov 2021 09:47:55 GMT"
}
] | 1,641,859,200,000 | [
[
"Nightingale",
"Peter",
""
]
] |
2201.03647 | Utkarshani Jaimini | Utkarshani Jaimini, Amit Sheth | CausalKG: Causal Knowledge Graph Explainability using interventional and
counterfactual reasoning | null | IEEE Internet Computing, 26 (1), Jan-Feb 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Humans use causality and hypothetical retrospection in their daily
decision-making, planning, and understanding of life events. The human mind,
while retrospecting a given situation, thinks about questions such as "What was
the cause of the given situation?", "What would be the effect of my action?",
or "Which action led to this effect?". It develops a causal model of the world,
which learns with fewer data points, makes inferences, and contemplates
counterfactual scenarios. These unseen, unknown scenarios are known as
counterfactuals. AI algorithms use a representation based on knowledge graphs
(KG) to represent the concepts of time, space, and facts. A KG is a graphical
data model which captures the semantic relationships between entities such as
events, objects, or concepts. The existing KGs represent causal relationships
extracted from texts based on linguistic patterns of noun phrases for causes
and effects as in ConceptNet and WordNet. The current causality representation
in KGs makes it challenging to support counterfactual reasoning. A richer
representation of causality in AI systems using a KG-based approach is needed
for better explainability, and support for intervention and counterfactual
reasoning, leading to improved understanding of AI systems by humans. The
causality representation requires a higher representation framework to define
the context, the causal information, and the causal effects. The proposed
Causal Knowledge Graph (CausalKG) framework leverages recent progress of
causality and KG towards explainability. CausalKG intends to address the lack
of a domain adaptable causal model and represent the complex causal relations
using the hyper-relational graph representation in the KG. We show that the
CausalKG's interventional and counterfactual reasoning can be used by the AI
system for domain explainability.
| [
{
"version": "v1",
"created": "Thu, 6 Jan 2022 20:27:19 GMT"
}
] | 1,641,945,600,000 | [
[
"Jaimini",
"Utkarshani",
""
],
[
"Sheth",
"Amit",
""
]
] |
2201.03810 | Debo Cheng | Debo Cheng (1) and Jiuyong Li (1) and Lin Liu (1) and Jiji Zhang (2)
and Thuc duy Le (1) and Jixue Liu (1) ((1) STEM, University of South
Australia, Adelaide, SA, Australia, (2) Department of Religion and
Philosophy, Hong Kong Baptist University, Hong Kong, China) | Ancestral Instrument Method for Causal Inference without Complete
Knowledge | 11 pages, 5 figures and 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unobserved confounding is the main obstacle to causal effect estimation from
observational data. Instrumental variables (IVs) are widely used for causal
effect estimation when there exist latent confounders. With the standard IV
method, when a given IV is valid, unbiased estimation can be obtained, but the
validity requirement on a standard IV is strict and untestable. Conditional IVs
have been proposed to relax the requirement of standard IVs by conditioning on
a set of observed variables (known as a conditioning set for a conditional IV).
However, the criterion for finding a conditioning set for a conditional IV
needs a directed acyclic graph (DAG) representing the causal relationships of
both observed and unobserved variables. This makes it challenging to discover a
conditioning set directly from data. In this paper, by leveraging maximal
ancestral graphs (MAGs) for causal inference with latent variables, we study
the graphical properties of ancestral IVs, a type of conditional IVs using
MAGs, and develop the theory to support data-driven discovery of the
conditioning set for a given ancestral IV in data under the pretreatment
variable assumption. Based on the theory, we develop an algorithm for unbiased
causal effect estimation with a given ancestral IV and observational data.
Extensive experiments on synthetic and real-world datasets demonstrate the
performance of the algorithm in comparison with existing IV methods.
| [
{
"version": "v1",
"created": "Tue, 11 Jan 2022 07:02:16 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Dec 2023 23:39:15 GMT"
}
] | 1,702,339,200,000 | [
[
"Cheng",
"Debo",
""
],
[
"Li",
"Jiuyong",
""
],
[
"Liu",
"Lin",
""
],
[
"Zhang",
"Jiji",
""
],
[
"Le",
"Thuc duy",
""
],
[
"Liu",
"Jixue",
""
]
] |
2201.03824 | Rahma Dandan | Rahma Dandan, Sylvie Despres, Karima Sedki | Acquisition and Representation of User Preferences Guided by an Ontology | in French, JFO 2016 - 6{\`e}mes Journ{\'e}es Francophones sur les
Ontologies, Nov 2016, Bordeaux, France | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our food preferences guide our food choices and in turn affect our personal
health and our social life. In this paper, we adopt an approach using a domain
ontology expressed in OWL2 to support the acquisition and representation of
preferences in formalism CP-Net. Specifically, we present the construction of
the domain ontology and questionnaire design to acquire and represent the
preferences. The acquisition and representation of preferences are implemented
in the context of a university canteen. Our main contribution in this preliminary
work is to acquire preferences and enrich the preference model with domain
knowledge represented in the ontology.
| [
{
"version": "v1",
"created": "Tue, 11 Jan 2022 08:09:08 GMT"
}
] | 1,641,945,600,000 | [
[
"Dandan",
"Rahma",
""
],
[
"Despres",
"Sylvie",
""
],
[
"Sedki",
"Karima",
""
]
] |
2201.04204 | Devleena Das | Devleena Das, Been Kim, Sonia Chernova | Subgoal-Based Explanations for Unreliable Intelligent Decision Support
Systems | Accepted to 2023 International Conference on Intelligent User
Interfaces | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent decision support (IDS) systems leverage artificial intelligence
techniques to generate recommendations that guide human users through the
decision making phases of a task. However, a key challenge is that IDS systems
are not perfect, and in complex real-world scenarios may produce incorrect
output or fail to work altogether. The field of explainable AI planning (XAIP)
has sought to develop techniques that make the decision making of sequential
decision making AI systems more explainable to end-users. Critically, prior
work in applying XAIP techniques to IDS systems has assumed that the plan being
proposed by the planner is always optimal, and therefore the action or plan
being recommended as decision support to the user is always correct. In this
work, we examine novice user interactions with a non-robust IDS system -- one
that occasionally recommends the wrong action, and one that may become
unavailable after users have become accustomed to its guidance. We introduce a
novel explanation type, subgoal-based explanations, for planning-based IDS
systems, that supplements traditional IDS output with information about the
subgoal toward which the recommended action would contribute. We demonstrate
that subgoal-based explanations lead to improved user task performance, improve
user ability to distinguish optimal and suboptimal IDS recommendations, are
preferred by users, and enable more robust user performance in the case of IDS
failure.
| [
{
"version": "v1",
"created": "Tue, 11 Jan 2022 21:13:22 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2022 15:05:22 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Feb 2023 16:27:45 GMT"
}
] | 1,675,641,600,000 | [
[
"Das",
"Devleena",
""
],
[
"Kim",
"Been",
""
],
[
"Chernova",
"Sonia",
""
]
] |
2201.04349 | Eunika Mercier-Laurent | Dominique Verdejo, Eunika Mercier-Laurent (CRESTIC) | Video Intelligence as a component of a Global Security system | null | Artificial Intelligence for Knowledge Management, 5th IFIP WG 12.6
International Workshop, AI4KM 2017 Held at IJCAI 2017, 2019 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the evolution of our research from video analytics to a
global security system with focus on the video surveillance component. Indeed
video surveillance has evolved from a commodity security tool up to the most
efficient way of tracking perpetrators when terrorism hits our modern urban
centers. As the number of cameras soars, one could expect the system to leverage
the huge amount of data carried through the video streams to provide fast
access to video evidence, actionable intelligence for monitoring real-time
events and enabling predictive capacities to assist operators in their
surveillance tasks. This research explores a hybrid platform for video
intelligence capture, automated data extraction, supervised Machine Learning
for intelligently assisted urban video surveillance; extensions to other
components of a global security system are discussed. Applying Knowledge
Management principles in this research helps with deep problem understanding
and facilitates the implementation of efficient information and experience
sharing decision support systems providing assistance to people on the field as
well as in operations centers. The originality of this work is also the
creation of "common" human-machine and machine to machine language and a
security ontology.
| [
{
"version": "v1",
"created": "Wed, 12 Jan 2022 07:49:46 GMT"
}
] | 1,642,032,000,000 | [
[
"Verdejo",
"Dominique",
"",
"CRESTIC"
],
[
"Mercier-Laurent",
"Eunika",
"",
"CRESTIC"
]
] |
2201.04841 | David Rouquet | David Rouquet, Val\'erie Bellynck (UGA), Christian Boitet (UGA),
Vincent Berment | Transforming UNL graphs in OWL representations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting formal knowledge (ontologies) from natural language is a challenge
that can benefit from a (semi-) formal linguistic representation of texts, at
the semantic level. We propose to achieve such a representation by implementing
the Universal Networking Language (UNL) specifications on top of RDF. Thus, the
meaning of a statement in any language will be soundly expressed as an RDF-UNL
graph that constitutes a middle ground between natural language and formal
knowledge. In particular, we show that RDF-UNL graphs can support content
extraction using generic SHACL rules and that reasoning on the extracted facts
allows detecting incoherence in the original texts. This approach is
experimented with in the UNseL project, which aims at extracting ontological
representations from system requirements/specifications in order to check that
they are consistent, complete and unambiguous. Our RDF-UNL implementation and
all code for the working examples of this paper are publicly available under
the CeCILL-B license at https://gitlab.tetras-libre.fr/unl/rdf-unl
| [
{
"version": "v1",
"created": "Thu, 13 Jan 2022 09:04:00 GMT"
}
] | 1,642,118,400,000 | [
[
"Rouquet",
"David",
"",
"UGA"
],
[
"Bellynck",
"Valérie",
"",
"UGA"
],
[
"Boitet",
"Christian",
"",
"UGA"
],
[
"Berment",
"Vincent",
""
]
] |
2201.05528 | Muhammed Murat Ozbek | Muhammed Murat Ozbek and Emre Koyuncu | Reinforcement Learning based Air Combat Maneuver Generation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The advent of artificial intelligence technology has paved the way for much
research within the air combat sector. Academics and many other researchers
have studied a prominent research direction called autonomous maneuver
decision-making for UAVs. These elaborate studies produced some outcomes, but
approaches that include Reinforcement Learning (RL) turned out to be more
efficient. Much research and many experiments have been carried out to make an
agent reach its target in an optimal way, most prominently with Genetic
Algorithms (GA), A*, RRT and various other optimization techniques. However,
Reinforcement Learning is the best known for its success. In the DARPA
AlphaDogfight Trials, reinforcement learning prevailed against a real veteran
human F-16 pilot who was trained by Boeing. This successful model was developed
by Heron Systems. After this accomplishment, reinforcement learning attracted
tremendous attention. In this research, we aim to move our UAV, which has
Dubins vehicle dynamics, to a target in two-dimensional space along an optimal
path using Twin Delayed Deep Deterministic Policy Gradients (TD3) with
Hindsight Experience Replay (HER). We ran tests in two different environments
and used simulations.
| [
{
"version": "v1",
"created": "Fri, 14 Jan 2022 15:55:44 GMT"
}
] | 1,642,377,600,000 | [
[
"Ozbek",
"Muhammed Murat",
""
],
[
"Koyuncu",
"Emre",
""
]
] |
2201.05544 | Jiongzhi Zheng | Jiongzhi Zheng and Kun He and Jianrong Zhou and Yan Jin and Chu-Min Li
and Felip Manya | BandMaxSAT: A Local Search MaxSAT Solver with Multi-armed Bandit | Accepted by IJCAI 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We address Partial MaxSAT (PMS) and Weighted PMS (WPMS), two practical
generalizations of the MaxSAT problem, and propose a local search algorithm for
these problems, called BandMaxSAT, that applies a multi-armed bandit model to
guide the search direction. The bandit in our method is associated with all the
soft clauses in the input (W)PMS instance. Each arm corresponds to a soft
clause. The bandit model can help BandMaxSAT to select a good direction to
escape from local optima by selecting a soft clause to be satisfied in the
current step, that is, selecting an arm to be pulled. We further propose an
initialization method for (W)PMS that prioritizes both unit and binary clauses
when producing the initial solutions. Extensive experiments demonstrate that
BandMaxSAT significantly outperforms the state-of-the-art (W)PMS local search
algorithm SATLike3.0. Specifically, the number of instances in which BandMaxSAT
obtains better results is about twice that obtained by SATLike3.0. Moreover, we
combine BandMaxSAT with the complete solver TT-Open-WBO-Inc. The resulting
solver BandMaxSAT-c also outperforms some of the best state-of-the-art complete
(W)PMS solvers, including SATLike-c, Loandra and TT-Open-WBO-Inc.
| [
{
"version": "v1",
"created": "Fri, 14 Jan 2022 16:32:39 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 06:28:00 GMT"
}
] | 1,655,424,000,000 | [
[
"Zheng",
"Jiongzhi",
""
],
[
"He",
"Kun",
""
],
[
"Zhou",
"Jianrong",
""
],
[
"Jin",
"Yan",
""
],
[
"Li",
"Chu-Min",
""
],
[
"Manya",
"Felip",
""
]
] |
2201.05576 | Jayati Deshmukh | Srinath Srinivasa and Jayati Deshmukh | AI and the Sense of Self | Previous version of this paper was published in Jijnasa 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | After several winters, AI is center-stage once again, with current advances
enabling a vast array of AI applications. This renewed wave of AI has brought
back to the fore several questions from the past, about philosophical
foundations of intelligence and common sense -- predominantly motivated by
ethical concerns of AI decision-making. In this paper, we address some of the
arguments that led to research interest in intelligent agents, and argue for
their relevance even in today's context. Specifically we focus on the cognitive
sense of "self" and its role in autonomous decision-making leading to
responsible behaviour. The authors hope to make a case for greater research
interest in building richer computational models of AI agents with a sense of
self.
| [
{
"version": "v1",
"created": "Fri, 7 Jan 2022 10:54:06 GMT"
}
] | 1,642,377,600,000 | [
[
"Srinivasa",
"Srinath",
""
],
[
"Deshmukh",
"Jayati",
""
]
] |
2201.05910 | Samaa Elnagar | Samaa Elnagar, Victoria Yoon and Manoj A.Thomas | An Automatic Ontology Generation Framework with An Organizational
Perspective | Proceedings of the 53rd Hawaii International Conference on System
Sciences | 2020 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontologies have been known for their semantic representation of knowledge.
However, ontologies cannot automatically evolve to reflect updates that occur in their
respective domains. To address this limitation, researchers have called for
automatic ontology generation from unstructured text corpora. Unfortunately,
systems that aim to generate ontologies from unstructured text corpora are
domain-specific and require manual intervention. In addition, they suffer from
uncertainty in creating concept linkages and difficulty in finding axioms for
the same concept. Knowledge Graphs (KGs) has emerged as a powerful model for
the dynamic representation of knowledge. However, KGs have many quality
limitations and need extensive refinement. This research aims to develop a
novel domain-independent automatic ontology generation framework that converts
an unstructured text corpus into a domain-consistent ontological form. The framework
generates KGs from an unstructured text corpus and then refines and corrects them
to be consistent with domain ontologies. The power of the proposed
automatically generated ontology is that it integrates the dynamic features of
KGs and the quality features of ontologies.
| [
{
"version": "v1",
"created": "Sat, 15 Jan 2022 18:54:22 GMT"
}
] | 1,642,550,400,000 | [
[
"Elnagar",
"Samaa",
""
],
[
"Yoon",
"Victoria",
""
],
[
"Thomas",
"Manoj A.",
""
]
] |
2201.06202 | Qibiao Peng | Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li,
Zibin Zheng | Neighboring Backdoor Attacks on Graph Convolutional Network | 12 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Backdoor attacks have been widely studied to hide the misclassification rules
in the normal models, which are only activated when the model is aware of the
specific inputs (i.e., the trigger). However, despite their success in the
conventional Euclidean space, there are few studies of backdoor attacks on
graph structured data. In this paper, we propose a new type of backdoor which
is specific to graph data, called neighboring backdoor. Considering the
discreteness of graph data, how to effectively design the triggers while
retaining the model accuracy on the original task is the major challenge. To
address such a challenge, we set the trigger as a single node, and the backdoor
is activated when the trigger node is connected to the target node. To preserve
the model accuracy, the model parameters are not allowed to be modified. Thus,
when the trigger node is not connected, the model performs normally. Under
these settings, in this work, we focus on generating the features of the
trigger node. Two types of backdoors are proposed: (1) Linear Graph Convolution
Backdoor which finds an approximation solution for the feature generation (can
be viewed as an integer programming problem) by looking at the linear part of
GCNs. (2) Variants of existing graph attacks. We extend current gradient-based
attack methods to our backdoor attack scenario. Extensive experiments on two
social networks and two citation networks datasets demonstrate that all
proposed backdoors can achieve an almost 100% attack success rate while having
no impact on predictive accuracy.
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2022 03:49:32 GMT"
}
] | 1,642,550,400,000 | [
[
"Chen",
"Liang",
""
],
[
"Peng",
"Qibiao",
""
],
[
"Li",
"Jintang",
""
],
[
"Liu",
"Yang",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Li",
"Yong",
""
],
[
"Zheng",
"Zibin",
""
]
] |
2201.06248 | Hossein Sadr | Fatemeh Mohades Deilami, Hossein Sadr, Mojdeh Nazari | Using Machine Learning Based Models for Personality Recognition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personality can be defined as the combination of behavior, emotion,
motivation, and thoughts that aim at describing various aspects of human
behavior based on a few stable and measurable characteristics. Considering the
fact that our personality has a remarkable influence in our daily life,
automatic recognition of a person's personality attributes can provide many
essential practical applications in various aspects of cognitive science. A deep
learning based method for the task of personality recognition from text is
proposed in this paper. Among various deep neural networks, Convolutional
Neural Networks (CNN) have demonstrated profound efficiency in natural language
processing and especially personality detection. Owing to the fact that various
filter sizes in CNN may influence its performance, we decided to combine CNN
with AdaBoost, a classical ensemble algorithm, to consider the possibility of
using the contribution of various filter lengths and grasp their potential in
the final classification by combining classifiers with different filter sizes
using AdaBoost. Our proposed method was validated on the Essay
dataset by conducting a series of experiments and the empirical results
demonstrated the superiority of our proposed method compared to both machine
learning and deep learning methods for the task of personality recognition.
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2022 07:20:51 GMT"
}
] | 1,642,550,400,000 | [
[
"Deilami",
"Fatemeh Mohades",
""
],
[
"Sadr",
"Hossein",
""
],
[
"Nazari",
"Mojdeh",
""
]
] |
2201.06254 | Guangda Huzhang | Zizhao Zhang, Yifei Zhao, Guangda Huzhang | Exploit Customer Life-time Value with Memoryless Experiments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a measure of the long-term contribution produced by customers in a service
or product relationship, life-time value, or LTV, can more comprehensively find
the optimal strategy for service delivery. However, it is challenging to
accurately abstract the LTV scene, model it reasonably, and find the optimal
solution. Current approaches either cannot precisely express LTV because of
their single modeling structure, or lack an efficient solution method. We propose a
general LTV modeling method, which solves the problem that customers' long-term
contribution is difficult to quantify while existing methods, such as modeling
the click-through rate, only pursue the short-term contribution. At the same
time, we also propose a fast dynamic programming solution based on a mutated
bisection method and the memoryless repeated experiments assumption. The model
and method can be applied to different service scenarios, such as the
recommendation system. Experiments on real-world datasets confirm the
effectiveness of the proposed model and optimization method. In addition, this
whole LTV structure was deployed in a large e-commerce mobile phone
application, where it managed to select the optimal push message sending time and
achieved a 10% LTV improvement.
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2022 07:43:06 GMT"
}
] | 1,642,550,400,000 | [
[
"Zhang",
"Zizhao",
""
],
[
"Zhao",
"Yifei",
""
],
[
"Huzhang",
"Guangda",
""
]
] |
2201.06401 | Dennis Soemers | Dennis J.N.J. Soemers and \'Eric Piette and Matthew Stephenson and
Cameron Browne | Spatial State-Action Features for General Games | Accepted for publication in the journal of Artificial Intelligence
(AIJ) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many board games and other abstract games, patterns have been used as
features that can guide automated game-playing agents. Such patterns or
features often represent particular configurations of pieces, empty positions,
etc., which may be relevant for a game's strategies. Their use has been
particularly prevalent in the game of Go, but also in many other games used as
benchmarks for AI research. In this paper, we formulate a design and efficient
implementation of spatial state-action features for general games. These are
patterns that can be trained to incentivise or disincentivise actions based on
whether or not they match variables of the state in a local area around action
variables. We provide extensive details on several design and implementation
choices, with a primary focus on achieving a high degree of generality to
support a wide variety of different games using different board geometries or
other graphs. Secondly, we propose an efficient approach for evaluating active
features for any given set of features. In this approach, we take inspiration
from heuristics used in problems such as SAT to optimise the order in which
parts of patterns are matched and prune unnecessary evaluations. This approach
is defined for a highly general and abstract description of the problem --
phrased as optimising the order in which propositions of formulas in
disjunctive normal form are evaluated -- and may therefore also be of interest
to other types of problems than board games. An empirical evaluation on 33
distinct games in the Ludii general game system demonstrates the efficiency of
this approach in comparison to a naive baseline, as well as a baseline based on
prefix trees, and demonstrates that the additional efficiency significantly
improves the playing strength of agents using the features to guide search.
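A minimal sketch of the underlying idea only, not the paper's algorithm or its SAT-inspired heuristics: the order in which propositions of a DNF formula are checked changes how much work evaluation takes, so checking unlikely propositions first lets failing conjunctions be abandoned early. The formula and the probability estimates are invented.

```python
import random

random.seed(0)

# A DNF formula: a disjunction of conjunctions over boolean propositions.
# prob_true[p] is an assumed estimate of how often proposition p holds.
prob_true = {"a": 0.9, "b": 0.1, "c": 0.5, "d": 0.05}
dnf = [["a", "b", "c"], ["c", "d"], ["a", "d"]]

def evaluate(dnf, assignment, prob_true):
    """Short-circuit DNF evaluation, counting proposition checks."""
    checks = 0
    for clause in dnf:
        clause_true = True
        # Check the propositions least likely to hold first, to fail fast.
        for p in sorted(clause, key=lambda p: prob_true[p]):
            checks += 1
            if not assignment[p]:
                clause_true = False
                break
        if clause_true:
            return True, checks       # one satisfied conjunction suffices
    return False, checks

assignment = {p: random.random() < q for p, q in prob_true.items()}
print(evaluate(dnf, assignment, prob_true))
```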
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2022 13:34:04 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 11:43:32 GMT"
}
] | 1,683,244,800,000 | [
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Piette",
"Éric",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Browne",
"Cameron",
""
]
] |
2201.06409 | Dan Halbersberg | Dan Halbersberg, Matan Halevi, Moshe Salhov | Search and Score-based Waterfall Auction Optimization | Published as a conference paper at LION 2022 | The 16th International Conference on Learning and Intelligent
Optimization, 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Online advertising is a major source of income for many online companies. One
common approach is to sell online advertisements via waterfall auctions,
through which a publisher makes sequential price offers to ad networks. The
publisher controls the order and prices of the waterfall in an attempt to
maximize his revenue. In this work, we propose a methodology to learn a
waterfall strategy from historical data by wisely searching in the space of
possible waterfalls and selecting the one leading to the highest revenues. The
contribution of this work is twofold: First, we propose a novel method to
estimate the valuation distribution of each user, with respect to each ad
network. Second, we utilize the valuation matrix to score our candidate
waterfalls as part of a procedure that iteratively searches in local
neighborhoods. Our framework guarantees that the waterfall revenue improves
between iterations, ultimately converging to a local optimum. Real-world
demonstrations are provided to show that the proposed method improves the total
revenue of real-world waterfalls, as compared to manual expert optimization.
Finally, the code and the data are available here.
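A toy sketch of the kind of search-and-score loop the abstract describes, not the authors' method: candidate waterfalls are scored against an estimated per-user valuation matrix, and adjacent swaps that raise revenue are kept. The valuation model, price floors, and acceptance rule below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_networks = 1000, 4
valuations = rng.gamma(2.0, 1.0, size=(n_users, n_networks))  # estimated per-user values
prices = np.array([3.0, 2.0, 1.5, 1.0])                       # price floor per network

def waterfall_revenue(order, valuations, prices):
    """Each impression goes down the waterfall and is sold at the first
    network whose estimated valuation exceeds its price floor."""
    revenue, sold = 0.0, np.zeros(len(valuations), dtype=bool)
    for net in order:
        accept = (~sold) & (valuations[:, net] >= prices[net])
        revenue += accept.sum() * prices[net]
        sold |= accept
    return revenue

# Local search over orderings: accept adjacent swaps that improve revenue.
order = list(range(n_networks))
best = waterfall_revenue(order, valuations, prices)
improved = True
while improved:
    improved = False
    for i in range(n_networks - 1):
        cand = order.copy()
        cand[i], cand[i + 1] = cand[i + 1], cand[i]
        rev = waterfall_revenue(cand, valuations, prices)
        if rev > best:
            order, best, improved = cand, rev, True
print(order, round(best, 2))
```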
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2022 13:59:12 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 08:27:22 GMT"
}
] | 1,649,376,000,000 | [
[
"Halbersberg",
"Dan",
""
],
[
"Halevi",
"Matan",
""
],
[
"Salhov",
"Moshe",
""
]
] |
2201.06692 | Xiuyi Fan | Xiuyi Fan, Francesca Toni | Explainable Decision Making with Lean and Argumentative Explanations | JAIR submission. 74 pages (50 excluding proofs, appendix, and
references) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is widely acknowledged that transparency of automated decision making is
crucial for deployability of intelligent systems, and explaining the reasons
why some decisions are "good" and some are not is a way of achieving this
transparency. We consider two variants of decision making, where "good"
decisions amount to alternatives (i) meeting "most" goals, and (ii) meeting
"most preferred" goals. We then define, for each variant and notion of
"goodness" (corresponding to a number of existing notions in the literature),
explanations in two formats, for justifying the selection of an alternative to
audiences with differing needs and competences: lean explanations, in terms of
goals satisfied and, for some notions of "goodness", alternative decisions, and
argumentative explanations, reflecting the decision process leading to the
selection, while corresponding to the lean explanations. To define
argumentative explanations, we use assumption-based argumentation (ABA), a
well-known form of structured argumentation. Specifically, we define ABA
frameworks such that "good" decisions are admissible ABA arguments and draw
argumentative explanations from dispute trees sanctioning this admissibility.
Finally, we instantiate our overall framework for explainable decision-making
to accommodate connections between goals and decisions in terms of decision
graphs incorporating defeasible and non-defeasible information.
| [
{
"version": "v1",
"created": "Tue, 18 Jan 2022 01:29:02 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 00:59:13 GMT"
}
] | 1,643,068,800,000 | [
[
"Fan",
"Xiuyi",
""
],
[
"Toni",
"Francesca",
""
]
] |
2201.06779 | Shuai Niu | Shuai Niu and Qing Yin and Yunya Song and Yike Guo and Xian Yang | Label Dependent Attention Model for Disease Risk Prediction Using
Multimodal Electronic Health Records | null | null | 10.1109/ICDM51629.2021.00056 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Disease risk prediction has attracted increasing attention in the field of
modern healthcare, especially with the latest advances in artificial
intelligence (AI). Electronic health records (EHRs), which contain
heterogeneous patient information, are widely used in disease risk prediction
tasks. One challenge of applying AI models for risk prediction lies in
generating interpretable evidence to support the prediction results while
retaining the prediction ability. In order to address this problem, we propose
the method of jointly embedding words and labels whereby attention modules
learn the weights of words from medical notes according to their relevance to
the names of risk prediction labels. This approach boosts interpretability by
employing an attention mechanism and including the names of prediction tasks in
the model. However, its application is limited to the handling of textual
inputs such as medical notes. In this paper, we propose a label dependent
attention model LDAM to 1) improve the interpretability by exploiting
Clinical-BERT (a biomedical language model pre-trained on a large clinical
corpus) to encode biomedically meaningful features and labels jointly; 2)
extend the idea of joint embedding to the processing of time-series data, and
develop a multi-modal learning framework for integrating heterogeneous
information from medical notes and time-series health status indicators. To
demonstrate our method, we apply LDAM to the MIMIC-III dataset to predict
different disease risks. We evaluate our method both quantitatively and
qualitatively. Specifically, the predictive power of LDAM will be shown, and
case studies will be carried out to illustrate its interpretability.
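A minimal numpy sketch of the joint word/label attention idea described above (attention weights computed from the similarity between note-token embeddings and risk-label-name embeddings), not the LDAM model itself; Clinical-BERT and the time-series branch are omitted, and the embeddings are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
word_emb = rng.standard_normal((30, d))   # stand-in for note-token embeddings
label_emb = rng.standard_normal((5, d))   # stand-in for risk-label-name embeddings

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Label-dependent attention: each label attends over the words of the note,
# so the attention weights double as a per-label interpretability signal.
scores = label_emb @ word_emb.T / np.sqrt(d)      # shape (labels, words)
attn = softmax(scores, axis=-1)
label_specific_doc = attn @ word_emb              # one note summary per risk label
print(attn.shape, label_specific_doc.shape)
```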
| [
{
"version": "v1",
"created": "Tue, 18 Jan 2022 07:21:20 GMT"
}
] | 1,642,550,400,000 | [
[
"Niu",
"Shuai",
""
],
[
"Yin",
"Qing",
""
],
[
"Song",
"Yunya",
""
],
[
"Guo",
"Yike",
""
],
[
"Yang",
"Xian",
""
]
] |
2201.06783 | Shuai Niu | Shuai Niu and Yunya Song and Qing Yin and Yike Guo and Xian Yang | Label-dependent and event-guided interpretable disease risk prediction
using EHRs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Electronic health records (EHRs) contain patients' heterogeneous data that
are collected from medical providers involved in the patient's care, including
medical notes, clinical events, laboratory test results, symptoms, and
diagnoses. In the field of modern healthcare, predicting whether patients would
experience any risks based on their EHRs has emerged as a promising research
area, in which artificial intelligence (AI) plays a key role. To make AI models
practically applicable, it is required that the prediction results should be
both accurate and interpretable. To achieve this goal, this paper proposed a
label-dependent and event-guided risk prediction model (LERP) to predict the
presence of multiple disease risks by mainly extracting information from
unstructured medical notes. Our model is featured in the following aspects.
First, we adopt a label-dependent mechanism that gives greater attention to
words from medical notes that are semantically similar to the names of risk
labels. Second, as the clinical events (e.g., treatments and drugs) can also
indicate the health status of patients, our model utilizes the information from
events and uses them to generate an event-guided representation of medical
notes. Third, both label-dependent and event-guided representations are
integrated to make a robust prediction, in which the interpretability is
enabled by the attention weights over words from medical notes. To demonstrate
the applicability of the proposed method, we apply it to the MIMIC-III dataset,
which contains real-world EHRs collected from hospitals. Our method is
evaluated in both quantitative and qualitative ways.
| [
{
"version": "v1",
"created": "Tue, 18 Jan 2022 07:24:28 GMT"
}
] | 1,642,550,400,000 | [
[
"Niu",
"Shuai",
""
],
[
"Song",
"Yunya",
""
],
[
"Yin",
"Qing",
""
],
[
"Guo",
"Yike",
""
],
[
"Yang",
"Xian",
""
]
] |
2201.06863 | Rasmus Larsen | Rasmus Larsen, Mikkel N{\o}rgaard Schmidt | Programmatic Policy Extraction by Iterative Local Search | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Reinforcement learning policies are often represented by neural networks, but
programmatic policies are preferred in some cases because they are more
interpretable, amenable to formal verification, or generalize better. While
efficient algorithms for learning neural policies exist, learning programmatic
policies is challenging. Combining imitation-projection and dataset aggregation
with a local search heuristic, we present a simple and direct approach to
extracting a programmatic policy from a pretrained neural policy. After
examining our local search heuristic on a programming by example problem, we
demonstrate our programmatic policy extraction method on a pendulum swing-up
problem. Whether trained using a hand-crafted expert policy or a learned
neural policy, our method discovers simple and interpretable policies that
perform almost as well as the original.
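A toy sketch of the extraction idea under strong assumptions: an invented one-parameter program space is searched locally so that the program imitates a stand-in "neural" policy on sampled states. The paper's imitation-projection, dataset aggregation, and richer program space are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained neural policy on a 1-D state (e.g., pendulum angle).
def neural_policy(s):
    return np.where(np.sin(s) > 0.1, 1, -1)

states = rng.uniform(-np.pi, np.pi, 500)
actions = neural_policy(states)

# Program space: "if sin(state) > theta then +1 else -1"; local search over theta.
def program(theta, s):
    return np.where(np.sin(s) > theta, 1, -1)

def agreement(theta):
    return float(np.mean(program(theta, states) == actions))

theta, step = 0.8, 0.4
for _ in range(50):                      # simple hill climbing with step decay
    candidates = [theta - step, theta, theta + step]
    theta = max(candidates, key=agreement)
    step *= 0.8
print(round(theta, 3), agreement(theta))
```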
| [
{
"version": "v1",
"created": "Tue, 18 Jan 2022 10:39:40 GMT"
}
] | 1,642,550,400,000 | [
[
"Larsen",
"Rasmus",
""
],
[
"Schmidt",
"Mikkel Nørgaard",
""
]
] |
2201.07040 | Matthias Samwald | Kathrin Blagec, Jakob Kraiger, Wolfgang Fr\"uhwirt, Matthias Samwald | Benchmark datasets driving artificial intelligence development fail to
capture the needs of medical professionals | (this version extends the literature references) | Journal of Bioinformatics, January 2023 | 10.1016/j.jbi.2022.104274 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Publicly accessible benchmarks that allow for assessing and comparing model
performances are important drivers of progress in artificial intelligence (AI).
While recent advances in AI capabilities hold the potential to transform
medical practice by assisting and augmenting the cognitive processes of
healthcare professionals, the coverage of clinically relevant tasks by AI
benchmarks is largely unclear. Furthermore, there is a lack of systematized
meta-information that allows clinical AI researchers to quickly determine
accessibility, scope, content and other characteristics of datasets and
benchmark datasets relevant to the clinical domain.
To address these issues, we curated and released a comprehensive catalogue of
datasets and benchmarks pertaining to the broad domain of clinical and
biomedical natural language processing (NLP), based on a systematic review of
literature and online resources. A total of 450 NLP datasets were manually
systematized and annotated with rich metadata, such as targeted tasks, clinical
applicability, data types, performance metrics, accessibility and licensing
information, and availability of data splits. We then compared tasks covered by
AI benchmark datasets with relevant tasks that medical practitioners reported
as highly desirable targets for automation in a previous empirical study.
Our analysis indicates that AI benchmarks of direct clinical relevance are
scarce and fail to cover most work activities that clinicians want to see
addressed. In particular, tasks associated with routine documentation and
patient data administration workflows are not represented despite significant
associated workloads. Thus, currently available AI benchmarks are improperly
aligned with desired targets for AI automation in clinical settings, and novel
benchmarks should be created to fill these gaps.
| [
{
"version": "v1",
"created": "Tue, 18 Jan 2022 15:05:28 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 13:25:37 GMT"
}
] | 1,672,012,800,000 | [
[
"Blagec",
"Kathrin",
""
],
[
"Kraiger",
"Jakob",
""
],
[
"Frühwirt",
"Wolfgang",
""
],
[
"Samwald",
"Matthias",
""
]
] |
2201.07125 | Kamil Faber | Kamil Faber, Roberto Corizzo, Bartlomiej Sniezynski, Michael Baron,
Nathalie Japkowicz | WATCH: Wasserstein Change Point Detection for High-Dimensional Time
Series Data | null | 2021 IEEE International Conference on Big Data (Big Data) | 10.1109/BigData52589.2021.9671962 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting relevant changes in dynamic time series data in a timely manner is
crucially important for many data analysis tasks in real-world settings. Change
point detection methods have the ability to discover changes in an unsupervised
fashion, which represents a desirable property in the analysis of unbounded and
unlabeled data streams. However, one limitation of most of the existing
approaches is represented by their limited ability to handle multivariate and
high-dimensional data, which is frequently observed in modern applications such
as traffic flow prediction, human activity recognition, and smart grids
monitoring. In this paper, we attempt to fill this gap by proposing WATCH, a
novel Wasserstein distance-based change point detection approach that models an
initial distribution and monitors its behavior while processing new data
points, providing accurate and robust detection of change points in dynamic
high-dimensional data. An extensive experimental evaluation involving a large
number of benchmark datasets shows that WATCH is capable of accurately
identifying change points and outperforming state-of-the-art methods.
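A heavily simplified one-dimensional sketch of Wasserstein-distance change point monitoring, not the WATCH algorithm itself (which targets high-dimensional data): a reference window models the initial distribution and incoming batches are flagged when their Wasserstein distance to it exceeds a threshold. The window size, threshold, and synthetic stream are arbitrary choices.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Synthetic 1-D stream with a mean shift at t = 600.
stream = np.concatenate([rng.normal(0, 1, 600), rng.normal(2, 1, 400)])

window, threshold = 100, 0.8
reference = stream[:window]                 # model of the "initial distribution"
for start in range(window, len(stream) - window, window):
    batch = stream[start:start + window]
    d = wasserstein_distance(reference, batch)
    if d > threshold:
        print(f"change point flagged near t={start} (W1={d:.2f})")
        reference = batch                   # restart the reference after a change
```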
| [
{
"version": "v1",
"created": "Tue, 18 Jan 2022 16:55:29 GMT"
}
] | 1,642,550,400,000 | [
[
"Faber",
"Kamil",
""
],
[
"Corizzo",
"Roberto",
""
],
[
"Sniezynski",
"Bartlomiej",
""
],
[
"Baron",
"Michael",
""
],
[
"Japkowicz",
"Nathalie",
""
]
] |
2201.07474 | Albert Benveniste | Albert Benveniste, Jean-Baptiste Raclet | Mixed Nondeterministic-Probabilistic Automata: Blending graphical
probabilistic models with nondeterminism | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphical models in probability and statistics are a core concept in the area
of probabilistic reasoning and probabilistic programming; graphical models
include Bayesian networks and factor graphs. In this paper we develop a new
model of mixed (nondeterministic/probabilistic) automata that subsumes both
nondeterministic automata and graphical probabilistic models. Mixed Automata
are equipped with parallel composition, simulation relation, and support
message passing algorithms inherited from graphical probabilistic models.
Segala's Probabilistic Automata can be mapped to Mixed Automata.
| [
{
"version": "v1",
"created": "Wed, 19 Jan 2022 08:55:55 GMT"
}
] | 1,642,636,800,000 | [
[
"Benveniste",
"Albert",
""
],
[
"Raclet",
"Jean-Baptiste",
""
]
] |
2201.07642 | Oliver Niggemann | Philipp Rosenthal and Oliver Niggemann | Problem examination for AI methods in product design | published at IJCAI 21 Workshop AI and Design | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) has significant potential for product design: AI
can check technical and non-technical constraints on products, it can support a
quick design of new product variants and new AI methods may also support
creativity. But currently product design and AI are separate communities
fostering different terms and theories. This makes a mapping of AI approaches
to product design needs difficult and prevents new solutions. As a solution,
this paper first clarifies important terms and concepts for the
interdisciplinary domain of AI methods in product design. A key contribution of
this paper is a new classification of design problems using the four
characteristics decomposability, inter-dependencies, innovation and creativity.
Definitions of these concepts are given where they are lacking. Early mappings
of these concepts to AI solutions are sketched and verified using design
examples. The importance of creativity in product design and a corresponding
gap in AI is pointed out for future research.
| [
{
"version": "v1",
"created": "Wed, 19 Jan 2022 15:19:29 GMT"
}
] | 1,642,636,800,000 | [
[
"Rosenthal",
"Philipp",
""
],
[
"Niggemann",
"Oliver",
""
]
] |
2201.07719 | Federico Malato | Federico Malato, Joona Jehkonen, Ville Hautam\"aki | Improving Behavioural Cloning with Human-Driven Dynamic Dataset
Augmentation | 6 pages, 5 figures, 2 code snippets, accepted at the AAAI-22 Workshop
on Interactive Machine Learning | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Behavioural cloning has been extensively used to train agents and is
recognized as a fast and solid approach to teach general behaviours based on
expert trajectories. This method follows the supervised learning paradigm and
strongly depends on the distribution of the data. In our paper, we show how
combining behavioural cloning with human-in-the-loop training solves some of
its flaws and provides the agent with task-specific corrections to overcome tricky
situations while speeding up the training time and lowering the required
resources. To do this, we introduce a novel approach that allows an expert to
take control of the agent at any moment during a simulation and provide optimal
solutions to its problematic situations. Our experiments show that this
approach leads to better policies both in terms of quantitative evaluation and
in human-likeness.
| [
{
"version": "v1",
"created": "Wed, 19 Jan 2022 16:57:17 GMT"
}
] | 1,642,636,800,000 | [
[
"Malato",
"Federico",
""
],
[
"Jehkonen",
"Joona",
""
],
[
"Hautamäki",
"Ville",
""
]
] |
2201.07749 | Tom Bewley | Tom Bewley, Jonathan Lawry, Arthur Richards | Summarising and Comparing Agent Dynamics with Contrastive Spatiotemporal
Abstraction | 13 pages (6 body, 1 references, 6 appendix). Accepted for
presentation at XAI-IJCAI22 Workshop, July 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a data-driven, model-agnostic technique for generating a
human-interpretable summary of the salient points of contrast within an
evolving dynamical system, such as the learning process of a control agent. It
involves the aggregation of transition data along both spatial and temporal
dimensions according to an information-theoretic divergence measure. A
practical algorithm is outlined for continuous state spaces, and deployed to
summarise the learning histories of deep reinforcement learning agents with the
aid of graphical and textual communication methods. We expect our method to be
complementary to existing techniques in the realm of agent interpretability.
| [
{
"version": "v1",
"created": "Mon, 17 Jan 2022 11:34:59 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2022 10:53:57 GMT"
}
] | 1,655,856,000,000 | [
[
"Bewley",
"Tom",
""
],
[
"Lawry",
"Jonathan",
""
],
[
"Richards",
"Arthur",
""
]
] |
2201.07839 | Debangshu Banerjee | Debangshu Banerjee and Kavita Wagh | Critic Algorithms using Cooperative Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | An algorithm is proposed for policy evaluation in Markov Decision Processes
which gives good empirical results with respect to convergence rates. The
algorithm tracks the Projected Bellman Error and is implemented as a true
gradient based algorithm. In this respect this algorithm differs from
TD($\lambda$) class of algorithms. This algorithm tracks the Projected Bellman
Error and is therefore different from the class of residual algorithms.
Further the convergence of this algorithm is empirically much faster than GTD2
class of algorithms which aim at tracking the Projected Bellman Error. We
implemented proposed algorithm in DQN and DDPG framework and found that our
algorithm achieves comparable results in both of these experiments.
| [
{
"version": "v1",
"created": "Wed, 19 Jan 2022 19:47:18 GMT"
}
] | 1,642,723,200,000 | [
[
"Banerjee",
"Debangshu",
""
],
[
"Wagh",
"Kavita",
""
]
] |
2201.08017 | Chenxing Wang | Chenxing Wang, Fang Zhao, Haichao Zhang, Haiyong Luo, Yanjun Qin, and
Yuchen Fang | Fine-Grained Trajectory-based Travel Time Estimation for Multi-city
Scenarios Based on Deep Meta-Learning | null | null | 10.1109/TITS.2022.3145382 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Travel Time Estimation (TTE) is indispensable in intelligent transportation
system (ITS). It is significant to achieve the fine-grained Trajectory-based
Travel Time Estimation (TTTE) for multi-city scenarios, namely to accurately
estimate travel time of the given trajectory for multiple city scenarios.
However, it faces great challenges due to complex factors including dynamic
temporal dependencies and fine-grained spatial dependencies. To tackle these
challenges, we propose a meta learning based framework, MetaTTE, to
continuously provide accurate travel time estimation over time by leveraging
well-designed deep neural network model called DED, which consists of Data
preprocessing module and Encoder-Decoder network module. By introducing meta
learning techniques, the generalization ability of MetaTTE is enhanced using
small amount of examples, which opens up new opportunities to increase the
potential of achieving consistent performance on TTTE when traffic conditions
and road networks change over time in the future. The DED model adopts an
encoder-decoder network to capture fine-grained spatial and temporal
representations. Extensive experiments on two real-world datasets are conducted
to confirm that our MetaTTE outperforms six state-of-the-art baselines,
improving accuracy by 29.35% and 25.93% over the best baseline on the Chengdu
and Porto datasets, respectively.
| [
{
"version": "v1",
"created": "Thu, 20 Jan 2022 06:35:51 GMT"
}
] | 1,642,723,200,000 | [
[
"Wang",
"Chenxing",
""
],
[
"Zhao",
"Fang",
""
],
[
"Zhang",
"Haichao",
""
],
[
"Luo",
"Haiyong",
""
],
[
"Qin",
"Yanjun",
""
],
[
"Fang",
"Yuchen",
""
]
] |
2201.08032 | Sajjad Ahmed | Sajjad Ahmed, Knut Hinkelmann, Flavio Corradini | Combining Machine Learning with Knowledge Engineering to detect Fake
News in Social Networks-a survey | 12 pages | AAAI MAKE 2019 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Due to extensive spread of fake news on social and news media it became an
emerging research topic now a days that gained attention. In the news media and
social media the information is spread highspeed but without accuracy and hence
detection mechanism should be able to predict news fast enough to tackle the
dissemination of fake news. It has the potential for negative impacts on
individuals and society. Therefore, detecting fake news on social media is
important and also a technically challenging problem these days. We knew that
Machine learning is helpful for building Artificial intelligence systems based
on tacit knowledge because it can help us to solve complex problems due to real
word data. On the other side we knew that Knowledge engineering is helpful for
representing experts knowledge which people aware of that knowledge. Due to
this we proposed that integration of Machine learning and knowledge engineering
can be helpful in detection of fake news. In this paper we present what is fake
news, importance of fake news, overall impact of fake news on different areas,
different ways to detect fake news on social media, existing detections
algorithms that can help us to overcome the issue, similar application areas
and at the end we proposed combination of data driven and engineered knowledge
to combat fake news. We studied and compared three different modules text
classifiers, stance detection applications and fact checking existing
techniques that can help to detect fake news. Furthermore, we investigated the
impact of fake news on society. Experimental evaluation of publically available
datasets and our proposed fake news detection combination can serve better in
detection of fake news.
| [
{
"version": "v1",
"created": "Thu, 20 Jan 2022 07:43:15 GMT"
}
] | 1,642,723,200,000 | [
[
"Ahmed",
"Sajjad",
""
],
[
"Hinkelmann",
"Knut",
""
],
[
"Corradini",
"Flavio",
""
]
] |
2201.08112 | Alessandro Antonucci | Lilith Mattei and Alessandro Facchini and Alessandro Antonucci | Belief Revision in Sentential Decision Diagrams | Extended version with proofs of a paper under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Belief revision is the task of modifying a knowledge base when new
information becomes available, while also respecting a number of desirable
properties. Classical belief revision schemes have already been specialised to
\emph{binary decision diagrams} (BDDs), the classical formalism to compactly
represent propositional knowledge. These results also apply to \emph{ordered}
BDDs (OBDDs), a special class of BDDs, designed to guarantee canonicity. Yet,
those revisions cannot be applied to \emph{sentential decision diagrams}
(SDDs), a typically more compact but still canonical class of Boolean circuits,
which generalizes OBDDs, while not being a subclass of BDDs. Here we fill this
gap by deriving a general revision algorithm for SDDs based on a syntactic
characterisation of Dalal revision. A specialised procedure for DNFs is also
presented. Preliminary experiments performed with randomly generated knowledge
bases show the advantages of directly performing revision within the SDD formalism.
| [
{
"version": "v1",
"created": "Thu, 20 Jan 2022 11:01:41 GMT"
}
] | 1,642,723,200,000 | [
[
"Mattei",
"Lilith",
""
],
[
"Facchini",
"Alessandro",
""
],
[
"Antonucci",
"Alessandro",
""
]
] |
2201.08164 | Meike Nauta | Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle
Peters, Yasmin Schmitt, J\"org Schl\"otterer, Maurice van Keulen, Christin
Seifert | From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic
Review on Evaluating Explainable AI | Published in ACM Computing Surveys (DOI
http://dx.doi.org/10.1145/3583558). This ArXiv version includes the
supplementary material. Website with categorization of XAI methods at
https://utwente-dmb.github.io/xai-papers/ | null | 10.1145/3583558 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rising popularity of explainable artificial intelligence (XAI) to
understand high-performing black boxes raised the question of how to evaluate
explanations of machine learning (ML) models. While interpretability and
explainability are often presented as a subjectively validated binary property,
we consider it a multi-faceted concept. We identify 12 conceptual properties,
such as Compactness and Correctness, that should be evaluated for
comprehensively assessing the quality of an explanation. Our so-called Co-12
properties serve as categorization scheme for systematically reviewing the
evaluation practices of more than 300 papers published in the last 7 years at
major AI and ML conferences that introduce an XAI method. We find that 1 in 3
papers evaluate exclusively with anecdotal evidence, and 1 in 5 papers evaluate
with users. This survey also contributes to the call for objective,
quantifiable evaluation methods by presenting an extensive overview of
quantitative XAI evaluation methods. Our systematic collection of evaluation
methods provides researchers and practitioners with concrete tools to
thoroughly validate, benchmark and compare new and existing XAI methods. The
Co-12 categorization scheme and our identified evaluation methods open up
opportunities to include quantitative metrics as optimization criteria during
model training in order to optimize for accuracy and interpretability
simultaneously.
| [
{
"version": "v1",
"created": "Thu, 20 Jan 2022 13:23:20 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2022 08:30:57 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Feb 2023 13:47:39 GMT"
}
] | 1,677,456,000,000 | [
[
"Nauta",
"Meike",
""
],
[
"Trienes",
"Jan",
""
],
[
"Pathak",
"Shreyasi",
""
],
[
"Nguyen",
"Elisa",
""
],
[
"Peters",
"Michelle",
""
],
[
"Schmitt",
"Yasmin",
""
],
[
"Schlötterer",
"Jörg",
""
],
[
"van Keulen",
"Maurice",
""
],
[
"Seifert",
"Christin",
""
]
] |
2201.08450 | Yuan Yang | Yuan Yang, Deepayan Sanyal, Joel Michelson, James Ainooson, Maithilee
Kunda | Automatic Item Generation of Figural Analogy Problems: A Review and
Outlook | Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134) | null | null | ACS2021/02 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Figural analogy problems have long been a widely used format in human
intelligence tests. In the past four decades, more and more research has
investigated automatic item generation for figural analogy problems, i.e.,
algorithmic approaches for systematically and automatically creating such
problems. In cognitive science and psychometrics, this research can deepen our
understanding of human analogical ability and the psychometric properties of
figural analogies. With the recent development of data-driven AI models for
reasoning about figural analogies, the territory of automatic item generation
of figural analogies has further expanded. This expansion brings new challenges
as well as opportunities, which demand reflection on previous item generation
research and planning future studies. This paper reviews the important works of
automatic item generation of figural analogies for both human intelligence
tests and data-driven AI models. From an interdisciplinary perspective, the
principles and technical details of these works are analyzed and compared, and
desiderata for future research are suggested.
| [
{
"version": "v1",
"created": "Thu, 20 Jan 2022 20:51:10 GMT"
}
] | 1,642,982,400,000 | [
[
"Yang",
"Yuan",
""
],
[
"Sanyal",
"Deepayan",
""
],
[
"Michelson",
"Joel",
""
],
[
"Ainooson",
"James",
""
],
[
"Kunda",
"Maithilee",
""
]
] |
2201.08883 | Sravya Kondrakunta | Sravya Kondrakunta, Venkatsampath Raja Gogineni, Michael T. Cox,
Demetris Coleman, Xiaobao Tan, Tony Lin, Mengxue Hou, Fumin Zhang, Frank
McQuarrie, Catherine R. Edwards | The Rational Selection of Goal Operations and the Integration ofSearch
Strategies with Goal-Driven Autonomy | Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134) | null | null | Report-no: ACS2021/08 | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Intelligent physical systems as embodied cognitive systems must perform
high-level reasoning while concurrently managing an underlying control
architecture. The link between cognition and control must manage the problem of
converting continuous values from the real world to symbolic representations
(and back). To generate effective behaviors, reasoning must include a capacity
to replan, acquire and update new information, detect and respond to anomalies,
and perform various operations on system goals. But, these processes are not
independent and need further exploration. This paper examines an agent's
choices when multiple goal operations co-occur and interact, and it establishes
a method of choosing between them. We demonstrate the benefits, discuss the
trade-offs involved, and show positive results in a dynamic marine search
task.
| [
{
"version": "v1",
"created": "Fri, 21 Jan 2022 20:53:49 GMT"
}
] | 1,643,068,800,000 | [
[
"Kondrakunta",
"Sravya",
""
],
[
"Gogineni",
"Venkatsampath Raja",
""
],
[
"Cox",
"Michael T.",
""
],
[
"Coleman",
"Demetris",
""
],
[
"Tan",
"Xiaobao",
""
],
[
"Lin",
"Tony",
""
],
[
"Hou",
"Mengxue",
""
],
[
"Zhang",
"Fumin",
""
],
[
"McQuarrie",
"Frank",
""
],
[
"Edwards",
"Catherine R.",
""
]
] |
2201.08950 | Zhuoran Zeng | Zhuoran Zeng and Ernest Davis | Physical Reasoning in an Open World | Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134) | null | null | ACS2021/07 | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Most work on physical reasoning, both in artificial intelligence and in
cognitive science, has focused on closed-world reasoning, in which it is
assumed that the problem specification specifies all relevant objects and
substance, all their relations in an initial situation, and all exogenous
events. However, in many situations, it is important to do open-world
reasoning; that is, making valid conclusions from very incomplete information.
We have implemented in Prolog an open-world reasoner for a toy microworld of
containers that can be loaded, unloaded, sealed, unsealed, carried, and dumped.
| [
{
"version": "v1",
"created": "Sat, 22 Jan 2022 02:35:16 GMT"
}
] | 1,643,068,800,000 | [
[
"Zeng",
"Zhuoran",
""
],
[
"Davis",
"Ernest",
""
]
] |
2201.09222 | Andrea Burattin | Andrea Burattin | Online Soft Conformance Checking: Any Perspective Can Indicate
Deviations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Within process mining, a relevant activity is conformance checking. Such
activity consists of establishing the extent to which actual executions of a
process conform the expected behavior of a reference model. Current techniques
focus on prescriptive models of the control-flow as references. In certain
scenarios, however, a prescriptive model might not be available and,
additionally, the control-flow perspective might not be ideal for this purpose.
This paper tackles these two problems by suggesting a conformance approach that
uses a descriptive model (i.e., a pattern of the observed behavior over a
certain amount of time) which is not necessarily referring to the control-flow
(e.g., it can be based on the social network of handover of work).
Additionally, the entire approach can work both offline and online, thus
providing feedback in real time. The approach, which is implemented in ProM,
has been tested, and results from 3 experiments with real-world as well as
synthetic data are reported.
| [
{
"version": "v1",
"created": "Sun, 23 Jan 2022 10:26:44 GMT"
}
] | 1,643,068,800,000 | [
[
"Burattin",
"Andrea",
""
]
] |
2201.09305 | John Laird | John E. Laird | An Analysis and Comparison of ACT-R and Soar | 18 pages, 1 figure. Presented at The Ninth Advances in Cognitive
Systems (ACS) Conference 2021 (arXiv:2201.06134) | null | null | ACS2021/06 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This is a detailed analysis and comparison of the ACT-R and Soar cognitive
architectures, including their overall structure, their representations of
agent data and metadata, and their associated processing. It focuses on working
memory, procedural memory, and long-term declarative memory. I emphasize the
commonalities, which are many, but also highlight the differences. I identify
the processes and distinct classes of information used by these architectures,
including agent data, metadata, and meta-process data, and explore the roles
that metadata play in decision making, memory retrievals, and learning.
| [
{
"version": "v1",
"created": "Sun, 23 Jan 2022 16:22:48 GMT"
}
] | 1,643,068,800,000 | [
[
"Laird",
"John E.",
""
]
] |
2201.09424 | Jiongzhi Zheng | Jiongzhi Zheng and Yawei Hong and Wenchang Xu and Wentao Li and Yongfu
Chen | An Effective Iterated Two-stage Heuristic Algorithm for the Multiple
Traveling Salesmen Problem | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The multiple Traveling Salesmen Problem (mTSP) is a general extension of the
famous NP-hard Traveling Salesman Problem (TSP), in which there are m (m > 1)
salesmen to visit the cities. In this paper, we address the mTSP with both the
minsum objective and the minmax objective, which aim at minimizing the total
length of the $m$ tours and the length of the longest tour among all the m
tours, respectively. We propose an iterated two-stage heuristic algorithm
called ITSHA for the mTSP. Each iteration of ITSHA consists of an
initialization stage and an improvement stage. The initialization stage aims to
generate high-quality and diverse initial solutions. The improvement stage
mainly applies the variable neighborhood search (VNS) approach based on our
proposed effective local search neighborhoods to optimize the initial solution.
Moreover, some local optima escaping approaches are employed to enhance the
search ability of the algorithm. Extensive experimental results on a wide range
of public benchmark instances show that ITSHA significantly outperforms the
state-of-the-art heuristic algorithms in solving the mTSP on both
objectives.
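A tiny sketch of the two-stage structure (initialization followed by neighbourhood improvement) on the minmax objective, not the ITSHA algorithm: the initialization is deliberately crude and only a single relocate neighbourhood is used. The random instance and depot location are invented.

```python
import itertools, math, random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]
depot, m = (0.5, 0.5), 3

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_len(tour):
    pts = [depot] + [cities[i] for i in tour] + [depot]
    return sum(dist(p, q) for p, q in zip(pts, pts[1:]))

def minmax(tours):
    return max(tour_len(t) for t in tours)

# Stage 1 (crude initialization): shuffle the cities and split them evenly.
order = list(range(len(cities)))
random.shuffle(order)
tours = [order[i::m] for i in range(m)]

# Stage 2 (improvement): relocate single cities between tours while it helps.
improved = True
while improved:
    improved = False
    for src, dst in itertools.permutations(range(m), 2):
        for i in range(len(tours[src])):
            cand = [t.copy() for t in tours]
            cand[dst].append(cand[src].pop(i))
            if minmax(cand) < minmax(tours) - 1e-9:
                tours, improved = cand, True
                break
print(round(minmax(tours), 3))
```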
| [
{
"version": "v1",
"created": "Mon, 24 Jan 2022 02:43:08 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Feb 2022 09:27:00 GMT"
}
] | 1,646,092,800,000 | [
[
"Zheng",
"Jiongzhi",
""
],
[
"Hong",
"Yawei",
""
],
[
"Xu",
"Wenchang",
""
],
[
"Li",
"Wentao",
""
],
[
"Chen",
"Yongfu",
""
]
] |
2201.09760 | Shanbin Wu | Shangbin Wu, Xu Yan, Xiaoliang Fan, Shirui Pan, Shichao Zhu, Chuanpan
Zheng, Ming Cheng, Cheng Wang | Multi-Graph Fusion Networks for Urban Region Embedding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning the embeddings for urban regions from human mobility data can reveal
the functionality of regions and thus enables correlated but distinct tasks
such as crime prediction. Human mobility data contains rich and abundant
information, which yields comprehensive region embeddings for cross-domain
tasks. In this paper, we propose multi-graph fusion networks (MGFN) to
enable cross-domain prediction tasks. First, we integrate the graphs with
spatio-temporal similarity as mobility patterns through a mobility graph fusion
module. Then, in the mobility pattern joint learning module, we design the
multi-level cross-attention mechanism to learn the comprehensive embeddings
from multiple mobility patterns based on intra-pattern and inter-pattern
messages. Finally, we conduct extensive experiments on real-world urban
datasets. Experimental results demonstrate that the proposed MGFN outperforms
the state-of-the-art methods by up to 12.35% improvement.
| [
{
"version": "v1",
"created": "Mon, 24 Jan 2022 15:48:50 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2022 03:33:52 GMT"
}
] | 1,652,140,800,000 | [
[
"Wu",
"Shangbin",
""
],
[
"Yan",
"Xu",
""
],
[
"Fan",
"Xiaoliang",
""
],
[
"Pan",
"Shirui",
""
],
[
"Zhu",
"Shichao",
""
],
[
"Zheng",
"Chuanpan",
""
],
[
"Cheng",
"Ming",
""
],
[
"Wang",
"Cheng",
""
]
] |
2201.09880 | Zev Battad | Zev Battad, Mei Si | A System for Image Understanding using Sensemaking and Narrative | Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134) | null | null | ACS2021/26 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Sensemaking and narrative are two inherently interconnected concepts about
how people understand the world around them. Sensemaking is the process by
which people structure and interconnect the information they encounter in the
world with the knowledge and inferences they have made in the past. Narratives
are important constructs that people use sensemaking to create; ones that
reflect and provide a more holistic account of the world than the information
within any given narrative is able to alone. Both are important to how human
beings parse the world, and both would be valuable for a computational system
attempting to do the same. In this paper, we discuss theories of sensemaking
and narrative with respect to how people build an understanding of the world
based on the information they encounter, as well as the links between the
fields of sensemaking and narrative research. We highlight a specific
computational task, visual storytelling, whose solutions we believe can be
enhanced by employing a sensemaking and narrative component. We then describe
our system for visual storytelling using sensemaking and narrative and discuss
examples from its current implementation.
| [
{
"version": "v1",
"created": "Fri, 21 Jan 2022 20:52:02 GMT"
}
] | 1,643,155,200,000 | [
[
"Battad",
"Zev",
""
],
[
"Si",
"Mei",
""
]
] |
2201.10315 | Zhaohao Wang | Zhaohao Wang and Huifang Yue | Comparison research on binary relations based on transitive degrees and
cluster degrees | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Interval-valued information systems are generalized models of single-valued
information systems. Interval-valued information systems have been extensively
studied using the rough set approach, and many binary relations can be
established from the same interval-valued information system. In this paper, we
conduct research on comparing these binary relations so as to provide numerical
scales for choosing suitable relations in dealing with interval-valued
information systems. Firstly, based on similarity degrees, we compare the most
common three binary relations induced from the same interval-valued information
system. Secondly, we propose the concepts of transitive degree and cluster
degree, and investigate their properties. Finally, we provide some methods to
compare binary relations by means of the transitive degree and the cluster
degree. Furthermore, we use these methods to analyze the most common three
relations induced from the Face Recognition Dataset, and find that $RF_{B}
^{\lambda}$ is a good choice when dealing with an interval-valued information
system by means of the rough set approach.
| [
{
"version": "v1",
"created": "Tue, 25 Jan 2022 13:39:37 GMT"
}
] | 1,643,155,200,000 | [
[
"Wang",
"Zhaohao",
""
],
[
"Yue",
"Huifang",
""
]
] |
2201.10334 | Michael Beukman | Michael Beukman, Steven James and Christopher Cleghorn | Towards Objective Metrics for Procedurally Generated Video Game Levels | 7 pages, 10 figures. V3: This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible. Code is located at
https://github.com/Michael-Beukman/PCGNN | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With increasing interest in procedural content generation by academia and
game developers alike, it is vital that different approaches can be compared
fairly. However, evaluating procedurally generated video game levels is often
difficult, due to the lack of standardised, game-independent metrics. In this
paper, we introduce two simulation-based evaluation metrics that involve
analysing the behaviour of an A* agent to measure the diversity and difficulty
of generated levels in a general, game-independent manner. Diversity is
calculated by comparing action trajectories from different levels using the
edit distance, and difficulty is measured as how much exploration and expansion
of the A* search tree is necessary before the agent can solve the level. We
demonstrate that our diversity metric is more robust to changes in level size
and representation than current methods and additionally measures factors that
directly affect playability, instead of focusing on visual information. The
difficulty metric shows promise, as it correlates with existing estimates of
difficulty in one of the tested domains, but it does face some challenges in
the other domain. Finally, to promote reproducibility, we publicly release our
evaluation framework.
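A minimal sketch of the diversity computation described above: pairwise edit distance between A* action trajectories, averaged over level pairs. The trajectories are made up, and the A* agent and difficulty metric are not shown.

```python
def edit_distance(a, b):
    """Levenshtein distance between two action trajectories (sequences)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

# Hypothetical A* action trajectories from three generated levels.
trajectories = ["RRUURRD", "RRUURRD", "DDRRUUR"]
pairs = [(i, j) for i in range(len(trajectories)) for j in range(i + 1, len(trajectories))]
diversity = sum(edit_distance(trajectories[i], trajectories[j]) for i, j in pairs) / len(pairs)
print(diversity)
```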
| [
{
"version": "v1",
"created": "Tue, 25 Jan 2022 14:13:50 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 14:10:46 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Mar 2022 05:53:12 GMT"
}
] | 1,646,870,400,000 | [
[
"Beukman",
"Michael",
""
],
[
"James",
"Steven",
""
],
[
"Cleghorn",
"Christopher",
""
]
] |
2201.10436 | Harald Ruess | Harald Rue{\ss}, Simon Burton | Safe AI -- How is this Possible? | 42 pages, 4 figures, 1 table | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traditional safety engineering is coming to a turning point, moving from
deterministic, non-evolving systems operating in well-defined contexts to
increasingly autonomous and learning-enabled AI systems which are acting in
largely unpredictable operating contexts. We outline some of the underlying
challenges of safe AI and suggest a rigorous engineering framework for
minimizing uncertainty, thereby increasing confidence, up to tolerable levels,
in the safe behavior of AI systems.
| [
{
"version": "v1",
"created": "Tue, 25 Jan 2022 16:32:35 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 18:34:11 GMT"
}
] | 1,652,400,000,000 | [
[
"Rueß",
"Harald",
""
],
[
"Burton",
"Simon",
""
]
] |
2201.10453 | Laurens Bliek | Laurens Bliek, Paulo da Costa, Reza Refaei Afshar, Yingqian Zhang, Tom
Catshoek, Dani\"el Vos, Sicco Verwer, Fynn Schmitt-Ulms, Andr\'e Hottung,
Tapan Shah, Meinolf Sellmann, Kevin Tierney, Carl Perreault-Lafleur, Caroline
Leboeuf, Federico Bobbio, Justine Pepin, Warley Almeida Silva, Ricardo Gama,
Hugo L. Fernandes, Martin Zaefferer, Manuel L\'opez-Ib\'a\~nez, Ekhine
Irurozki | The First AI4TSP Competition: Learning to Solve Stochastic Routing
Problems | 21 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper reports on the first international competition on AI for the
traveling salesman problem (TSP) at the International Joint Conference on
Artificial Intelligence 2021 (IJCAI-21). The TSP is one of the classical
combinatorial optimization problems, with many variants inspired by real-world
applications. This first competition asked the participants to develop
algorithms to solve a time-dependent orienteering problem with stochastic
weights and time windows (TD-OPSWTW). It focused on two types of learning
approaches: surrogate-based optimization and deep reinforcement learning. In
this paper, we describe the problem, the setup of the competition, the winning
methods, and give an overview of the results. The winning methods described in
this work have advanced the state-of-the-art in using AI for stochastic routing
problems. Overall, by organizing this competition we have introduced routing
problems as an interesting problem setting for AI researchers. The simulator of
the problem has been made open-source and can be used by other researchers as a
benchmark for new AI methods.
| [
{
"version": "v1",
"created": "Tue, 25 Jan 2022 16:55:33 GMT"
}
] | 1,643,155,200,000 | [
[
"Bliek",
"Laurens",
""
],
[
"da Costa",
"Paulo",
""
],
[
"Afshar",
"Reza Refaei",
""
],
[
"Zhang",
"Yingqian",
""
],
[
"Catshoek",
"Tom",
""
],
[
"Vos",
"Daniël",
""
],
[
"Verwer",
"Sicco",
""
],
[
"Schmitt-Ulms",
"Fynn",
""
],
[
"Hottung",
"André",
""
],
[
"Shah",
"Tapan",
""
],
[
"Sellmann",
"Meinolf",
""
],
[
"Tierney",
"Kevin",
""
],
[
"Perreault-Lafleur",
"Carl",
""
],
[
"Leboeuf",
"Caroline",
""
],
[
"Bobbio",
"Federico",
""
],
[
"Pepin",
"Justine",
""
],
[
"Silva",
"Warley Almeida",
""
],
[
"Gama",
"Ricardo",
""
],
[
"Fernandes",
"Hugo L.",
""
],
[
"Zaefferer",
"Martin",
""
],
[
"López-Ibáñez",
"Manuel",
""
],
[
"Irurozki",
"Ekhine",
""
]
] |
2201.10556 | Taylor Olson | Taylor Olson and Ken Forbus | Learning Norms via Natural Language Teachings | Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134) | null | null | ACS2021/17 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | To interact with humans, artificial intelligence (AI) systems must understand
our social world. Within this world norms play an important role in motivating
and guiding agents. However, very few computational theories for learning
social norms have been proposed. There also exists a long history of debate on
the distinction between what is normal (is) and what is normative (ought). Many
have argued that being capable of learning both concepts and recognizing the
difference is necessary for all social agents. This paper introduces and
demonstrates a computational approach to learning norms from natural language
text that accounts for both what is normal and what is normative. It provides a
foundation for everyday people to train AI systems about social norms.
| [
{
"version": "v1",
"created": "Thu, 20 Jan 2022 22:09:42 GMT"
}
] | 1,643,241,600,000 | [
[
"Olson",
"Taylor",
""
],
[
"Forbus",
"Ken",
""
]
] |
2201.11117 | Alexander Kott | Stephanie Galaitsi, Benjamin D. Trump, Jeffrey M. Keisler, Igor
Linkov, Alexander Kott | Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | To benefit from AI advances, users and operators of AI systems must have
reason to trust it. Trust arises from multiple interactions, where predictable
and desirable behavior is reinforced over time. Providing the system's users
with some understanding of AI operations can support predictability, but
forcing AI to explain itself risks constraining AI capabilities to only those
reconcilable with human cognition. We argue that AI systems should be designed
with features that build trust by bringing decision-analytic perspectives and
formal tools into AI. Instead of trying to achieve explainable AI, we should
develop interpretable and actionable AI. Actionable and Interpretable AI (AI2)
will incorporate explicit quantifications and visualizations of user confidence
in AI recommendations. In doing so, it will allow examining and testing of AI
system predictions to establish a basis for trust in the systems' decision
making and ensure broad benefits from deploying and advancing its computational
capabilities.
| [
{
"version": "v1",
"created": "Wed, 26 Jan 2022 18:53:09 GMT"
}
] | 1,643,241,600,000 | [
[
"Galaitsi",
"Stephanie",
""
],
[
"Trump",
"Benjamin D.",
""
],
[
"Keisler",
"Jeffrey M.",
""
],
[
"Linkov",
"Igor",
""
],
[
"Kott",
"Alexander",
""
]
] |
2201.11239 | Alon Jacovi | Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg,
Katja Filippova | Diagnosing AI Explanation Methods with Folk Concepts of Behavior | Accepted to JAIR (Vol. 78, 2023) | Journal of Artificial Intelligence Research 73 (2023) 459-489 | 10.1613/jair.1.14053 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate a formalism for the conditions of a successful explanation of
AI. We consider "success" to depend not only on what information the
explanation contains, but also on what information the human explainee
understands from it. Theory of mind literature discusses the folk concepts that
humans use to understand and generalize behavior. We posit that folk concepts
of behavior provide us with a "language" that humans understand behavior with.
We use these folk concepts as a framework of social attribution by the human
explainee - the information constructs that humans are likely to comprehend
from explanations - by introducing a blueprint for an explanatory narrative
(Figure 1) that explains AI behavior with these constructs. We then demonstrate
that many XAI methods today can be mapped to folk concepts of behavior in a
qualitative evaluation. This allows us to uncover their failure modes that
prevent current methods from explaining successfully - i.e., the information
constructs that are missing for any given XAI method, and whose inclusion can
decrease the likelihood of misunderstanding AI behavior.
| [
{
"version": "v1",
"created": "Thu, 27 Jan 2022 00:19:41 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 07:37:35 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Feb 2023 17:34:07 GMT"
},
{
"version": "v4",
"created": "Sat, 11 Nov 2023 14:19:33 GMT"
},
{
"version": "v5",
"created": "Tue, 14 Nov 2023 11:32:11 GMT"
},
{
"version": "v6",
"created": "Wed, 15 Nov 2023 14:34:39 GMT"
}
] | 1,700,524,800,000 | [
[
"Jacovi",
"Alon",
""
],
[
"Bastings",
"Jasmijn",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Goldberg",
"Yoav",
""
],
[
"Filippova",
"Katja",
""
]
] |
2201.11331 | Heather Bowling | Da Chen Emily Koo, Heather Bowling, Kenneth Ashworth, David J. Heeger,
Stefano Pacifico | Epistemic AI platform accelerates innovation by connecting biomedical
knowledge | 12 pages, 2 main figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Epistemic AI accelerates biomedical discovery by finding hidden connections
in the network of biomedical knowledge. The Epistemic AI web-based software
platform embodies the concept of knowledge mapping, an interactive process that
relies on a knowledge graph in combination with natural language processing
(NLP), information retrieval, relevance feedback, and network analysis.
Knowledge mapping reduces information overload, prevents costly mistakes, and
minimizes missed opportunities in the research process. The platform combines
state-of-the-art methods for information extraction with machine learning,
artificial intelligence and network analysis. Starting from a single biological
entity, such as a gene or disease, users may: a) construct a map of connections
to that entity, b) map an entire domain of interest, and c) gain insight into
large biological networks of knowledge. Knowledge maps provide clarity and
organization, simplifying the day-to-day research processes.
| [
{
"version": "v1",
"created": "Thu, 27 Jan 2022 05:34:13 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jan 2022 05:10:12 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Mar 2022 21:53:06 GMT"
}
] | 1,649,030,400,000 | [
[
"Koo",
"Da Chen Emily",
""
],
[
"Bowling",
"Heather",
""
],
[
"Ashworth",
"Kenneth",
""
],
[
"Heeger",
"David J.",
""
],
[
"Pacifico",
"Stefano",
""
]
] |
2201.11404 | Jinke He | Jinke He, Miguel Suau, Hendrik Baier, Michael Kaisers, Frans A.
Oliehoek | Online Planning in POMDPs with Self-Improving Simulators | presented at IJCAI 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | How can we plan efficiently in a large and complex environment when the time
budget is limited? Given the original simulator of the environment, which may
be computationally very demanding, we propose to learn online an approximate
but much faster simulator that improves over time. To plan reliably and
efficiently while the approximate simulator is learning, we develop a method
that adaptively decides which simulator to use for every simulation, based on a
statistic that measures the accuracy of the approximate simulator. This allows
us to use the approximate simulator to replace the original simulator for
faster simulations when it is accurate enough under the current context, thus
trading off simulation speed and accuracy. Experimental results in two large
domains show that, when integrated with POMCP, our approach allows planning with
improving efficiency over time.
| [
{
"version": "v1",
"created": "Thu, 27 Jan 2022 09:41:59 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 23:13:21 GMT"
}
] | 1,670,976,000,000 | [
[
"He",
"Jinke",
""
],
[
"Suau",
"Miguel",
""
],
[
"Baier",
"Hendrik",
""
],
[
"Kaisers",
"Michael",
""
],
[
"Oliehoek",
"Frans A.",
""
]
] |
2201.11580 | Dongdong Bai | Qibin Zhou, Dongdong Bai, Junge Zhang, Fuqing Duan, Kaiqi Huang | DecisionHoldem: Safe Depth-Limited Solving With Diverse Opponents for
Imperfect-Information Games | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An imperfect-information game is a type of game with asymmetric information.
It is more common in real life than perfect-information games. Artificial
intelligence (AI) for imperfect-information games, such as poker, has made
considerable progress in recent years. The great success of
superhuman poker AI, such as Libratus and Deepstack, attracts researchers to
pay attention to poker research. However, the lack of open-source code limits
the development of Texas hold'em AI to some extent. This article introduces
DecisionHoldem, a high-level AI for heads-up no-limit Texas hold'em with safe
depth-limited subgame solving by considering possible ranges of opponent's
private hands to reduce the exploitability of the strategy. Experimental
results show that DecisionHoldem defeats the strongest openly available agent
in heads-up no-limit Texas hold'em poker, namely Slumbot, and a high-level
reproduction of Deepstack, viz, Openstack, by more than 730 mbb/h
(one-thousandth big blind per round) and 700 mbb/h. Moreover, we release the
source codes and tools of DecisionHoldem to promote AI development in
imperfect-information games.
| [
{
"version": "v1",
"created": "Thu, 27 Jan 2022 15:35:49 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 05:04:52 GMT"
}
] | 1,716,940,800,000 | [
[
"Zhou",
"Qibin",
""
],
[
"Bai",
"Dongdong",
""
],
[
"Zhang",
"Junge",
""
],
[
"Duan",
"Fuqing",
""
],
[
"Huang",
"Kaiqi",
""
]
] |
2201.11691 | Denis Kleyko | Dmitri A. Rachkovskij, Denis Kleyko | Recursive Binding for Similarity-Preserving Hypervector Representations
of Sequences | 8 pages, 4 figures, 2 tables. arXiv admin note: some overlap with
arXiv:2112.15475 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperdimensional computing (HDC), also known as vector symbolic architectures
(VSA), is a computing framework used within artificial intelligence and
cognitive computing that operates with distributed vector representations of
large fixed dimensionality. A critical step for designing the HDC/VSA solutions
is to obtain such representations from the input data. Here, we focus on
sequences and propose their transformation to distributed representations that
both preserve the similarity of identical sequence elements at nearby positions
and are equivariant to the sequence shift. These properties are enabled by
forming representations of sequence positions using recursive binding and
superposition operations. The proposed transformation was experimentally
investigated with symbolic strings used for modeling human perception of word
similarity. The obtained results are on a par with more sophisticated
approaches from the literature. The proposed transformation was designed for
the HDC/VSA model known as Fourier Holographic Reduced Representations.
However, it can be adapted to some other HDC/VSA models.
| [
{
"version": "v1",
"created": "Thu, 27 Jan 2022 17:41:28 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2022 03:31:47 GMT"
}
] | 1,652,832,000,000 | [
[
"Rachkovskij",
"Dmitri A.",
""
],
[
"Kleyko",
"Denis",
""
]
] |
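To make the binding and superposition operations above concrete, here is a minimal Python sketch of FHRR-style hypervectors, where components are complex phasors, binding is element-wise multiplication, and sequence positions are formed by recursively binding a fixed "shift" vector with itself. All names and dimensions are hypothetical, and this simplified sketch does not reproduce the authors' similarity-preserving construction.

```python
# Minimal FHRR-style sketch (assumptions, not the authors' implementation).
import numpy as np

D = 1024
rng = np.random.default_rng(0)

def random_phasor():
    return np.exp(1j * rng.uniform(0, 2 * np.pi, D))  # unit-magnitude components

def bind(a, b):
    return a * b                    # FHRR binding: element-wise multiplication

def similarity(a, b):
    return np.real(np.vdot(a, b)) / D

symbols = {c: random_phasor() for c in "abc"}
shift = random_phasor()

def encode(seq):
    # Position k is represented by the shift vector bound with itself k times.
    pos = np.ones(D, dtype=complex)
    out = np.zeros(D, dtype=complex)
    for ch in seq:
        out += bind(symbols[ch], pos)   # superpose element-position bindings
        pos = bind(pos, shift)
    return out

print(similarity(encode("abc"), encode("abc")))  # high: identical sequences
print(similarity(encode("abc"), encode("cba")))  # lower: different orderings
```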
2201.11802 | Jia Wang | Xizhe Wang, Ning Zhang, Jia Wang, Jing Ni, Xinzi Sun, John Zhang,
Zitao Liu, Yu Cao, Benyuan Liu | A Knowledge-Based Decision Support System for In Vitro Fertilization
Treatment | 8 pages, 2020 IEEE International Conference on E-health Networking,
Application & Services (HEALTHCOM). IEEE, 2021 | null | 10.1109/HEALTHCOM49281.2021.9398914 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In Vitro Fertilization (IVF) is the most widely used Assisted Reproductive
Technology (ART). IVF usually involves controlled ovarian stimulation, oocyte
retrieval, fertilization in the laboratory with subsequent embryo transfer. The
first two steps correspond with follicular phase of females and ovulation in
their menstrual cycle. Therefore, we refer to it as the treatment cycle in our
paper. The treatment cycle is crucial because the stimulation medications in
IVF treatment are applied directly on patients. In order to optimize the
stimulation effects and lower the side effects of the stimulation medications,
prompt treatment adjustments are in need. In addition, the quality and quantity
of the retrieved oocytes have a significant effect on the outcome of the
following procedures. To improve the IVF success rate, we propose a
knowledge-based decision support system that can provide medical advice on the
treatment protocol and medication adjustment for each patient visit during IVF
treatment cycle. Our system is efficient in data processing and light-weighted
which can be easily embedded into electronic medical record systems. Moreover,
an oocyte retrieval oriented evaluation demonstrates that our system performs
well in terms of accuracy of advice for the protocols and medications.
| [
{
"version": "v1",
"created": "Thu, 27 Jan 2022 20:30:52 GMT"
}
] | 1,643,587,200,000 | [
[
"Wang",
"Xizhe",
""
],
[
"Zhang",
"Ning",
""
],
[
"Wang",
"Jia",
""
],
[
"Ni",
"Jing",
""
],
[
"Sun",
"Xinzi",
""
],
[
"Zhang",
"John",
""
],
[
"Liu",
"Zitao",
""
],
[
"Cao",
"Yu",
""
],
[
"Liu",
"Benyuan",
""
]
] |
2201.12845 | Guilong Li | Guilong Li, Yixian Chen, Qionghua Liao, Zhaocheng He | Potential destination discovery for low predictability individuals based
on knowledge graph | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Travelers may travel to locations they have never visited, which we call
their potential destinations. Especially under very limited observation,
travelers tend to show random movement patterns and usually have a large number
of potential destinations, which makes them difficult to handle for mobility
prediction (e.g., destination prediction). In this paper, we develop a new
knowledge graph-based framework (PDPFKG) for potential destination discovery of
low predictability travelers by considering trip association relationships
between them. We first construct a trip knowledge graph (TKG) to model the trip
scenario by entities (e.g., travelers, destinations and time information) and
their relationships, in which we introduce the concept of private relationship
for complexity reduction. Then a modified knowledge graph embedding algorithm
is implemented to optimize the overall graph representation. Based on the trip
knowledge graph embedding model (TKGEM), the possible ranking of individuals'
unobserved destinations to be chosen in the future can be obtained by
calculating triples' distance. Empirically, PDPFKG is tested using an anonymous
vehicular dataset from 138 intersections equipped with video-based vehicle
detection systems in Xuancheng city, China. The results show that (i) the
proposed method significantly outperforms baseline methods, and (ii) its
outputs show strong consistency with traveler behavior in choosing potential
destinations. Finally, we provide a comprehensive discussion of the innovative
points of the methodology.
| [
{
"version": "v1",
"created": "Sun, 30 Jan 2022 15:26:12 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 14:20:56 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Sep 2022 03:38:21 GMT"
}
] | 1,663,718,400,000 | [
[
"Li",
"Guilong",
""
],
[
"Chen",
"Yixian",
""
],
[
"Liao",
"Qionghua",
""
],
[
"He",
"Zhaocheng",
""
]
] |
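The ranking-by-triple-distance idea described above can be illustrated with a small Python sketch using a TransE-style score. This is an assumed stand-in for the paper's modified embedding algorithm, and the entities, relation, and dimensions below are hypothetical.

```python
# Illustrative sketch: rank candidate destinations by triple distance
# in a TransE-style embedding space (not the paper's actual model).
import numpy as np

def triple_distance(h, r, t):
    return np.linalg.norm(h + r - t)   # smaller distance = more plausible triple

rng = np.random.default_rng(1)
dim = 32
traveler = rng.normal(size=dim)
visits = rng.normal(size=dim)          # relation "traveler visits destination"
destinations = {f"zone_{i}": rng.normal(size=dim) for i in range(5)}

ranked = sorted(destinations,
                key=lambda d: triple_distance(traveler, visits, destinations[d]))
print(ranked)   # unobserved destinations ordered from most to least plausible
```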
2201.12885 | Michael Cox | Michael Cox, Zahiduddin Mohammad, Sravya Kondrakunta, Ventaksamapth
Raja Gogineni, Dustin Dannenhauer and Othalia Larue | Computational Metacognition | 20 pages, 9 figures, 2 tables, Presented at The Ninth Advances in
Cognitive Systems (ACS) Conference 2021 (arXiv:2201.06134) | null | null | ACS2021/01 | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Computational metacognition represents a cognitive systems perspective on
high-order reasoning in integrated artificial systems that seeks to leverage
ideas from human metacognition and from metareasoning approaches in artificial
intelligence. The key characteristic is to declaratively represent and then
monitor traces of cognitive activity in an intelligent system in order to
manage the performance of cognition itself. Improvements in cognition then lead
to improvements in behavior and thus performance. We illustrate these concepts
with an agent implementation in a cognitive architecture called MIDCA and show
the value of metacognition in problem-solving. The results illustrate how
computational metacognition improves performance by changing cognition through
meta-level goal operations and learning.
| [
{
"version": "v1",
"created": "Sun, 30 Jan 2022 17:34:53 GMT"
}
] | 1,643,673,600,000 | [
[
"Cox",
"Michael",
""
],
[
"Mohammad",
"Zahiduddin",
""
],
[
"Kondrakunta",
"Sravya",
""
],
[
"Gogineni",
"Ventaksamapth Raja",
""
],
[
"Dannenhauer",
"Dustin",
""
],
[
"Larue",
"Othalia",
""
]
] |
2201.13169 | Sander Beckers | Sander Beckers | Causal Explanations and XAI | To appear in Causal Learning and Reasoning 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although standard Machine Learning models are optimized for making
predictions about observations, they are increasingly used for making
predictions about the results of actions. An important goal of Explainable
Artificial Intelligence (XAI) is to compensate for this mismatch by offering
explanations about the predictions of an ML-model which ensure that they are
reliably action-guiding. As action-guiding explanations are causal
explanations, the literature on this topic is starting to embrace insights from
the literature on causal models. Here I take a step further down this path by
formally defining the causal notions of sufficient explanations and
counterfactual explanations. I show how these notions relate to (and improve
upon) existing work, and motivate their adequacy by illustrating how different
explanations are action-guiding under different circumstances. Moreover, this
work is the first to offer a formal definition of actual causation that is
founded entirely in action-guiding explanations. Although the definitions are
motivated by a focus on XAI, the analysis of causal explanation and actual
causation applies in general. I also touch upon the significance of this work
for fairness in AI by showing how actual causation can be used to improve the
idea of path-specific counterfactual fairness.
| [
{
"version": "v1",
"created": "Mon, 31 Jan 2022 12:32:10 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Feb 2022 17:07:11 GMT"
}
] | 1,644,883,200,000 | [
[
"Beckers",
"Sander",
""
]
] |
2201.13176 | Maurizio Parton | Luca Pasqualini, Gianluca Amato, Marco Fantozzi, Rosa Gini, Alessandro
Marchetti, Carlo Metta, Francesco Morandin, Maurizio Parton | Score vs. Winrate in Score-Based Games: which Reward for Reinforcement
Learning? | Published at 2022 21st IEEE International Conference on Machine
Learning and Applications (ICMLA). This version (v2) is a major revision and
supersedes version v1 | null | 10.1109/ICMLA55696.2022.00099 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, the DeepMind algorithm AlphaZero has become the state of
the art to efficiently tackle perfect information two-player zero-sum games
with a win/lose outcome. However, when the win/lose outcome is decided by a
final score difference, AlphaZero may play score-suboptimal moves because all
winning final positions are equivalent from the win/lose outcome perspective.
This can be an issue, for instance when used for teaching, or when trying to
understand whether there is a better move. Moreover, there is the theoretical
quest for the perfect game. A naive approach would be training an
AlphaZero-like agent to predict score differences instead of win/lose outcomes.
Since the game of Go is deterministic, this should as well produce an
outcome-optimal play. However, it is a folklore belief that "this does not
work".
In this paper, we first provide empirical evidence for this belief. We then
give a theoretical interpretation of this suboptimality in a general
perfect-information two-player zero-sum game where the complexity of a game like Go is
replaced by the randomness of the environment. We show that an outcome-optimal
policy has a different preference for uncertainty when it is winning or losing.
In particular, when in a losing state, an outcome-optimal agent chooses actions
leading to a higher score variance. We then posit that when approximation is
involved, a deterministic game behaves like a nondeterministic game, where the
score variance is modeled by how uncertain the position is. We validate this
hypothesis in AlphaZero-like software with a human expert.
| [
{
"version": "v1",
"created": "Mon, 31 Jan 2022 12:38:02 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 15:24:15 GMT"
}
] | 1,673,308,800,000 | [
[
"Pasqualini",
"Luca",
""
],
[
"Amato",
"Gianluca",
""
],
[
"Fantozzi",
"Marco",
""
],
[
"Gini",
"Rosa",
""
],
[
"Marchetti",
"Alessandro",
""
],
[
"Metta",
"Carlo",
""
],
[
"Morandin",
"Francesco",
""
],
[
"Parton",
"Maurizio",
""
]
] |
2201.13427 | Armen Kostanyan | Armen Kostanyan, Arevik Harmandayan | Fuzzy Segmentations of a String | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article discusses a particular case of the data clustering problem,
where it is necessary to find groups of adjacent text segments of the
appropriate length that match a fuzzy pattern represented as a sequence of
fuzzy properties. To solve this problem, a heuristic algorithm for finding a
sufficiently large number of solutions is proposed. The key idea of the
proposed algorithm is the use of the prefix structure to track the process of
mapping text segments to fuzzy properties.
An important special case of the text segmentation problem is the fuzzy
string matching problem, when adjacent text segments have unit length and,
accordingly, the fuzzy pattern is a sequence of fuzzy properties of text
characters. It is proven that the heuristic segmentation algorithm in this case
finds all text segments that match the fuzzy pattern.
Finally, we consider the problem of a best segmentation of the entire text
based on a fuzzy pattern, which is solved using the dynamic programming method.
Keywords: fuzzy clustering, fuzzy string matching, approximate string
matching
| [
{
"version": "v1",
"created": "Mon, 31 Jan 2022 18:40:03 GMT"
}
] | 1,643,673,600,000 | [
[
"Kostanyan",
"Armen",
""
],
[
"Harmandayan",
"Arevik",
""
]
] |
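To make the unit-length special case concrete, the following is a short Python sketch of matching a fuzzy pattern, given as a sequence of character membership functions, against a text using the min t-norm. The membership functions and threshold are hypothetical, and this sketch is not the paper's prefix-structure algorithm.

```python
# Illustrative sketch of fuzzy string matching with the min t-norm.
def match_degree(window, pattern):
    # pattern: list of functions mapping a character to a membership degree in [0, 1]
    return min(mu(ch) for ch, mu in zip(window, pattern))

def find_matches(text, pattern, threshold=0.5):
    k = len(pattern)
    hits = []
    for i in range(len(text) - k + 1):
        d = match_degree(text[i:i + k], pattern)
        if d >= threshold:
            hits.append((i, d))
    return hits

is_digit = lambda c: 1.0 if c.isdigit() else 0.0
is_vowel_like = lambda c: 1.0 if c in "aeiou" else (0.6 if c in "yw" else 0.0)
print(find_matches("a1y9bc", [is_vowel_like, is_digit]))  # [(0, 1.0), (2, 0.6)]
```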
2202.00294 | Nir Oren | Nir Oren and Bruno Yun and Srdjan Vesic and Murilo Baptista | The Inverse Problem for Argumentation Gradual Semantics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gradual semantics with abstract argumentation provide each argument with a
score reflecting its acceptability, i.e. how "much" it is attacked by other
arguments. Many different gradual semantics have been proposed in the
literature, each following different principles and producing different
argument rankings. A sub-class of such semantics, the so-called weighted
semantics, takes, in addition to the graph structure, an initial set of weights
over the arguments as input, with these weights affecting the resultant
argument ranking. In this work, we consider the inverse problem over such
weighted semantics. That is, given an argumentation framework and a desired
argument ranking, we ask whether there exist initial weights such that a
particular semantics produces the given ranking. The contributions of this paper
are: (1) an algorithm to answer this problem, (2) a characterisation of the
properties that a gradual semantics must satisfy for the algorithm to operate,
and (3) an empirical evaluation of the proposed algorithm.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 2022 09:46:23 GMT"
}
] | 1,643,760,000,000 | [
[
"Oren",
"Nir",
""
],
[
"Yun",
"Bruno",
""
],
[
"Vesic",
"Srdjan",
""
],
[
"Baptista",
"Murilo",
""
]
] |
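For readers unfamiliar with weighted gradual semantics, the sketch below computes scores with the weighted h-categorizer semantics by fixed-point iteration, where each argument's score is its initial weight discounted by the scores of its attackers. This is one well-known example of such a semantics, not the paper's inverse-problem algorithm, and the weights and attack relation are hypothetical.

```python
# Illustrative sketch: weighted h-categorizer gradual semantics by iteration.
def h_categorizer(attacks, weights, iters=100):
    # attacks: dict mapping each argument to the list of its attackers
    scores = dict(weights)
    for _ in range(iters):
        scores = {
            a: weights[a] / (1.0 + sum(scores[b] for b in attacks.get(a, [])))
            for a in weights
        }
    return scores

weights = {"a": 1.0, "b": 0.8, "c": 0.5}      # hypothetical initial weights
attacks = {"a": ["b"], "b": ["c"], "c": []}   # b attacks a, c attacks b
print(h_categorizer(attacks, weights))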
2202.00332 | Stefan L\"udtke | Timon Felske, Stefan L\"udtke, Sebastian Bader, Thomas Kirste | Activity Recognition in Assembly Tasks by Bayesian Filtering in
Multi-Hypergraphs | Accepted for presentation at the 2nd GCLR workshop in conjunction
with AAAI 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study sensor-based human activity recognition in manual work processes
like assembly tasks. In such processes, the system states often have a rich
structure, involving object properties and relations. Thus, estimating the
hidden system state from sensor observations by recursive Bayesian filtering
can be very challenging, due to the combinatorial explosion in the number of
system states. To alleviate this problem, we propose an efficient Bayesian
filtering model for such processes. In our approach, system states are
represented by multi-hypergraphs, and the system dynamics is modeled by graph
rewriting rules. We show a preliminary concept that allows representing
distributions over multi-hypergraphs more compactly than by full enumeration,
and present an inference algorithm that works directly on this compact
representation. We demonstrate the applicability of the algorithm on a real
dataset.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 2022 11:01:09 GMT"
}
] | 1,643,760,000,000 | [
[
"Felske",
"Timon",
""
],
[
"Lüdtke",
"Stefan",
""
],
[
"Bader",
"Sebastian",
""
],
[
"Kirste",
"Thomas",
""
]
] |
2202.00531 | Bo Liu | Daoming Lyu, Bo Liu, and Jianshu Chen | PRIMA: Planner-Reasoner Inside a Multi-task Reasoning Agent | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of multi-task reasoning (MTR), where an agent can
solve multiple tasks via (first-order) logic reasoning. This capability is
essential for human-like intelligence due to its strong generalizability and
simplicity for handling multiple tasks. However, a major challenge in
developing effective MTR is the intrinsic conflict between reasoning capability
and efficiency. An MTR-capable agent must master a large set of "skills" to
tackle diverse tasks, but executing a particular task at the inference stage
requires only a small subset of immediately relevant skills. How can we
maintain broad reasoning capability and also efficient specific-task
performance? To address this problem, we propose a Planner-Reasoner framework
capable of state-of-the-art MTR capability and high efficiency. The Reasoner
models shareable (first-order) logic deduction rules, from which the Planner
selects a subset to compose into efficient reasoning paths. The entire model is
trained in an end-to-end manner using deep reinforcement learning, and
experimental studies over a variety of domains validate its effectiveness.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 2022 16:22:19 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Feb 2022 01:17:51 GMT"
}
] | 1,644,883,200,000 | [
[
"Lyu",
"Daoming",
""
],
[
"Liu",
"Bo",
""
],
[
"Chen",
"Jianshu",
""
]
] |
2202.00674 | Eduardo M. Vasconcelos | Eduardo M. Vasconcelos | Just Another Method to Compute MTTF from Continuous Time Markov Chain | 3 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Meantime to Failure is a statistic used to determine how much time a
system takes to reach one of its absorbing states. This statistic can be used
in most areas of knowledge. In engineering, for example, it can be used as a
measure of equipment reliability, and in business, as a measure of process
performance. This work presents a method to obtain the Meantime to Failure from
Continuous Time Markov Chain models. The method is intuitive and simpler to
implement, since it consists of solving a system of linear equations.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 2022 14:21:25 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 02:38:09 GMT"
}
] | 1,643,932,800,000 | [
[
"Vasconcelos",
"Eduardo M.",
""
]
] |
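For concreteness, the kind of linear-system computation described in the abstract above can be sketched in a few lines of Python: for a CTMC generator matrix Q, the expected times to absorption from the transient states solve -Q_TT m = 1. The generator matrix below is hypothetical and the sketch is not taken from the paper.

```python
# Illustrative sketch: mean time to absorption of a CTMC via a linear system.
import numpy as np

# Hypothetical 3-state chain: states 0 and 1 are transient, state 2 is absorbing.
Q = np.array([
    [-0.5,  0.3,  0.2],   # rates out of state 0 (row sums to zero)
    [ 0.1, -0.4,  0.3],   # rates out of state 1
    [ 0.0,  0.0,  0.0],   # absorbing state: no outgoing rates
])

transient = [0, 1]
Q_TT = Q[np.ix_(transient, transient)]          # sub-generator over transient states
mttf = np.linalg.solve(-Q_TT, np.ones(len(transient)))
print(mttf)   # expected time to absorption from each transient start state
```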
2202.01030 | Florian W\"orz | Tom Kr\"uger and Jan-Hendrik Lorenz and Florian W\"orz | Too much information: why CDCL solvers need to forget learned clauses | null | null | 10.1371/journal.pone.0272967 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Conflict-driven clause learning (CDCL) is a remarkably successful paradigm
for solving the satisfiability problem of propositional logic. Instead of a
simple depth-first backtracking approach, this kind of solver learns the reason
behind occurring conflicts in the form of additional clauses. However, despite
the enormous success of CDCL solvers, there is still only a limited
understanding of what influences the performance of these solvers in what way.
Considering different measures, this paper demonstrates, quite surprisingly,
that clause learning (without being able to get rid of some clauses) can not
only help the solver but can oftentimes deteriorate the solution process
dramatically. By conducting extensive empirical analysis, we furthermore find
that the runtime distributions of CDCL solvers are multimodal. This
multimodality can be seen as a reason for the deterioration phenomenon
described above. Simultaneously, it also gives an indication of why clause
learning in combination with clause deletion is virtually the de facto standard
of SAT solving, in spite of this phenomenon. As a final contribution, we show
that Weibull mixture distributions can accurately describe the multimodal
distributions. Thus, adding new clauses to a base instance has an inherent
effect of making runtimes long-tailed. This insight provides an explanation as
to why the technique of forgetting clauses is useful in CDCL solvers apart from
the optimization of unit propagation speed.
| [
{
"version": "v1",
"created": "Tue, 1 Feb 2022 10:16:04 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 18:48:48 GMT"
}
] | 1,665,532,800,000 | [
[
"Krüger",
"Tom",
""
],
[
"Lorenz",
"Jan-Hendrik",
""
],
[
"Wörz",
"Florian",
""
]
] |
2202.01040 | Jesse English | Marjorie McShane, Jesse English, Sergei Nirenburg | Knowledge Engineering in the Long Game of Artificial Intelligence: The
Case of Speech Acts | Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134) | null | null | ACS2021/04 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper describes principles and practices of knowledge engineering that
enable the development of holistic language-endowed intelligent agents that can
function across domains and applications, as well as expand their ontological
and lexical knowledge through lifelong learning. For illustration, we focus on
dialog act modeling, a task that has been widely pursued in linguistics,
cognitive modeling, and statistical natural language processing. We describe an
integrative approach grounded in the OntoAgent knowledge-centric cognitive
architecture and highlight the limitations of past approaches that isolate
dialog from other agent functionalities.
| [
{
"version": "v1",
"created": "Wed, 2 Feb 2022 14:05:12 GMT"
}
] | 1,643,846,400,000 | [
[
"McShane",
"Marjorie",
""
],
[
"English",
"Jesse",
""
],
[
"Nirenburg",
"Sergei",
""
]
] |
2202.01108 | Eli A. Meirom | Yuval Atzmon, Eli A. Meirom, Shie Mannor, Gal Chechik | Learning to reason about and to act on physical cascading events | null | Proceedings of the 40th International Conference on Machine
Learning, 2023 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning and interacting with dynamic environments is a fundamental problem
in AI, but it becomes extremely challenging when actions can trigger cascades
of cross-dependent events. We introduce a new supervised learning setup called
{\em Cascade} where an agent is shown a video of a physically simulated dynamic
scene, and is asked to intervene and trigger a cascade of events, such that the
system reaches a "counterfactual" goal. For instance, the agent may be asked to
"Make the blue ball hit the red one, by pushing the green ball". The agent
intervention is drawn from a continuous space, and cascades of events make the
dynamics highly non-linear.
We combine semantic tree search with an event-driven forward model and devise
an algorithm that learns to search in semantic trees in continuous spaces. We
demonstrate that our approach learns to effectively follow instructions to
intervene in previously unseen complex scenes. It can also reason about
alternative outcomes, when provided an observed cascade of events.
| [
{
"version": "v1",
"created": "Wed, 2 Feb 2022 16:17:42 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jul 2023 10:55:20 GMT"
}
] | 1,690,243,200,000 | [
[
"Atzmon",
"Yuval",
""
],
[
"Meirom",
"Eli A.",
""
],
[
"Mannor",
"Shie",
""
],
[
"Chechik",
"Gal",
""
]
] |
2202.01123 | Laura Giordano | Laura Giordano and Daniele Theseider Dupr\'e | An ASP approach for reasoning on neural networks under a finitely
many-valued semantics for weighted conditional knowledge bases | Paper presented at the 38th International Conference on Logic
Programming (ICLP 2022), 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted knowledge bases for description logics with typicality have been
recently considered under a "concept-wise" multipreference semantics (in both
the two-valued and fuzzy case), as the basis of a logical semantics of
MultiLayer Perceptrons (MLPs). In this paper we consider weighted conditional
ALC knowledge bases with typicality in the finitely many-valued case, through
three different semantic constructions. For the boolean fragment LC of ALC we
exploit ASP and "asprin" for reasoning with the concept-wise multipreference
entailment under a phi-coherent semantics, suitable to characterize the
stationary states of MLPs. As a proof of concept, we experiment with the proposed
approach for checking properties of trained MLPs.
The paper is under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Wed, 2 Feb 2022 16:30:28 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2022 15:55:44 GMT"
},
{
"version": "v3",
"created": "Tue, 17 May 2022 14:14:30 GMT"
}
] | 1,652,832,000,000 | [
[
"Giordano",
"Laura",
""
],
[
"Dupré",
"Daniele Theseider",
""
]
] |
2202.01256 | Xijun Li | Jianye Hao, Jiawen Lu, Xijun Li, Xialiang Tong, Xiang Xiang, Mingxuan
Yuan and Hankz Hankui Zhuo | Introduction to The Dynamic Pickup and Delivery Problem Benchmark --
ICAPS 2021 Competition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Dynamic Pickup and Delivery Problem (DPDP) is an essential problem within
the logistics domain. So far, research on this problem has mainly focused on
using artificial data which fails to reflect the complexity of real-world
problems. In this draft, we would like to introduce a new benchmark from real
business scenarios as well as a simulator supporting the dynamic evaluation.
The benchmark and simulator have been published and successfully supported the
ICAPS 2021 Dynamic Pickup and Delivery Problem competition, in which 152 teams
participated.
| [
{
"version": "v1",
"created": "Wed, 19 Jan 2022 00:52:16 GMT"
}
] | 1,643,932,800,000 | [
[
"Hao",
"Jianye",
""
],
[
"Lu",
"Jiawen",
""
],
[
"Li",
"Xijun",
""
],
[
"Tong",
"Xialiang",
""
],
[
"Xiang",
"Xiang",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Zhuo",
"Hankz Hankui",
""
]
] |
2202.01291 | Vladislav Dorofeev | Vladislav Dorofeev, Petro Trokhimchuk | Computer sciences and synthesis: retrospective and perspective | arXiv admin note: substantial text overlap with arXiv:2111.09762 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The problem of synthesis in computer sciences, including cybernetics,
artificial intelligence and system analysis, is analyzed. The main methods of
addressing this problem are discussed. Ways of searching for a universal method
for creating a universal synthetic science are presented. Polymetric analysis is
given as an example of such a universal method. The perspective for further
development of this research, including the application of the polymetric method
to the resolution of the main problems of computer sciences, is also analyzed.
| [
{
"version": "v1",
"created": "Wed, 26 Jan 2022 04:42:45 GMT"
}
] | 1,643,932,800,000 | [
[
"Dorofeev",
"Vladislav",
""
],
[
"Trokhimchuk",
"Petro",
""
]
] |
2202.01356 | Yingce Xia | Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang,
Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu | Direct Molecular Conformation Generation | Accepted to Transactions on Machine Learning Research (2022) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular conformation generation aims to generate three-dimensional
coordinates of all the atoms in a molecule and is an important task in
bioinformatics and pharmacology. Previous methods usually first predict the
interatomic distances, the gradients of interatomic distances or the local
structures (e.g., torsion angles) of a molecule, and then reconstruct its 3D
conformation. How to directly generate the conformation without the above
intermediate values is not fully explored. In this work, we propose a method
that directly predicts the coordinates of atoms: (1) the loss function is
invariant to roto-translation of coordinates and permutation of symmetric
atoms; (2) the newly proposed model adaptively aggregates the bond and atom
information and iteratively refines the coordinates of the generated
conformation. Our method achieves the best results on GEOM-QM9 and GEOM-Drugs
datasets. Further analysis shows that our generated conformations have
properties (e.g., HOMO-LUMO gap) that are closer to those of the ground-truth conformations. In
addition, our method improves molecular docking by providing better initial
conformations. All the results demonstrate the effectiveness of our method and
the great potential of the direct approach. The code is released at
https://github.com/DirectMolecularConfGen/DMCG
| [
{
"version": "v1",
"created": "Thu, 3 Feb 2022 01:01:58 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Dec 2022 01:29:54 GMT"
}
] | 1,672,617,600,000 | [
[
"Zhu",
"Jinhua",
""
],
[
"Xia",
"Yingce",
""
],
[
"Liu",
"Chang",
""
],
[
"Wu",
"Lijun",
""
],
[
"Xie",
"Shufang",
""
],
[
"Wang",
"Yusong",
""
],
[
"Wang",
"Tong",
""
],
[
"Qin",
"Tao",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Li",
"Houqiang",
""
],
[
"Liu",
"Haiguang",
""
],
[
"Liu",
"Tie-Yan",
""
]
] |
2202.02125 | Pramit Bhattacharyya Mr. | Pramit Bhattacharyya, Raghava Mutharaju | OntoSeer -- A Recommendation System to Improve the Quality of Ontologies | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Building an ontology is not only a time-consuming process, but it is also
confusing, especially for beginners and the inexperienced. Although ontology
developers can take the help of domain experts in building an ontology, they
are not readily available in several cases for a variety of reasons. Ontology
developers have to grapple with several questions related to the choice of
classes, properties, and the axioms that should be included. Apart from this,
there are aspects such as modularity and reusability that should be taken care
of. From among the thousands of publicly available ontologies and vocabularies
in repositories such as Linked Open Vocabularies (LOV) and BioPortal, it is
hard to know the terms (classes and properties) that can be reused in the
development of an ontology. A similar problem exists in implementing the right
set of ontology design patterns (ODPs) from among the several available.
Generally, ontology developers make use of their experience in handling these
issues, and the inexperienced ones have a hard time. In order to bridge this
gap, we propose a tool named OntoSeer, that monitors the ontology development
process and provides suggestions in real-time to improve the quality of the
ontology under development. It can provide suggestions on the naming
conventions to follow, vocabulary to reuse, ODPs to implement, and axioms to be
added to the ontology. OntoSeer has been implemented as a Prot\'eg\'e plug-in.
| [
{
"version": "v1",
"created": "Fri, 4 Feb 2022 13:28:13 GMT"
}
] | 1,644,192,000,000 | [
[
"Bhattacharyya",
"Pramit",
""
],
[
"Mutharaju",
"Raghava",
""
]
] |
2202.02519 | Yongjun Chen | Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, Caiming Xiong | Intent Contrastive Learning for Sequential Recommendation | null | null | 10.1145/3485447.3512090 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Users' interactions with items are driven by various intents (e.g., preparing
for holiday gifts, shopping for fishing equipment, etc.). However, users'
underlying intents are often unobserved/latent, making it challenging to
leverage such latent intents for Sequential recommendation (SR). To investigate
the benefits of latent intents and leverage them effectively for
recommendation, we propose Intent Contrastive Learning (ICL), a general learning
paradigm that leverages a latent intent variable into SR. The core idea is to
learn users' intent distribution functions from unlabeled user behavior
sequences and optimize SR models with contrastive self-supervised learning
(SSL) by considering the learned intents to improve recommendation.
Specifically, we introduce a latent variable to represent users' intents and
learn the distribution function of the latent variable via clustering. We
propose to leverage the learned intents into SR models via contrastive SSL,
which maximizes the agreement between a view of sequence and its corresponding
intent. The training is alternated between intent representation learning and
the SR model optimization steps within the generalized expectation-maximization
(EM) framework. Fusing user intent information into SR also improves model
robustness. Experiments conducted on four real-world datasets demonstrate the
superiority of the proposed learning paradigm, which improves performance, and
robustness against data sparsity and noisy interaction issues.
| [
{
"version": "v1",
"created": "Sat, 5 Feb 2022 09:24:13 GMT"
}
] | 1,644,278,400,000 | [
[
"Chen",
"Yongjun",
""
],
[
"Liu",
"Zhiwei",
""
],
[
"Li",
"Jia",
""
],
[
"McAuley",
"Julian",
""
],
[
"Xiong",
"Caiming",
""
]
] |
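The intent-conditioned contrastive objective described above can be sketched with a simple InfoNCE-style loss that pulls a sequence representation towards its assigned intent prototype (cluster centroid) and away from the other prototypes. This is an assumed form for illustration, not the authors' code; the shapes, temperature, and cluster assignments are hypothetical.

```python
# Illustrative sketch of an intent-prototype contrastive (InfoNCE-style) loss.
import torch
import torch.nn.functional as F

def intent_contrastive_loss(seq_repr, intent_prototypes, intent_ids, temperature=0.1):
    # seq_repr: (batch, dim) sequence encodings; intent_prototypes: (K, dim) centroids
    logits = F.normalize(seq_repr, dim=-1) @ F.normalize(intent_prototypes, dim=-1).T
    return F.cross_entropy(logits / temperature, intent_ids)

seq_repr = torch.randn(8, 64)
prototypes = torch.randn(5, 64)           # e.g., obtained by clustering sequence encodings
intent_ids = torch.randint(0, 5, (8,))    # assumed cluster assignment per sequence
print(intent_contrastive_loss(seq_repr, prototypes, intent_ids))
```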
2202.02698 | Yizhu Jiao | Wensen Jiang, Yizhu Jiao, Qingqin Wang, Chuanming Liang, Lijie Guo,
Yao Zhang, Zhijun Sun, Yun Xiong, Yangyong Zhu | Triangle Graph Interest Network for Click-through Rate Prediction | This paper is accepted by WSDM 2022. Source code:
https://github.com/alibaba/tgin | null | 10.1145/3488560.3498458 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Click-through rate prediction is a critical task in online advertising.
Currently, many existing methods attempt to extract user potential interests
from historical click behavior sequences. However, it is difficult to handle
sparse user behaviors or broaden interest exploration. Recently, some
researchers incorporate the item-item co-occurrence graph as an auxiliary. Due
to the elusiveness of user interests, those works still fail to determine the
real motivation of user click behaviors. Besides, those works are more biased
towards popular or similar commodities. They lack an effective mechanism to
break the diversity restrictions. In this paper, we point out two special
properties of triangles in the item-item graphs for recommendation systems:
Intra-triangle homophily and Inter-triangle heterophily. Based on this, we
propose a novel and effective framework named Triangle Graph Interest Network
(TGIN). For each clicked item in user behavior sequences, we introduce the
triangles in its neighborhood of the item-item graphs as a supplement. TGIN
regards these triangles as the basic units of user interests, which provide the
clues to capture the real motivation for a user clicking an item. We
characterize every click behavior by aggregating the information of several
interest units to alleviate the elusive motivation problem. The attention
mechanism determines users' preference for different interest units. By
selecting diverse and relative triangles, TGIN brings in novel and
serendipitous items to expand exploration opportunities of user interests.
Then, we aggregate the multi-level interests of historical behavior sequences
to improve CTR prediction. Extensive experiments on both public and industrial
datasets clearly verify the effectiveness of our framework.
| [
{
"version": "v1",
"created": "Sun, 6 Feb 2022 03:48:52 GMT"
}
] | 1,644,278,400,000 | [
[
"Jiang",
"Wensen",
""
],
[
"Jiao",
"Yizhu",
""
],
[
"Wang",
"Qingqin",
""
],
[
"Liang",
"Chuanming",
""
],
[
"Guo",
"Lijie",
""
],
[
"Zhang",
"Yao",
""
],
[
"Sun",
"Zhijun",
""
],
[
"Xiong",
"Yun",
""
],
[
"Zhu",
"Yangyong",
""
]
] |
2202.02734 | Scott McLachlan Dr | Scott McLachlan, Evangelia Kyrimi, Kudakwashe Dube, Norman Fenton and
Burkhard Schafer | The Self-Driving Car: Crossroads at the Bleeding Edge of Artificial
Intelligence and Law | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence (AI) features are increasingly being embedded in cars
and are central to the operation of self-driving cars (SDC). There is little or
no effort expended towards understanding and assessing the broad legal and
regulatory impact of the decisions made by AI in cars. A comprehensive
literature review was conducted to determine the perceived barriers, benefits
and facilitating factors of SDC in order to help us understand the suitability
and limitations of existing and proposed law and regulation. We found that: (1)
existing and proposed laws are largely based on claimed benefits of SDC that are still
mostly speculative and untested; (2) while publicly presented as issues of
assigning blame and identifying who pays when the SDC is involved in an
accident, the barriers broadly intersect with almost every area of society,
laws and regulations; and (3) new law and regulation are most frequently
identified as the primary factor for enabling SDC. Research on assessing the
impact of AI in SDC needs to be broadened beyond negligence and liability to
encompass barriers, benefits and facilitating factors identified in this paper.
Results of this paper are significant in that they point to the need for deeper
comprehension of the broad impact of all existing law and regulations on the
introduction of SDC technology, with a focus on identifying only those areas
truly requiring ongoing legislative attention.
| [
{
"version": "v1",
"created": "Sun, 6 Feb 2022 08:38:30 GMT"
}
] | 1,644,278,400,000 | [
[
"McLachlan",
"Scott",
""
],
[
"Kyrimi",
"Evangelia",
""
],
[
"Dube",
"Kudakwashe",
""
],
[
"Fenton",
"Norman",
""
],
[
"Schafer",
"Burkhard",
""
]
] |
2202.02879 | Auriol Degbelo | Shivam Gupta, Auriol Degbelo | An Empirical Analysis of AI Contributions to Sustainable Cities (SDG11) | to appear in Mazzi, F. and Floridi, L. (eds) The Ethics of Artificial
Intelligence for the Sustainable Development Goals | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) presents opportunities to develop tools and
techniques for addressing some of the major global challenges and deliver
solutions with significant social and economic impacts. The application of AI
has far-reaching implications for the 17 Sustainable Development Goals (SDGs)
in general, and sustainable urban development in particular. However, existing
attempts to understand and use the opportunities offered by AI for SDG 11 have
been explored sparsely, and the shortage of empirical evidence about the
practical application of AI remains. In this chapter, we analyze the
contribution of AI to support the progress of SDG 11 (Sustainable Cities and
Communities). We address the knowledge gap by empirically analyzing the AI
systems (N = 29) from the AIxSDG database and the Community Research and
Development Information Service (CORDIS) database. Our analysis revealed that
AI systems have indeed contributed to advancing sustainable cities in several
ways (e.g., waste management, air quality monitoring, disaster response
management, transportation management), but many projects are still working for
citizens and not with them. This snapshot of AI's impact on SDG11 is inherently
partial, yet useful to advance our understanding as we move towards more mature
systems and research on the impact of AI systems for social good.
| [
{
"version": "v1",
"created": "Sun, 6 Feb 2022 22:30:23 GMT"
}
] | 1,644,278,400,000 | [
[
"Gupta",
"Shivam",
""
],
[
"Degbelo",
"Auriol",
""
]
] |
2202.02886 | Lin Guan | Lin Guan, Sarath Sreedharan, Subbarao Kambhampati | Leveraging Approximate Symbolic Models for Reinforcement Learning via
Skill Diversity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating reinforcement learning (RL) agents that are capable of accepting and
leveraging task-specific knowledge from humans has been long identified as a
possible strategy for developing scalable approaches for solving long-horizon
problems. While previous works have looked at the possibility of using symbolic
models along with RL approaches, they tend to assume that the high-level action
models are executable at low level and the fluents can exclusively characterize
all desirable MDP states. Symbolic models of real world tasks are however often
incomplete. To this end, we introduce Approximate Symbolic-Model Guided
Reinforcement Learning, wherein we will formalize the relationship between the
symbolic model and the underlying MDP that will allow us to characterize the
incompleteness of the symbolic model. We will use these models to extract
high-level landmarks that will be used to decompose the task. At the low level,
we learn a set of diverse policies for each possible task subgoal identified by
the landmark, which are then stitched together. We evaluate our system by
testing on three different benchmark domains and show how even with incomplete
symbolic model information, our approach is able to discover the task structure
and efficiently guide the RL agent towards the goal.
| [
{
"version": "v1",
"created": "Sun, 6 Feb 2022 23:20:30 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Apr 2022 00:28:03 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2022 21:25:53 GMT"
}
] | 1,655,856,000,000 | [
[
"Guan",
"Lin",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2202.03047 | Kayalvizhi S | Kayalvizhi S and Thenmozhi D | Data set creation and empirical analysis for detecting signs of
depression from social media postings | 12 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Depression is a common mental illness that has to be detected and treated at
an early stage to avoid serious consequences. There are many methods and
modalities for detecting depression that involve physical examination of the
individual. However, diagnosing mental health using social media data is
more effective as it avoids such physical examinations. Also, since people
express their emotions well on social media, it is desirable to diagnose their
mental health using social media data. Though there are many existing systems
that detect mental illness of a person by analysing their social media data,
detecting the level of depression is also important for further treatment.
Thus, in this research, we developed a gold standard data set that detects the
levels of depression as `not depressed', `moderately depressed' and `severely
depressed' from the social media postings. Traditional learning algorithms were
employed on this data set and an empirical analysis was presented in this
paper. Data augmentation technique was applied to overcome the data imbalance.
Among the several variations that are implemented, the model with Word2Vec
vectorizer and Random Forest classifier on augmented data outperforms the other
variations with a score of 0.877 for both accuracy and F1 measure.
| [
{
"version": "v1",
"created": "Mon, 7 Feb 2022 10:24:33 GMT"
}
] | 1,644,278,400,000 | [
[
"S",
"Kayalvizhi",
""
],
[
"D",
"Thenmozhi",
""
]
] |
2202.03057 | Thomas Pierrot | Thomas Pierrot, Guillaume Richard, Karim Beguir, Antoine Cully | Multi-Objective Quality Diversity Optimization | null | null | 10.1145/3512290.3528823 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this work, we consider the problem of Quality-Diversity (QD) optimization
with multiple objectives. QD algorithms have been proposed to search for a
large collection of both diverse and high-performing solutions instead of a
single set of local optima. Thriving for diversity was shown to be useful in
many industrial and robotics applications. On the other hand, most real-life
problems exhibit several potentially antagonist objectives to be optimized.
Hence being able to optimize for multiple objectives with an appropriate
technique while thriving for diversity is important to many fields. Here, we
propose an extension of the MAP-Elites algorithm in the multi-objective
setting: Multi-Objective MAP-Elites (MOME). Namely, it combines the diversity
inherited from the MAP-Elites grid algorithm with the strength of
multi-objective optimizations by filling each cell with a Pareto Front. As
such, it allows to extract diverse solutions in the descriptor space while
exploring different compromises between objectives. We evaluate our method on
several tasks, from standard optimization problems to robotics simulations. Our
experimental evaluation shows the ability of MOME to provide diverse solutions
while providing global performances similar to standard multi-objective
algorithms.
| [
{
"version": "v1",
"created": "Mon, 7 Feb 2022 10:48:28 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2022 08:06:59 GMT"
}
] | 1,654,041,600,000 | [
[
"Pierrot",
"Thomas",
""
],
[
"Richard",
"Guillaume",
""
],
[
"Beguir",
"Karim",
""
],
[
"Cully",
"Antoine",
""
]
] |
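The core mechanism described above, keeping a Pareto front in each MAP-Elites cell instead of a single elite, can be illustrated with a short Python sketch of the dominance test and cell update. This is an assumed reading of the method for illustration only; solution names and fitness values are hypothetical.

```python
# Illustrative sketch: Pareto dominance and a cell that stores a Pareto front.
def dominates(f1, f2):
    # f1 dominates f2 if it is no worse on all objectives and better on at least one
    return all(a >= b for a, b in zip(f1, f2)) and any(a > b for a, b in zip(f1, f2))

def insert_into_cell(cell, candidate):
    # cell: list of (solution, fitness-tuple) pairs forming a Pareto front
    sol, fit = candidate
    if any(dominates(f, fit) for _, f in cell):
        return cell                                    # dominated: discard candidate
    kept = [(s, f) for s, f in cell if not dominates(fit, f)]
    return kept + [candidate]

cell = []
cell = insert_into_cell(cell, ("x1", (1.0, 2.0)))
cell = insert_into_cell(cell, ("x2", (2.0, 1.0)))      # incomparable: both kept
cell = insert_into_cell(cell, ("x3", (0.5, 0.5)))      # dominated: discarded
print([s for s, _ in cell])                            # ['x1', 'x2']
```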
2202.03153 | Soumil Rathi | Soumil Rathi | Approaches to Artificial General Intelligence: An Analysis | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper is an analysis of the different methods proposed to achieve AGI,
including Human Brain Emulation, AIXI and Integrated Cognitive Architecture.
First, the definition of AGI as used in this paper has been defined, and its
requirements have been stated. For each proposed method mentioned, the method
in question was summarized and its key processes were detailed, showcasing how
it functioned. Then, each method listed was analyzed, taking various factors
into consideration, such as technological requirements, computational ability,
and adequacy to the requirements. It was concluded that while there are various
methods to achieve AGI that could work, such as Human Brain Emulation and
Integrated Cognitive Architectures, the most promising method to achieve AGI is
Integrated Cognitive Architectures. This is because Human Brain Emulation was
found to require scanning technologies that will most likely not be available
until the 2030s, making it unlikely to be created before then. Moreover,
Integrated Cognitive Architectures has reduced computational requirements and a
suitable functionality for General Intelligence, making it the most likely way
to achieve AGI.
| [
{
"version": "v1",
"created": "Sat, 29 Jan 2022 05:21:09 GMT"
}
] | 1,644,278,400,000 | [
[
"Rathi",
"Soumil",
""
]
] |
2202.03188 | Anu Myne | Anu K. Myne, Kevin J. Leahy, Ryan J. Soklaski | Knowledge-Integrated Informed AI for National Security | null | null | null | Technical Report TR-1272 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The state of artificial intelligence technology has a rich history that dates
back decades and includes two fall-outs before the explosive resurgence of
today, which is credited largely to data-driven techniques. While AI technology
has and continues to become increasingly mainstream with impact across domains
and industries, it's not without several drawbacks, weaknesses, and potential
to cause undesired effects. AI techniques are numerous with many approaches and
variants, but they can be classified simply based on the degree of knowledge
they capture and how much data they require; two broad categories emerge as
prominent across AI to date: (1) techniques that are primarily, and often
solely, data-driven while leveraging little to no knowledge and (2) techniques
that primarily leverage knowledge and depend less on data. Now, a third
category is starting to emerge that leverages both data and knowledge, that
some refer to as "informed AI." This third category can be a game changer
within the national security domain where there is ample scientific and
domain-specific knowledge that stands ready to be leveraged, and where purely
data-driven AI can lead to serious unwanted consequences.
This report shares findings from a thorough exploration of AI approaches that
exploit data as well as principled and/or practical knowledge, which we refer
to as "knowledge-integrated informed AI." Specifically, we review illuminating
examples of knowledge integrated in deep learning and reinforcement learning
pipelines, taking note of the performance gains they provide. We also discuss
an apparent trade space across variants of knowledge-integrated informed AI,
along with observed and prominent issues that suggest worthwhile future
research directions. Most importantly, this report suggests how the advantages
of knowledge-integrated informed AI stand to benefit the national security
domain.
| [
{
"version": "v1",
"created": "Fri, 4 Feb 2022 11:51:44 GMT"
}
] | 1,644,278,400,000 | [
[
"Myne",
"Anu K.",
""
],
[
"Leahy",
"Kevin J.",
""
],
[
"Soklaski",
"Ryan J.",
""
]
] |
2202.03192 | Vacslav Glukhov | Vacslav Glukhov | Reward is not enough: can we liberate AI from the reinforcement learning
paradigm? | 25 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I present arguments against the hypothesis put forward by Silver, Singh,
Precup, and Sutton (
https://www.sciencedirect.com/science/article/pii/S0004370221000862 ) : reward
maximization is not enough to explain many activities associated with natural
and artificial intelligence including knowledge, learning, perception, social
intelligence, evolution, language, generalisation and imitation. I show such
reductio ad lucrum has its intellectual origins in the political economy of
Homo economicus and substantially overlaps with the radical version of
behaviourism. I show why the reinforcement learning paradigm, despite its
demonstrable usefulness in some practical applications, is an incomplete
framework for intelligence -- natural and artificial. Complexities of
intelligent behaviour are not simply second-order complications on top of
reward maximisation. This fact has profound implications for the development of
practically usable, smart, safe and robust artificially intelligent agents.
| [
{
"version": "v1",
"created": "Thu, 3 Feb 2022 18:31:48 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 19:04:08 GMT"
}
] | 1,644,451,200,000 | [
[
"Glukhov",
"Vacslav",
""
]
] |
2202.03196 | Kai Sauerwald | Kai Sauerwald and Gabriele Kern-Isberner and Christoph Beierle | A Conditional Perspective on the Logic of Iterated Belief Contraction | This is a largely extended version of the following conference paper:
Kai Sauerwald, Gabriele Kern-Isberner, Christoph Beierle: A Conditional
Perspective for Iterated Belief Contraction. ECAI 2020: 889-896
https://doi.org/10.3233/FAIA200180 (see also arXiv:1911.08833 ) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this article, we consider iteration principles for contraction, with the
goal of identifying properties for contractions that respect conditional
beliefs. Therefore, we investigate and evaluate four groups of iteration
principles for contraction which consider the dynamics of conditional beliefs.
For all these principles, we provide semantic characterization theorems and
provide formulations by postulates which highlight how the change of beliefs
and of conditional beliefs is constrained, whenever that is possible. The first
group is similar to the syntactic Darwiche-Pearl postulates. As a second group,
we consider semantic postulates for iteration of contraction by Chopra, Ghose,
Meyer and Wong, and by Konieczny and Pino P\'erez, respectively, and we provide
novel syntactic counterparts. Third, we propose a contraction analogue of the
independence condition by Jin and Thielscher. For the fourth group, we consider
natural and moderate contraction by Nayak. Methodically, we make use of
conditionals for contractions, so-called contractionals and furthermore, we
propose and employ the novel notion of $ \alpha $-equivalence for formulating
some of the new postulates.
| [
{
"version": "v1",
"created": "Fri, 4 Feb 2022 10:33:19 GMT"
}
] | 1,644,278,400,000 | [
[
"Sauerwald",
"Kai",
""
],
[
"Kern-Isberner",
"Gabriele",
""
],
[
"Beierle",
"Christoph",
""
]
] |
2202.03246 | Piera Riccio | Piera Riccio, Kristin Bergaust, Boel Christensen-Scheel, Juan-Carlos
De Martin, Maria A. Zuluaga, Stefano Nichele | AI-based artistic representation of emotions from EEG signals: a
discussion on fairness, inclusion, and aesthetics | Accepted to the Politics of the Machines conference 2021 (POM Berlin
2021) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Artificial Intelligence (AI) technologies are being progressively
developed, artists and researchers are investigating their role in artistic
practices. In this work, we present an AI-based Brain-Computer Interface (BCI)
in which humans and machines interact to express feelings artistically. This
system and its production of images give opportunities to reflect on the
complexities and range of human emotions and their expressions. In this
discussion, we seek to understand the dynamics of this interaction to reach
better co-existence in fairness, inclusion, and aesthetics.
| [
{
"version": "v1",
"created": "Mon, 7 Feb 2022 14:51:02 GMT"
}
] | 1,644,278,400,000 | [
[
"Riccio",
"Piera",
""
],
[
"Bergaust",
"Kristin",
""
],
[
"Christensen-Scheel",
"Boel",
""
],
[
"De Martin",
"Juan-Carlos",
""
],
[
"Zuluaga",
"Maria A.",
""
],
[
"Nichele",
"Stefano",
""
]
] |
2202.03520 | Mark Dukes Dr | Mark Dukes | Stakeholder utility measures for declarative processes and their use in
process comparisons | null | null | 10.1109/TCSS.2021.3092285 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for calculating and analyzing stakeholder utilities of
processes that arise in, but are not limited to, the social sciences. These
areas include business process analysis, healthcare workflow analysis and
policy process analysis. This method is quite general and applicable to any
situation in which declarative-type constraints of a modal and/or temporal
nature play a part.
A declarative process is a process in which activities may freely happen
while respecting a set of constraints. For such a process, anything may happen
so long as it is not explicitly forbidden. Declarative processes have been used
and studied as models of business and healthcare workflows by several authors.
In considering a declarative process as a model of some system it is natural to
consider how the process behaves with respect to stakeholders. We derive a
measure for stakeholder utility that can be applied in a very general setting.
This derivation is achieved by listing a collection of properties which we argue
such a stakeholder utility function ought to satisfy, and then using these to
show a very specific form must hold for such a utility. The utility measure
depends on the set of unique traces of the declarative process, and calculating
this set requires a combinatorial analysis of the declarative graph that
represents the process.
This builds on previous work of the author wherein the combinatorial
diversity metrics for declarative processes were derived for use in policy
process analysis. The collection of stakeholder utilities can themselves then
be used to form a metric with which we can compare different declarative
processes to one another. These are illustrated using several examples of
declarative processes that already exist in the literature.
| [
{
"version": "v1",
"created": "Mon, 7 Feb 2022 21:11:13 GMT"
}
] | 1,644,364,800,000 | [
[
"Dukes",
"Mark",
""
]
] |