id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2109.10129 | Simon Ståhlberg | Simon Ståhlberg, Blai Bonet, Hector Geffner | Learning General Optimal Policies with Graph Neural Networks: Expressive
Power, Transparency, and Limits | Proceedings of the 32nd International Conference on Automated
Planning and Scheduling (ICAPS-22) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | It has been recently shown that general policies for many classical planning
domains can be expressed and learned in terms of a pool of features defined
from the domain predicates using a description logic grammar. At the same time,
most description logics correspond to a fragment of $k$-variable counting logic
($C_k$) for $k=2$, which has been shown to provide a tight characterization of
the expressive power of graph neural networks. In this work, we make use of
these results to understand the power and limits of using graph neural networks
(GNNs) for learning optimal general policies over a number of tractable
planning domains where such policies are known to exist. For this, we train a
simple GNN in a supervised manner to approximate the optimal value function
$V^{*}(s)$ of a number of sample states $s$. As predicted by the theory, it is
observed that general optimal policies are obtained in domains where general
optimal value functions can be defined with $C_2$ features but not in those
requiring more expressive $C_3$ features. In addition, it is observed that the
features learned are in close correspondence with the features needed to
express $V^{*}$ in closed form. The theory and the analysis of the domains let
us understand the features that are actually learned as well as those that
cannot be learned in this way, and let us move in a principled manner from a
combinatorial optimization approach to learning general policies to a
potentially more robust and scalable approach based on deep learning.
| [
{
"version": "v1",
"created": "Tue, 21 Sep 2021 12:22:29 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 13:52:20 GMT"
}
] | 1,652,054,400,000 | [
[
"Ståhlberg",
"Simon",
""
],
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
] |
2109.10285 | Youssef Achenchabe | Youssef Achenchabe, Alexis Bondu, Antoine Cornuéjols, Vincent
Lemaire | Early and Revocable Time Series Classification | submitted to ACML'21 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many approaches have been proposed for early classification of time series in
light of its significance in a wide range of applications including healthcare,
transportation and finance. Until now, the early classification problem has
been dealt with by considering only irrevocable decisions. This paper introduces
a new problem called early and revocable time series classification, where the
decision maker can revoke its earlier decisions based on the new available
measurements. In order to formalize and tackle this problem, we propose a new
cost-based framework and derive two new approaches from it. The first approach
does not consider explicitly the cost of changing decision, while the second one
does. Extensive experiments are conducted to evaluate these approaches on a
large benchmark of real datasets. The empirical results obtained convincingly
show (i) that the ability of revoking decisions significantly improves
performance over the irrevocable regime, and (ii) that taking into account the
cost of changing decision brings even better results in general.
Keywords: revocable decisions, cost estimation, online decision making
| [
{
"version": "v1",
"created": "Tue, 21 Sep 2021 16:09:11 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 16:16:49 GMT"
}
] | 1,632,355,200,000 | [
[
"Achenchabe",
"Youssef",
""
],
[
"Bondu",
"Alexis",
""
],
[
"Cornuéjols",
"Antoine",
""
],
[
"Lemaire",
"Vincent",
""
]
] |
2109.10547 | Fu Sun | Fu Sun, Feng-Lin Li, Ruize Wang, Qianglong Chen, Xingyi Cheng, Ji
Zhang | K-AID: Enhancing Pre-trained Language Models with Domain Knowledge for
Question Answering | CIKM 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Knowledge enhanced pre-trained language models (K-PLMs) are shown to be
effective for many public tasks in the literature but few of them have been
successfully applied in practice. To address this problem, we propose K-AID, a
systematic approach that includes a low-cost knowledge acquisition process for
acquiring domain knowledge, an effective knowledge infusion module for
improving model performance, and a knowledge distillation component for
reducing the model size and deploying K-PLMs on resource-restricted devices
(e.g., CPU) for real-world application. Importantly, instead of capturing
entity knowledge like the majority of existing K-PLMs, our approach captures
relational knowledge, which contributes to improving sentence-level text
classification and text matching tasks that play a key role in question
answering (QA). We conducted a set of experiments on five text classification
tasks and three text matching tasks from three domains, namely E-commerce,
Government, and Film&TV, and performed online A/B tests in E-commerce.
Experimental results show that our approach is able to achieve substantial
improvement on sentence-level question answering tasks and bring beneficial
business value in industrial settings.
| [
{
"version": "v1",
"created": "Wed, 22 Sep 2021 07:19:08 GMT"
}
] | 1,632,355,200,000 | [
[
"Sun",
"Fu",
""
],
[
"Li",
"Feng-Lin",
""
],
[
"Wang",
"Ruize",
""
],
[
"Chen",
"Qianglong",
""
],
[
"Cheng",
"Xingyi",
""
],
[
"Zhang",
"Ji",
""
]
] |
2109.10633 | Krysia Broda | Krysia Broda and Fariba Sadri and Stephen Butler | Reactive Answer Set Programming | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logic Production System (LPS) is a logic-based framework for modelling
reactive behaviour. Based on abductive logic programming, it combines reactive
rules with logic programs, a database and a causal theory that specifies
transitions between the states of the database. This paper proposes a
systematic mapping of the Kernel of this framework (called KELPS) into an
answer set program (ASP). For this purpose a new variant of KELPS with finite
models, called $n$-distance KELPS, is introduced. A formal definition of the
mapping from this $n$-distance KELPS to ASP is given and proven sound and
complete. The Answer Set Programming paradigm allows one to capture additional
behaviours to the basic reactivity of KELPS, in particular proactive,
preemptive and prospective behaviours. These are all discussed and illustrated
with examples. Then a hybrid framework is proposed that integrates KELPS and
ASP, allowing the strengths of both paradigms to be combined. Under consideration
in Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Wed, 22 Sep 2021 10:10:14 GMT"
}
] | 1,632,355,200,000 | [
[
"Broda",
"Krysia",
""
],
[
"Sadri",
"Fariba",
""
],
[
"Butler",
"Stephen",
""
]
] |
2109.10637 | Wenjun Li | Susobhan Ghosh, Pradeep Varakantham, Aniket Bhatkhande, Tamanna Ahmad,
Anish Andheria, Wenjun Li, Aparna Taneja, Divy Thakkar, Milind Tambe | Facilitating human-wildlife cohabitation through conflict prediction | 7 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With increasing world population and expanded use of forests as cohabited
regions, interactions and conflicts with wildlife are increasing, leading to
large-scale loss of lives (animal and human) and livelihoods (economic). While
community knowledge is valuable, forest officials and conservation
organisations can greatly benefit from predictive analysis of human-wildlife
conflict, leading to targeted interventions that can potentially help save
lives and livelihoods. However, the problem of prediction is a complex
socio-technical problem in the context of limited data in low-resource regions.
Identifying the "right" features to make accurate predictions of conflicts at
the required spatial granularity using a sparse conflict training dataset is
the key challenge that we address in this paper. Specifically, we do an
illustrative case study on human-wildlife conflicts in the Bramhapuri Forest
Division in Chandrapur, Maharashtra, India. Most existing work has considered
human-wildlife conflicts in protected areas and to the best of our knowledge,
this is the first effort at prediction of human-wildlife conflicts in
unprotected areas and using those predictions for deploying interventions on
the ground.
| [
{
"version": "v1",
"created": "Wed, 22 Sep 2021 10:30:06 GMT"
}
] | 1,632,355,200,000 | [
[
"Ghosh",
"Susobhan",
""
],
[
"Varakantham",
"Pradeep",
""
],
[
"Bhatkhande",
"Aniket",
""
],
[
"Ahmad",
"Tamanna",
""
],
[
"Andheria",
"Anish",
""
],
[
"Li",
"Wenjun",
""
],
[
"Taneja",
"Aparna",
""
],
[
"Thakkar",
"Divy",
""
],
[
"Tambe",
"Milind",
""
]
] |
2109.10716 | Chiara Ghidini | Chiara Ghidini, Marco Rospocher, Luciano Serafini | A formalisation of BPMN in Description Logics | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper we present a textual description, in terms of Description
Logics, of the BPMN Ontology, which provides a clear semantic formalisation of
the structural components of the Business Process Modelling Notation (BPMN),
based on the latest stable BPMN specifications from OMG [BPMN Version 1.1 --
January 2008]. The development of the ontology was guided by the description of
the complete set of BPMN Element Attributes and Types contained in Annex B of
the BPMN specifications.
| [
{
"version": "v1",
"created": "Wed, 22 Sep 2021 13:17:28 GMT"
}
] | 1,632,355,200,000 | [
[
"Ghidini",
"Chiara",
""
],
[
"Rospocher",
"Marco",
""
],
[
"Serafini",
"Luciano",
""
]
] |
2109.11223 | Matteo Martinelli | Marco Lippi, Stefano Mariani, Matteo Martinelli and Franco Zambonelli | Individual and Collective Autonomous Development | 8 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The increasing complexity and unpredictability of many ICT scenarios let us
envision that future systems will have to dynamically learn how to act and
adapt to face evolving situations with little or no a priori knowledge, both at
the level of individual components and at the collective level. In other words,
such systems should become able to autonomously develop models of themselves
and of their environment. Autonomous development includes: learning models of
own capabilities; learning how to act purposefully towards the achievement of
specific goals; and learning how to act collectively, i.e., accounting for the
presence of others. In this paper, we introduce the vision of autonomous
development in ICT systems, by framing its key concepts and by illustrating
suitable application domains. Then, we overview the many research areas that
are contributing or can potentially contribute to the realization of the
vision, and identify some key research challenges.
| [
{
"version": "v1",
"created": "Thu, 23 Sep 2021 09:11:24 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Oct 2021 10:35:45 GMT"
}
] | 1,633,392,000,000 | [
[
"Lippi",
"Marco",
""
],
[
"Mariani",
"Stefano",
""
],
[
"Martinelli",
"Matteo",
""
],
[
"Zambonelli",
"Franco",
""
]
] |
2109.11668 | Malek Mouhoub | Malek Mouhoub, Hamad Al Marri and Eisa Alanazi | Exact Learning of Qualitative Constraint Networks from Membership
Queries | 18 pages, 8 figures and 8 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A Qualitative Constraint Network (QCN) is a constraint graph for representing
problems under qualitative temporal and spatial relations, among others. More
formally, a QCN includes a set of entities, and a list of qualitative
constraints defining the possible scenarios between these entities. These
latter constraints are expressed as disjunctions of binary relations capturing
the (incomplete) knowledge between the involved entities. QCNs are very
effective in representing a wide variety of real-world applications, including
scheduling and planning, configuration and Geographic Information Systems
(GIS). It is however challenging to elicit, from the user, the QCN representing
a given problem. To overcome this difficulty in practice, we propose a new
algorithm for learning, through membership queries, a QCN from a non-expert. In
this paper, membership queries are asked in order to elicit temporal or spatial
relationships between pairs of temporal or spatial entities. In order to
improve the time performance of our learning algorithm in practice, constraint
propagation, through transitive closure, as well as ordering heuristics, are
enforced. The goal here is to reduce the number of membership queries needed to
reach the target QCN. In order to assess the practical effect of constraint
propagation and ordering heuristics, we conducted several experiments on
randomly generated temporal and spatial constraint network instances. The
results of the experiments are very encouraging and promising.
| [
{
"version": "v1",
"created": "Thu, 23 Sep 2021 22:25:37 GMT"
}
] | 1,632,700,800,000 | [
[
"Mouhoub",
"Malek",
""
],
[
"Marri",
"Hamad Al",
""
],
[
"Alanazi",
"Eisa",
""
]
] |
2109.12179 | Malek Mouhoub | Sultan Ahmed and Malek Mouhoub | Constrained Optimization with Qualitative Preferences | 27 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Conditional Preference Network (CP-net) graphically represents a user's
qualitative and conditional preference statements under the ceteris paribus
interpretation. The constrained CP-net extends the CP-net with a set
of constraints. The existing algorithms for solving the constrained CP-net
require the expensive dominance testing operation. We propose three approaches
to tackle this challenge. In our first solution, we alter the constrained
CP-net by eliciting additional relative importance statements between
variables, in order to have a total order over the outcomes. We call this new
model, the constrained Relative Importance Network (constrained CPR-net).
Consequently, we show that the constrained CPR-net has a single optimal
outcome (assuming the constrained CPR-net is consistent) that we can obtain
without dominance testing. In our second solution, we extend the Lexicographic
Preference Tree (LP-tree) with a set of constraints. Then, we propose a recursive
backtrack search algorithm, that we call Search-LP, to find the most preferable
outcome. We prove that the first feasible outcome returned by Search-LP
(without dominance testing) is also preferable to any other feasible outcome.
Finally, in our third solution, we preserve the semantics of the CP-net and
propose a divide and conquer algorithm that compares outcomes according to
dominance testing.
| [
{
"version": "v1",
"created": "Fri, 24 Sep 2021 20:28:34 GMT"
}
] | 1,632,787,200,000 | [
[
"Ahmed",
"Sultan",
""
],
[
"Mouhoub",
"Malek",
""
]
] |
2109.12624 | Huaduo Wang | Huaduo Wang, Farhad Shakerin, Gopal Gupta | A Clustering and Demotion Based Algorithm for Inductive Learning of
Default Theories | arXiv admin note: text overlap with arXiv:1808.00629 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a clustering- and demotion-based algorithm called Kmeans-FOLD to
induce nonmonotonic logic programs from positive and negative examples. Our
algorithm is inspired by, and improves upon, the FOLD algorithm. The FOLD
algorithm itself is an improvement over the FOIL algorithm. Our algorithm
generates a more concise logic program compared to the FOLD algorithm. Our
algorithm uses the K-means based clustering method to cluster the input
positive samples before applying the FOLD algorithm. Positive examples that are
covered by the partially learned program in intermediate steps are not
discarded as in the FOLD algorithm, rather they are demoted, i.e., their
weights are reduced in subsequent iterations of the algorithm. Our experiments
on the UCI dataset show that a combination of K-Means clustering and our
demotion strategy produces significant improvement for datasets with more than
one cluster of positive examples. The resulting induced program is also more
concise and therefore easier to understand compared to the FOLD and ALEPH
systems, two state-of-the-art inductive logic programming (ILP) systems.
| [
{
"version": "v1",
"created": "Sun, 26 Sep 2021 14:50:18 GMT"
}
] | 1,632,787,200,000 | [
[
"Wang",
"Huaduo",
""
],
[
"Shakerin",
"Farhad",
""
],
[
"Gupta",
"Gopal",
""
]
] |
2109.12691 | Michał Opanowicz | Michał Opanowicz | Applying supervised and reinforcement learning methods to create
neural-network-based agents for playing StarCraft II | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, multiple approaches for creating agents for playing various complex
real-time computer games such as StarCraft II or Dota 2 were proposed; however,
they either embed a significant amount of expert knowledge into the agent or
use an amount of computational resources that is prohibitively large for most
researchers. We propose a neural network architecture for playing the full
two-player match of StarCraft II trained with general-purpose supervised and
reinforcement learning, that can be trained on a single consumer-grade PC with
a single GPU. We also show that our implementation achieves a non-trivial
performance when compared to the in-game scripted bots. We make no simplifying
assumptions about the game except for playing on a single chosen map, and we
use very little expert knowledge. In principle, our approach can be applied to
any RTS game with small modifications. While our results are far behind the
state-of-the-art large-scale approaches in terms of the final performance, we
believe our work can serve as a solid baseline for other small-scale
experiments.
| [
{
"version": "v1",
"created": "Sun, 26 Sep 2021 20:08:10 GMT"
}
] | 1,632,787,200,000 | [
[
"Opanowicz",
"Michał",
""
]
] |
2109.12755 | Wlodek Zadrozny | Wlodek W. Zadrozny | Abstraction, Reasoning and Deep Learning: A Study of the "Look and Say"
Sequence | 12 pages; 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The abilities to abstract, count, and use System 2 reasoning are well-known
manifestations of intelligence and understanding. In this paper, we argue,
using the example of the "Look and Say" puzzle, that although deep neural
networks can exhibit high "competence" (as measured by accuracy) when trained
on large data sets (2 million examples in our case), they do not show any sign
of a deeper understanding of the problem, or what D. Dennett calls
"comprehension". We report on two sets of experiments: first, computing the next
element of the sequence and then the previous element. We view both problems
as building a translator from one set of tokens to another. We apply both
standard LSTMs and Transformer/Attention-based neural networks, using publicly
available machine translation software. We observe that despite the amazing
accuracy, the performance of the trained programs on the actual L&S sequence
is bad, and shows no understanding of the principles behind the sequences. The
ramifications of this finding include: (1) from the cognitive science
perspective, we argue that we need better mathematical models of abstraction;
(2) the universality of neural networks should be re-examined for functions
acting on discrete data sets; (3) we hypothesize that topology can provide a
definition of abstraction without reference to the concept of distance.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2021 01:41:37 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2022 14:20:48 GMT"
}
] | 1,647,907,200,000 | [
[
"Zadrozny",
"Wlodek W.",
""
]
] |
2109.13178 | Marcin Pietrasik | Marcin Pietrasik, Marek Reformat | Path Based Hierarchical Clustering on Knowledge Graphs | 3 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs have emerged as a widely adopted medium for storing
relational data, making methods for automatically reasoning with them highly
desirable. In this paper, we present a novel approach for inducing a hierarchy
of subject clusters, building upon our earlier work done in taxonomy induction.
Our method first constructs a tag hierarchy before assigning subjects to
clusters on this hierarchy. We quantitatively demonstrate our method's ability
to induce a coherent cluster hierarchy on three real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2021 16:42:43 GMT"
}
] | 1,632,787,200,000 | [
[
"Pietrasik",
"Marcin",
""
],
[
"Reformat",
"Marek",
""
]
] |
2109.13392 | Volker Tresp | Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma | The Tensor Brain: A Unified Theory of Perception, Memory and Semantic
Decoding | Neural Computation, Volume 35, Issue 2, February 2023 | Neural Computation, Volume 35, Issue 2, February 2023 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a unified computational theory of an agent's perception and
memory. In our model, perception, episodic memory, and semantic memory are
realized by different operational modes of the oscillating interactions between
a symbolic index layer and a subsymbolic representation layer. The two layers
form a bilayer tensor network (BTN). Although memory appears to be about the
past, its main purpose is to support the agent in the present and the future.
Recent episodic memory provides the agent with a sense of the here and now.
Remote episodic memory retrieves relevant past experiences to provide
information about possible future scenarios. This aids the agent in
decision-making. "Future" episodic memory, based on expected future events,
guides planning and action. Semantic memory retrieves specific information,
which is not delivered by current perception, and defines priors for future
observations. We argue that it is important for the agent to encode individual
entities, not just classes and attributes. We demonstrate that a form of
self-supervised learning can acquire new concepts and refine existing ones. We
test our model on a standard benchmark data set, which we expanded to contain
richer representations for attributes, classes, and individuals. Our key
hypothesis is that obtaining a better understanding of perception and memory is
a crucial prerequisite to comprehending human-level intelligence.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2021 23:32:44 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Oct 2021 17:08:26 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2022 15:12:00 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Oct 2022 17:26:49 GMT"
},
{
"version": "v5",
"created": "Mon, 17 Oct 2022 20:42:08 GMT"
},
{
"version": "v6",
"created": "Sun, 22 Jan 2023 20:22:16 GMT"
}
] | 1,674,518,400,000 | [
[
"Tresp",
"Volker",
""
],
[
"Sharifzadeh",
"Sahand",
""
],
[
"Li",
"Hang",
""
],
[
"Konopatzki",
"Dario",
""
],
[
"Ma",
"Yunpu",
""
]
] |
2109.13893 | Brais Muñiz Castro | Pedro Cabalar, Brais Muñiz, Gilberto Pérez, Francisco Suárez | Explainable Machine Learning for liver transplantation | 5 pages, 7 listings, two tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we present a flexible method for explaining, in human-readable
terms, the predictions made by decision trees used as decision support in liver
transplantation. The decision trees have been obtained through machine learning
applied on a dataset collected at the liver transplantation unit at the
Coruña University Hospital Center and are used to predict long term (five
years) survival after transplantation. The method we propose is based on the
representation of the decision tree as a set of rules in a logic program (LP)
that is further annotated with text messages. This logic program is then
processed using the tool xclingo (based on Answer Set Programming) that allows
building compound explanations depending on the annotation text and the rules
effectively fired when a given input is provided. We explore two alternative LP
encodings: one in which rules respect the tree structure (more convenient to
reflect the learning process) and one where each rule corresponds to a
(previously simplified) tree path (more readable for decision making).
| [
{
"version": "v1",
"created": "Tue, 28 Sep 2021 17:45:07 GMT"
}
] | 1,632,873,600,000 | [
[
"Cabalar",
"Pedro",
""
],
[
"Muñiz",
"Brais",
""
],
[
"Pérez",
"Gilberto",
""
],
[
"Suárez",
"Francisco",
""
]
] |
2109.13978 | Kin-Ho Lam | Kin-Ho Lam, Zhengxian Lin, Jed Irvine, Jonathan Dodge, Zeyad T
Shureih, Roli Khanna, Minsuk Kahng, Alan Fern | Identifying Reasoning Flaws in Planning-Based RL Using Tree Explanations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Enabling humans to identify potential flaws in an agent's decision making is
an important Explainable AI application. We consider identifying such flaws in
a planning-based deep reinforcement learning (RL) agent for a complex real-time
strategy game. In particular, the agent makes decisions via tree search using a
learned model and evaluation function over interpretable states and actions.
This gives the potential for humans to identify flaws at the level of reasoning
steps in the tree, even if the entire reasoning process is too complex to
understand. However, it is unclear whether humans will be able to identify such
flaws due to the size and complexity of trees. We describe a user interface and
case study, where a small group of AI experts and developers attempt to
identify reasoning flaws due to inaccurate agent learning. Overall, the
interface allowed the group to identify a number of significant flaws of
varying types, demonstrating the promise of this approach.
| [
{
"version": "v1",
"created": "Tue, 28 Sep 2021 18:39:03 GMT"
}
] | 1,632,960,000,000 | [
[
"Lam",
"Kin-Ho",
""
],
[
"Lin",
"Zhengxian",
""
],
[
"Irvine",
"Jed",
""
],
[
"Dodge",
"Jonathan",
""
],
[
"Shureih",
"Zeyad T",
""
],
[
"Khanna",
"Roli",
""
],
[
"Kahng",
"Minsuk",
""
],
[
"Fern",
"Alan",
""
]
] |
2109.14381 | Catholijn Jonker | Catholijn M. Jonker and Jan Treur | From Organisational Structure to Organisational Behaviour Formalisation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand how an organisational structure relates to organisational
behaviour is an interesting fundamental challenge in the area of organisation
modelling. Specifications of organisational structure usually have a
diagrammatic form that abstracts from more detailed dynamics. Dynamic
properties of agent systems, on the other hand, are often specified in the form
of a set of logical formulae in some temporal language. This paper addresses
the question how these two perspectives can be combined in one framework. It is
shown how for different aggregation levels and other elements within an
organisation structure, sets of dynamic properties can be specified.
Organisational structure provides a structure of (interlevel) relationships
between these multiple sets of dynamic properties. Thus organisational
structure is reflected in the formalisation of the dynamics of organisational
behaviour. To illustrate the effectiveness of the approach a formal foundation
is presented for the integrated specification of both structure and behaviour
of an AGR organisation model.
| [
{
"version": "v1",
"created": "Wed, 29 Sep 2021 12:32:10 GMT"
}
] | 1,632,960,000,000 | [
[
"Jonker",
"Catholijn M.",
""
],
[
"Treur",
"Jan",
""
]
] |
2109.14732 | Maximilian Heinrich | Maximilian Heinrich | The MatrixX Solver For Argumentation Frameworks | Part of ICCMA 2021 proceedings | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | MatrixX is a solver for Abstract Argumentation Frameworks. Offensive and
defensive properties of an Argumentation Framework are notated in a matrix
style. Rows and columns of this matrix are systematically reduced by the
solver. This procedure is implemented through the use of hash maps in order to
accelerate calculation time. MatrixX works for stable and complete semantics
and was designed for the ICCMA 2021 competition.
| [
{
"version": "v1",
"created": "Wed, 29 Sep 2021 21:43:00 GMT"
}
] | 1,633,046,400,000 | [
[
"Heinrich",
"Maximilian",
""
]
] |
2109.15316 | Arnaud Fickinger | Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, Noam
Brown | Scalable Online Planning via Reinforcement Learning Fine-Tuning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Lookahead search has been a critical component of recent AI successes, such
as in the games of chess, go, and poker. However, the search methods used in
these games, and in many other settings, are tabular. Tabular search methods do
not scale well with the size of the search space, and this problem is
exacerbated by stochasticity and partial observability. In this work we replace
tabular search with online model-based fine-tuning of a policy neural network
via reinforcement learning, and show that this approach outperforms
state-of-the-art search algorithms in benchmark settings. In particular, we use
our search algorithm to achieve a new state-of-the-art result in self-play
Hanabi, and show the generality of our algorithm by also showing that it
outperforms tabular search in the Atari game Ms. Pacman.
| [
{
"version": "v1",
"created": "Thu, 30 Sep 2021 17:59:11 GMT"
}
] | 1,633,046,400,000 | [
[
"Fickinger",
"Arnaud",
""
],
[
"Hu",
"Hengyuan",
""
],
[
"Amos",
"Brandon",
""
],
[
"Russell",
"Stuart",
""
],
[
"Brown",
"Noam",
""
]
] |
2110.00828 | Mohammad Dehghani | Tahereh Saheb, Mohammad Dehghani | Artificial intelligence for Sustainable Energy: A Contextual Topic
Modeling and Content Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parallel to the rising debates over sustainable energy and artificial
intelligence solutions, the world is currently discussing the ethics of
artificial intelligence and its possible negative effects on society and the
environment. In these arguments, sustainable AI is proposed, which aims at
advancing the pathway toward sustainability, such as sustainable energy. In
this paper, we offered a novel contextual topic modeling combining LDA, BERT,
and Clustering. We then combined these computational analyses with content
analysis of related scientific publications to identify the main scholarly
topics, sub-themes, and cross-topic themes within scientific research on
sustainable AI in energy. Our research identified eight dominant topics
including sustainable buildings, AI-based DSSs for urban water management,
climate artificial intelligence, Agriculture 4, the convergence of AI with IoT,
AI-based evaluation of renewable technologies, smart campus and engineering
education, and AI-based optimization. We then recommended 14 potential future
research strands based on the observed theoretical gaps. Theoretically, this
analysis contributes to the existing literature on sustainable AI and
sustainable energy, and practically, it intends to act as a general guide for
energy engineers and scientists, AI scientists, and social scientists to widen
their knowledge of sustainability in AI and energy convergence research.
| [
{
"version": "v1",
"created": "Sat, 2 Oct 2021 15:51:51 GMT"
}
] | 1,633,392,000,000 | [
[
"Saheb",
"Tahereh",
""
],
[
"Dehghani",
"Mohammad",
""
]
] |
2110.00898 | Dieqiao Feng | Dieqiao Feng, Carla P. Gomes, Bart Selman | A Novel Automated Curriculum Strategy to Solve Hard Sokoban Planning
Instances | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, we have witnessed tremendous progress in deep reinforcement
learning (RL) for tasks such as Go, Chess, video games, and robot control.
Nevertheless, other combinatorial domains, such as AI planning, still pose
considerable challenges for RL approaches. The key difficulty in those domains
is that a positive reward signal becomes {\em exponentially rare} as the
minimal solution length increases. So, an RL approach loses its training
signal. There has been promising recent progress by using a curriculum-driven
learning approach that is designed to solve a single hard instance. We present
a novel {\em automated} curriculum approach that dynamically selects from a
pool of unlabeled training instances of varying task complexity guided by our
{\em difficulty quantum momentum} strategy. We show how the smoothness of the
task hardness impacts the final learning results. In particular, as the size of
the instance pool increases, the ``hardness gap'' decreases, which facilitates
a smoother automated curriculum based learning process. Our automated
curriculum approach dramatically improves upon the previous approaches. We show
our results on Sokoban, which is a traditional PSPACE-complete planning problem
and presents a great challenge even for specialized solvers. Our RL agent can
solve hard instances that are far out of reach for any previous
state-of-the-art Sokoban solver. In particular, our approach can uncover plans
that require hundreds of steps, while the best previous search methods would
take many years of computing time to solve such instances. In addition, we show
that we can further boost the RL performance with an intricate coupling of our
automated curriculum approach with a curiosity-driven search strategy and a
graph neural net representation.
| [
{
"version": "v1",
"created": "Sun, 3 Oct 2021 00:44:50 GMT"
}
] | 1,633,392,000,000 | [
[
"Feng",
"Dieqiao",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Selman",
"Bart",
""
]
] |
2110.01232 | Raul Sena Ferreira | Raul Sena Ferreira (LAAS), Jean Arlat (LAAS), Jeremie Guiochet (LAAS),
H\'el\`ene Waeselynck (LAAS) | Benchmarking Safety Monitors for Image Classifiers with Machine Learning | null | 26th IEEE Pacific Rim International Symposium on Dependable
Computing (PRDC 2021), IEEE, Dec 2021, Perth, Australia | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Highly accurate machine learning (ML) image classifiers cannot guarantee that
they will not fail in operation. Thus, their deployment in safety-critical
applications such as autonomous vehicles is still an open issue. The use of
fault tolerance mechanisms such as safety monitors is a promising direction to
keep the system in a safe state despite errors of the ML classifier. As the
prediction from the ML is the core information directly impacting safety, many
works are focusing on monitoring the ML model itself. Checking the efficiency
of such monitors in the context of safety-critical applications is thus a
significant challenge. Therefore, this paper aims at establishing a baseline
framework for benchmarking monitors for ML image classifiers. Furthermore, we
propose a framework covering the entire pipeline, from data generation to
evaluation. Our approach measures monitor performance with a broader set of
metrics than usually proposed in the literature. Moreover, we benchmark three
different monitor approaches on 79 benchmark datasets containing five
categories of out-of-distribution data for image classifiers: class novelty,
noise, anomalies, distributional shifts, and adversarial attacks. Our results
indicate that these monitors are no more accurate than a random monitor. We
also release the code of all experiments for reproducibility.
| [
{
"version": "v1",
"created": "Mon, 4 Oct 2021 07:52:23 GMT"
}
] | 1,633,392,000,000 | [
[
"Ferreira",
"Raul Sena",
"",
"LAAS"
],
[
"Arlat",
"Jean",
"",
"LAAS"
],
[
"Guiochet",
"Jeremie",
"",
"LAAS"
],
[
"Waeselynck",
"Hélène",
"",
"LAAS"
]
] |
2110.01322 | Raphaela Butz | Raphaela Butz, Ren\'ee Schulz, Arjen Hommersom, Marko van Eekelen | What is understandable in Bayesian network explanations? | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Explaining predictions from Bayesian networks, for example to physicians, is
non-trivial. Various explanation methods for Bayesian network inference have
appeared in the literature, focusing on different aspects of the underlying
reasoning. While there has been a lot of technical research, very little is
known about how well humans actually understand these explanations. In
this paper, we present ongoing research in which four different explanation
approaches were compared through a survey by asking a group of human
participants to interpret the explanations.
| [
{
"version": "v1",
"created": "Mon, 4 Oct 2021 11:05:36 GMT"
}
] | 1,633,392,000,000 | [
[
"Butz",
"Raphaela",
""
],
[
"Schulz",
"Renée",
""
],
[
"Hommersom",
"Arjen",
""
],
[
"van Eekelen",
"Marko",
""
]
] |
2110.01434 | Matthias Samwald | Kathrin Blagec, Adriano Barbosa-Silva, Simon Ott, Matthias Samwald | A curated, ontology-based, large-scale knowledge graph of artificial
intelligence tasks and benchmarks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Research in artificial intelligence (AI) is addressing a growing number of
tasks through a rapidly growing number of models and methodologies. This makes
it difficult to keep track of where novel AI methods are successfully -- or
still unsuccessfully -- applied, how progress is measured, how different
advances might synergize with each other, and how future research should be
prioritized.
To help address these issues, we created the Intelligence Task Ontology and
Knowledge Graph (ITO), a comprehensive, richly structured and manually curated
resource on artificial intelligence tasks, benchmark results and performance
metrics. The current version of ITO contains 685,560 edges, 1,100 classes
representing AI processes and 1,995 properties representing performance
metrics.
The goal of ITO is to enable precise and network-based analyses of the global
landscape of AI tasks and capabilities. ITO is based on technologies that allow
for easy integration and enrichment with external data, automated inference and
continuous, collaborative expert curation of underlying ontological models. We
make the ITO dataset and a collection of Jupyter notebooks utilising ITO openly
available.
| [
{
"version": "v1",
"created": "Mon, 4 Oct 2021 13:25:53 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Oct 2021 09:07:34 GMT"
}
] | 1,633,564,800,000 | [
[
"Blagec",
"Kathrin",
""
],
[
"Barbosa-Silva",
"Adriano",
""
],
[
"Ott",
"Simon",
""
],
[
"Samwald",
"Matthias",
""
]
] |
2110.01776 | Luciano da Fontoura Costa | Luciano da F. Costa | An Ample Approach to Data and Modeling | 33 pages, 24 figures. A working manuscript | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the present work, we describe a framework for modeling how models can be
built that integrates concepts and methods from a wide range of fields. The
information schism between the real-world and that which can be gathered and
considered by any individual information processing agent is characterized and
discussed, followed by the presentation of a series of the adopted requisites
while developing the modeling approach. The issue of mapping from datasets into
models is subsequently addressed, as well as some of the respectively implied
difficulties and limitations. Based on these considerations, an approach to
meta modeling how models are built is then progressively developed. First, the
reference M* meta model framework is presented, which relies critically on
associating whole datasets and respective models in terms of a strict
equivalence relation. Among the interesting features of this model are its
ability to bridge the gap between data and modeling, as well as paving the way
to an algebra of both data and models which can be employed to combine models
in a hierarchical manner. After illustrating the M* model in terms of patterns
derived from regular lattices, the reported modeling approach continues by
discussing how sampling issues, error and overlooked data can be addressed,
leading to the $M^{<\epsilon>}$ variant, illustrated with respect to number
theory. The situation in which the data needs to be represented in terms of
respective probability densities is treated next, yielding the $M^{<\sigma>}$
meta model, which is then illustrated with respect to a real-world dataset
(iris flowers data). Several considerations about how the developed framework
can provide insights about data clustering, complexity, collaborative research,
deep learning, and creativity are then presented, followed by overall
conclusions.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 01:26:09 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Oct 2021 23:07:37 GMT"
}
] | 1,634,169,600,000 | [
[
"Costa",
"Luciano da F.",
""
]
] |
2110.01831 | Michael Timothy Bennett | Michael Timothy Bennett, Yoshihiro Maruyama | The Artificial Scientist: Logicist, Emergentist, and Universalist
Approaches to Artificial General Intelligence | Accepted to the 14th Conference on Artificial General Intelligence | Proceedings of the 14th International Conference on Artificial
General Intelligence. 2021. Lecture Notes in Computer Science, vol 13154.
Springer. pp. 45-54 | 10.1007/978-3-030-93758-4_6 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We attempt to define what is necessary to construct an Artificial Scientist,
explore and evaluate several approaches to artificial general intelligence
(AGI) which may facilitate this, conclude that a unified or hybrid approach is
necessary and explore two theories that satisfy this requirement to some
degree.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 05:58:23 GMT"
}
] | 1,714,435,200,000 | [
[
"Bennett",
"Michael Timothy",
""
],
[
"Maruyama",
"Yoshihiro",
""
]
] |
2110.01834 | Andrea Loreggia | Marianna Bergamaschi Ganapini, Murray Campbell, Francesco Fabiano,
Lior Horesh, Jon Lenchner, Andrea Loreggia, Nicholas Mattei, Francesca Rossi,
Biplav Srivastava and Kristen Brent Venable | Thinking Fast and Slow in AI: the Role of Metacognition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI systems have seen dramatic advancement in recent years, bringing many
applications that pervade our everyday life. However, we are still mostly
seeing instances of narrow AI: many of these recent developments are typically
focused on a very limited set of competencies and goals, e.g., image
interpretation, natural language processing, classification, prediction, and
many others. Moreover, while these successes can be accredited to improved
algorithms and techniques, they are also tightly linked to the availability of
huge datasets and computational power. State-of-the-art AI still lacks many
capabilities that would naturally be included in a notion of (human)
intelligence.
We argue that a better study of the mechanisms that allow humans to have
these capabilities can help us understand how to imbue AI systems with these
competencies. We focus especially on D. Kahneman's theory of thinking fast and
slow, and we propose a multi-agent AI architecture where incoming problems are
solved by either system 1 (or "fast") agents, that react by exploiting only
past experience, or by system 2 (or "slow") agents, that are deliberately
activated when there is the need to reason and search for optimal solutions
beyond what is expected from the system 1 agent. Both kinds of agents are
supported by a model of the world, containing domain knowledge about the
environment, and a model of "self", containing information about past actions
of the system and solvers' skills.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 06:05:38 GMT"
}
] | 1,633,478,400,000 | [
[
"Ganapini",
"Marianna Bergamaschi",
""
],
[
"Campbell",
"Murray",
""
],
[
"Fabiano",
"Francesco",
""
],
[
"Horesh",
"Lior",
""
],
[
"Lenchner",
"Jon",
""
],
[
"Loreggia",
"Andrea",
""
],
[
"Mattei",
"Nicholas",
""
],
[
"Rossi",
"Francesca",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Venable",
"Kristen Brent",
""
]
] |
2110.01835 | Michael Timothy Bennett | Michael Timothy Bennett | Compression, The Fermi Paradox and Artificial Super-Intelligence | Short paper accepted to the 14th Conference on Artificial General
Intelligence | Proceedings of the 14th International Conference on Artificial
General Intelligence. 2021. Lecture Notes in Computer Science, vol 13154.
Springer. pp. 41-44 | 10.1007/978-3-030-93758-4_5 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The following briefly discusses possible difficulties in communication with
and control of an AGI (artificial general intelligence), building upon an
explanation of The Fermi Paradox and preceding work on symbol emergence and
artificial general intelligence. The latter suggests that to infer what someone
means, an agent constructs a rationale for the observed behaviour of others.
Communication then requires that two agents labour under similar compulsions and
have similar experiences (construct similar solutions to similar tasks). Any
non-human intelligence may construct solutions such that any rationale for
their behaviour (and thus the meaning of their signals) is outside the scope of
what a human is inclined to notice or comprehend. Further, the more compressed
a signal, the closer it will appear to random noise. Another intelligence may
possess the ability to compress information to the extent that, to us, their
signals would appear indistinguishable from noise (an explanation for The Fermi
Paradox). To facilitate predictive accuracy, an AGI would tend toward more
compressed representations of the world, making any rationale for their
behaviour more difficult to comprehend for the same reason. Communication with
and control of an AGI may subsequently necessitate not only human-like
compulsions and experiences, but imposed cognitive impairment.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 06:17:02 GMT"
}
] | 1,714,435,200,000 | [
[
"Bennett",
"Michael Timothy",
""
]
] |
2110.01909 | Simon Vandevelde | Simon Vandevelde, Victor Verreet, Luc De Raedt and Joost Vennekens | A Table-Based Representation for Probabilistic Logic: Preliminary
Results | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present Probabilistic Decision Model and Notation (pDMN), a probabilistic
extension of Decision Model and Notation (DMN). DMN is a modeling notation for
deterministic decision logic, which intends to be user-friendly and low in
complexity. pDMN extends DMN with probabilistic reasoning, predicates,
functions, quantification, and a new hit policy. At the same time, it aims to
retain DMN's user-friendliness to allow its usage by domain experts without the
help of IT staff. pDMN models can be unambiguously translated into ProbLog
programs to answer user queries. ProbLog is a probabilistic extension of Prolog
flexible enough to model and reason over any pDMN model.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 10:01:31 GMT"
}
] | 1,633,478,400,000 | [
[
"Vandevelde",
"Simon",
""
],
[
"Verreet",
"Victor",
""
],
[
"De Raedt",
"Luc",
""
],
[
"Vennekens",
"Joost",
""
]
] |
2110.01990 | Pietro Totis | Pietro Totis, Angelika Kimmig, Luc De Raedt | SMProbLog: Stable Model Semantics in ProbLog and its Applications in
Argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce SMProbLog, a generalization of the probabilistic logic
programming language ProbLog. A ProbLog program defines a distribution over
logic programs by specifying for each clause the probability that it belongs to
a randomly sampled program, and these probabilities are mutually independent.
The semantics of ProbLog is given by the success probability of a query, which
corresponds to the probability that the query succeeds in a randomly sampled
program. It is well-defined when each random sample uniquely determines the
truth values of all logical atoms. Argumentation problems, however, represent
an interesting practical application where this is not always the case.
SMProbLog generalizes the semantics of ProbLog to the setting where multiple
truth assignments are possible for a randomly sampled program, and implements
the corresponding algorithms for both inference and learning tasks. We then
show how this novel framework can be used to reason about probabilistic
argumentation problems. Therefore, the key contributions of this paper are: a
more general semantics for ProbLog programs, its implementation into a
probabilistic programming framework for both inference and parameter learning,
and a novel approach to probabilistic argumentation problems based on such
framework.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 12:29:22 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Oct 2021 07:32:20 GMT"
}
] | 1,633,651,200,000 | [
[
"Totis",
"Pietro",
""
],
[
"Kimmig",
"Angelika",
""
],
[
"De Raedt",
"Luc",
""
]
] |
2110.02027 | Jun Xia | Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z.Li | ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning | Accetpted at ICML 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive Learning (CL) has emerged as a dominant technique for
unsupervised representation learning which embeds augmented versions of the
anchor close to each other (positive samples) and pushes the embeddings of
other samples (negatives) apart. As revealed in recent studies, CL can benefit
from hard negatives (negatives that are most similar to the anchor). However,
we observe limited benefits when we adopt existing hard negative mining
techniques of other domains in Graph Contrastive Learning (GCL). We perform
both experimental and theoretical analysis on this phenomenon and find it can
be attributed to the message passing of Graph Neural Networks (GNNs). Unlike CL
in other domains, most hard negatives are potentially false negatives
(negatives that share the same class with the anchor) if they are selected
merely according to the similarities between anchor and themselves, which will
undesirably push away the samples of the same class. To remedy this deficiency,
we propose an effective method, dubbed \textbf{ProGCL}, to estimate the
probability of a negative being a true one, which constitutes a more suitable
measure for negatives' hardness together with similarity. Additionally, we
devise two schemes (i.e., \textbf{ProGCL-weight} and \textbf{ProGCL-mix}) to
boost the performance of GCL. Extensive experiments demonstrate that ProGCL
brings notable and consistent improvements over base GCL methods and yields
multiple state-of-the-art results on several unsupervised benchmarks or even
exceeds the performance of supervised ones. Also, ProGCL is readily pluggable
into various negatives-based GCL methods for performance improvement. We
release the code at
\textcolor{magenta}{\url{https://github.com/junxia97/ProGCL}}.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 13:15:59 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 03:36:30 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Jun 2022 02:24:02 GMT"
}
] | 1,655,251,200,000 | [
[
"Xia",
"Jun",
""
],
[
"Wu",
"Lirong",
""
],
[
"Wang",
"Ge",
""
],
[
"Chen",
"Jintao",
""
],
[
"Li",
"Stan Z.",
""
]
] |
2110.02325 | Avi Pfeffer | Avi Pfeffer, Michael Harradon, Joseph Campolongo, Sanja Cvijic | Unifying AI Algorithms with Probabilistic Programming using Implicitly
Defined Representations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce Scruff, a new framework for developing AI systems using
probabilistic programming. Scruff enables a variety of representations to be
included, such as code with stochastic choices, neural networks, differential
equations, and constraint systems. These representations are defined implicitly
using a set of standardized operations that can be performed on them.
General-purpose algorithms are then implemented using these operations,
enabling generalization across different representations. Zero, one, or more
operation implementations can be provided for any given representation, giving
algorithms the flexibility to use the most appropriate available
implementations for their purposes and enabling representations to be used in
ways that suit their capabilities. In this paper, we explain the general
approach of implicitly defined representations and provide a variety of
examples of representations at varying degrees of abstraction. We also show how
a relatively small set of operations can serve to unify a variety of AI
algorithms. Finally, we discuss how algorithms can use policies to choose which
operation implementations to use during execution.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 19:49:30 GMT"
}
] | 1,633,564,800,000 | [
[
"Pfeffer",
"Avi",
""
],
[
"Harradon",
"Michael",
""
],
[
"Campolongo",
"Joseph",
""
],
[
"Cvijic",
"Sanja",
""
]
] |
2110.02450 | Samuel Alexander | Samuel Allen Alexander, Marcus Hutter | Reward-Punishment Symmetric Universal Intelligence | 11 pages, accepted to AGI-21 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can an agent's intelligence level be negative? We extend the Legg-Hutter
agent-environment framework to include punishments and argue for an affirmative
answer to that question. We show that if the background encodings and Universal
Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the
resulting Legg-Hutter intelligence measure is symmetric about the origin. In
particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0
according to such UTMs.
| [
{
"version": "v1",
"created": "Wed, 6 Oct 2021 01:47:11 GMT"
}
] | 1,633,564,800,000 | [
[
"Alexander",
"Samuel Allen",
""
],
[
"Hutter",
"Marcus",
""
]
] |
2110.02480 | Christian Muise | Christian Muise, Vaishak Belle, Paolo Felli, Sheila McIlraith, Tim
Miller, Adrian R. Pearce, Liz Sonenberg | Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested
Belief | Published in Special Issue of the Artificial Intelligence Journal
(AIJ) on Epistemic Planning | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Many AI applications involve the interaction of multiple autonomous agents,
requiring those agents to reason about their own beliefs, as well as those of
other agents. However, planning involving nested beliefs is known to be
computationally challenging. In this work, we address the task of synthesizing
plans that necessitate reasoning about the beliefs of other agents. We plan
from the perspective of a single agent with the potential for goals and actions
that involve nested beliefs, non-homogeneous agents, co-present observations,
and the ability for one agent to reason as if it were another. We formally
characterize our notion of planning with nested belief, and subsequently
demonstrate how to automatically convert such problems into problems that
appeal to classical planning technology for solving efficiently. Our approach
represents an important step towards applying the well-established field of
automated planning to the challenging task of planning involving nested beliefs
of multiple agents.
| [
{
"version": "v1",
"created": "Wed, 6 Oct 2021 03:24:01 GMT"
}
] | 1,633,564,800,000 | [
[
"Muise",
"Christian",
""
],
[
"Belle",
"Vaishak",
""
],
[
"Felli",
"Paolo",
""
],
[
"McIlraith",
"Sheila",
""
],
[
"Miller",
"Tim",
""
],
[
"Pearce",
"Adrian R.",
""
],
[
"Sonenberg",
"Liz",
""
]
] |
2110.02610 | Simon Vandevelde | Simon Vandevelde, Bram Aerts and Joost Vennekens | Tackling the DM Challenges with cDMN: A Tight Integration of DMN and
Constraint Reasoning | Under consideration in Theory and Practice of Logic Programming
(TPLP). arXiv admin note: substantial text overlap with arXiv:2005.09998 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge-based AI typically depends on a knowledge engineer to construct a
formal model of domain knowledge -- but what if domain experts could do this
themselves? This paper describes an extension to the Decision Model and
Notation (DMN) standard, called Constraint Decision Model and Notation (cDMN).
DMN is a user-friendly, table-based notation for decision logic, which allows
domain experts to model simple decision procedures without the help of IT
staff. cDMN aims to enlarge the expressiveness of DMN in order to model more
complex domain knowledge, while retaining DMN's goal of being understandable by
domain experts. We test cDMN by solving the most complex challenges posted on
the DM Community website. We compare our own cDMN solutions to the solutions
that have been submitted to the website and find that our approach is
competitive. Moreover, cDMN is able to solve more challenges than any other
approach.
| [
{
"version": "v1",
"created": "Wed, 6 Oct 2021 09:29:52 GMT"
}
] | 1,633,564,800,000 | [
[
"Vandevelde",
"Simon",
""
],
[
"Aerts",
"Bram",
""
],
[
"Vennekens",
"Joost",
""
]
] |
2110.02640 | Lican Huang | Minghe Kong and Lican Huang | Bach Style Music Authoring System based on Deep Learning | 8 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the continuous improvement in various aspects in the field of artificial
intelligence, artificial intelligence with deep learning capabilities is
gaining momentum in the field of music. The purpose of this
paper is to design a Bach style music authoring system based on deep learning.
We use an LSTM neural network to train on serialized and standardized music
feature data. Through repeated experiments, we find the optimal LSTM model,
which can generate imitations of Bach's music. Finally, the generated music is
comprehensively evaluated in the form of online audition and Turing test. The
repertoires constructed by the music generation system in this article are
very close in style to Bach's original music, and it is relatively
difficult for ordinary people to distinguish the music Bach authored from the
music the AI created.
| [
{
"version": "v1",
"created": "Wed, 6 Oct 2021 10:30:09 GMT"
}
] | 1,633,564,800,000 | [
[
"Kong",
"Minghe",
""
],
[
"Huang",
"Lican",
""
]
] |
2110.03223 | Ayush Raina | Ayush Raina, Lucas Puentes, Jonathan Cagan, Christopher McComb | Goal-Directed Design Agents: Integrating Visual Imitation with One-Step
Lookahead Optimization for Generative Design | null | J. Mech. Des. Dec 2021, 143(12): 124501 (6 pages) | 10.1115/1.4051013 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Engineering design problems often involve large state and action spaces along
with highly sparse rewards. Since an exhaustive search of those spaces is not
feasible, humans utilize relevant domain knowledge to condense the search
space. Previously, deep learning agents (DLAgents) were introduced to use
visual imitation learning to model design domain knowledge. This note builds on
DLAgents and integrates them with one-step lookahead search to develop
goal-directed agents capable of enhancing learned strategies for sequentially
generating designs. Goal-directed DLAgents can employ human strategies learned
from data along with optimizing an objective function. The visual imitation
network from DLAgents is composed of a convolutional encoder-decoder network,
acting as a rough planning step that is agnostic to feedback. Meanwhile, the
lookahead search identifies the fine-tuned design action guided by an
objective. These design agents are trained on an unconstrained truss design
problem that is modeled as a sequential, action-based configuration design
problem. The agents are then evaluated on two versions of the problem: the
original version used for training and an unseen constrained version with an
obstructed construction space. The goal-directed agents outperform the human
designers used to train the network as well as the previous objective-agnostic
versions of the agent in both scenarios. This illustrates a design agent
framework that can efficiently use feedback to not only enhance learned design
strategies but also adapt to unseen design problems.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 07:13:20 GMT"
}
] | 1,633,996,800,000 | [
[
"Raina",
"Ayush",
""
],
[
"Puentes",
"Lucas",
""
],
[
"Cagan",
"Jonathan",
""
],
[
"McComb",
"Christopher",
""
]
] |
2110.03276 | Zijing Yang | Zijing Yang, Jiabo Ye, Linlin Wang, Xin Lin, Liang He | Inferring Substitutable and Complementary Products with Knowledge-Aware
Path Reasoning based on Dynamic Policy Network | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Inferring the substitutable and complementary products for a given product is
an essential and fundamental concern for the recommender system. To achieve
this, existing approaches take advantage of the knowledge graphs to learn more
evidences for inference, whereas they often suffer from invalid reasoning for
lack of elegant decision making strategies. Therefore, we propose a novel
Knowledge-Aware Path Reasoning (KAPR) model which leverages the dynamic policy
network to make explicit reasoning over knowledge graphs, for inferring the
substitutable and complementary relationships. Our contributions can be
highlighted as three aspects. Firstly, we model this inference scenario as a
Markov Decision Process in order to accomplish a knowledge-aware path reasoning
over knowledge graphs. Secondly, we integrate both structured and unstructured
knowledge to provide adequate evidence for accurate decision-making.
Thirdly, we evaluate our model on a series of real-world datasets, achieving
competitive performance compared with state-of-the-art approaches. Our code is
released on https://gitee.com/yangzijing flower/kapr/tree/master.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 09:00:36 GMT"
}
] | 1,633,651,200,000 | [
[
"Yang",
"Zijing",
""
],
[
"Ye",
"Jiabo",
""
],
[
"Wang",
"Linlin",
""
],
[
"Lin",
"Xin",
""
],
[
"He",
"Liang",
""
]
] |
2110.03320 | Swagatam Haldar | Swagatam Haldar, Deepak Vijaykeerthy, Diptikalyan Saha | Automated Testing of AI Models | 5 pages, 3 Figures, 4 Tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The last decade has seen tremendous progress in AI technology and
applications. With such widespread adoption, ensuring the reliability of the AI
models is crucial. In the past, we took the first step of creating a testing
framework called AITEST for metamorphic properties such as fairness and
robustness for tabular, time-series, and text classification models. In this
paper, we extend the capability of the AITEST tool to include the testing
techniques for Image and Speech-to-text models along with interpretability
testing for tabular models. These novel extensions make AITEST a comprehensive
framework for testing AI models.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 10:30:18 GMT"
}
] | 1,633,651,200,000 | [
[
"Haldar",
"Swagatam",
""
],
[
"Vijaykeerthy",
"Deepak",
""
],
[
"Saha",
"Diptikalyan",
""
]
] |
2110.03395 | Arseny Skryagin | Arseny Skryagin, Wolfgang Stammer, Daniel Ochs, Devendra Singh Dhami,
Kristian Kersting | SLASH: Embracing Probabilistic Circuits into Neural Answer Set
Programming | 18 pages, 7 figures and 6 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of combining the robustness of neural networks and the expressivity
of symbolic methods has rekindled the interest in neuro-symbolic AI. Recent
advancements in neuro-symbolic AI often consider specifically tailored
architectures consisting of disjoint neural and symbolic components, and thus
do not exhibit desired gains that can be achieved by integrating them into a
unifying framework. We introduce SLASH -- a novel deep probabilistic
programming language (DPPL). At its core, SLASH consists of
Neural-Probabilistic Predicates (NPPs) and logical programs which are united
via answer set programming. The probability estimates resulting from NPPs act
as the binding element between the logical program and raw input data, thereby
allowing SLASH to answer task-dependent logical queries. This allows SLASH to
elegantly integrate the symbolic and neural components in a unified framework.
We evaluate SLASH on the benchmark data of MNIST addition as well as novel
tasks for DPPLs such as missing data prediction and set prediction with
state-of-the-art performance, thereby showing the effectiveness and generality
of our method.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 12:35:55 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Oct 2021 17:25:00 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Nov 2021 09:08:36 GMT"
},
{
"version": "v4",
"created": "Tue, 23 Nov 2021 13:47:56 GMT"
}
] | 1,637,712,000,000 | [
[
"Skryagin",
"Arseny",
""
],
[
"Stammer",
"Wolfgang",
""
],
[
"Ochs",
"Daniel",
""
],
[
"Dhami",
"Devendra Singh",
""
],
[
"Kersting",
"Kristian",
""
]
] |
2110.03461 | Simyung Chang | Simyung Chang, KiYoon Yoo, Jiho Jang and Nojun Kwak | Self-Evolutionary Optimization for Pareto Front Learning | 16 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multi-task learning (MTL), which aims to improve performance by learning
multiple tasks simultaneously, inherently presents an optimization challenge
due to multiple objectives. Hence, multi-objective optimization (MOO)
approaches have been proposed for multitasking problems. Recent MOO methods
approximate multiple optimal solutions (Pareto front) with a single unified
model, which is collectively referred to as Pareto front learning (PFL). In
this paper, we show that PFL can be re-formulated into another MOO problem with
multiple objectives, each of which corresponds to different preference weights
for the tasks. We leverage an evolutionary algorithm (EA) to propose a method
for PFL called self-evolutionary optimization (SEO) by directly maximizing the
hypervolume. By using SEO, the neural network learns to approximate the Pareto
front conditioned on multiple hyper-parameters that drastically affect the
hypervolume. Then, by generating a population of approximations simply by
running inference with the network, the hyper-parameters of the network can be optimized
by EA. Utilizing SEO for PFL, we also introduce self-evolutionary Pareto
networks (SEPNet), enabling the unified model to approximate the entire Pareto
front set that maximizes the hypervolume. Extensive experimental results
confirm that SEPNet can find a better Pareto front than the current
state-of-the-art methods while minimizing the increase in model size and
training cost.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 13:38:57 GMT"
}
] | 1,633,651,200,000 | [
[
"Chang",
"Simyung",
""
],
[
"Yoo",
"KiYoon",
""
],
[
"Jang",
"Jiho",
""
],
[
"Kwak",
"Nojun",
""
]
] |
2110.03468 | Qianli Zhou | Qianli Zhou, Yusheng Huang, Yong Deng | Belief Evolution Network-based Probability Transformation and Fusion | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smets proposes the Pignistic Probability Transformation (PPT) as the decision
layer in the Transferable Belief Model (TBM), which argues that when there is
no more information, a decision must be made using a Probability Mass Function
(PMF). In this paper, the Belief Evolution Network (BEN) and the full causality
function are proposed by introducing causality in Hierarchical Hypothesis Space
(HHS). Based on BEN, we interpret the PPT from an information fusion view and
propose a new Probability Transformation (PT) method called Full Causality
Probability Transformation (FCPT), which has better performance under
Bi-Criteria evaluation. Besides, we heuristically propose a new probability
fusion method based on FCPT. Compared with Dempster's Rule of Combination
(DRC), the proposed method yields more reasonable results when fusing the same evidence.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 13:48:36 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 02:20:42 GMT"
}
] | 1,658,188,800,000 | [
[
"Zhou",
"Qianli",
""
],
[
"Huang",
"Yusheng",
""
],
[
"Deng",
"Yong",
""
]
] |
2110.03524 | Naveen Raman | Naveen Raman, Sanket Shah, John Dickerson | Data-Driven Methods for Balancing Fairness and Efficiency in
Ride-Pooling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Rideshare and ride-pooling platforms use artificial intelligence-based
matching algorithms to pair riders and drivers. However, these platforms can
induce inequality either through an unequal income distribution or disparate
treatment of riders. We investigate two methods to reduce forms of inequality
in ride-pooling platforms: (1) incorporating fairness constraints into the
objective function and (2) redistributing income to drivers to reduce income
fluctuation and inequality. To evaluate our solutions, we use the New York City
taxi data set. For the first method, we find that optimizing for driver-side
fairness outperforms state-of-the-art models on the number of riders serviced,
both in the worst-off neighborhood and overall, showing that optimizing for
fairness can assist profitability in certain circumstances. For the second
method, we explore income redistribution as a way to combat income inequality
by having drivers keep a fraction $r$ of their income and contribute the
rest to a redistribution pool. For certain values of $r$, most drivers earn
near their Shapley value, while still incentivizing drivers to maximize value,
thereby avoiding the free-rider problem and reducing income variability. The
first method can be extended to many definitions of fairness and the second
method provably improves fairness without affecting profitability.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 14:53:37 GMT"
}
] | 1,633,651,200,000 | [
[
"Raman",
"Naveen",
""
],
[
"Shah",
"Sanket",
""
],
[
"Dickerson",
"John",
""
]
] |
2110.03613 | Mohammad Motamedi | Mohammad Motamedi, Nikolay Sakharnykh, and Tim Kaldewey | A Data-Centric Approach for Training Deep Neural Networks with Less Data | 5 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the availability of large datasets is perceived to be a key requirement
for training deep neural networks, it is possible to train such models with
relatively little data. However, compensating for the absence of large datasets
demands a series of actions to enhance the quality of the existing samples and
to generate new ones. This paper summarizes our winning submission to the
"Data-Centric AI" competition. We discuss some of the challenges that arise
while training with a small dataset, offer a principled approach for systematic
data quality enhancement, and propose a GAN-based solution for synthesizing new
data points. Our evaluations indicate that the dataset generated by the
proposed pipeline offers 5% accuracy improvement while being significantly
smaller than the baseline.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 16:41:52 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Oct 2021 21:18:07 GMT"
}
] | 1,635,811,200,000 | [
[
"Motamedi",
"Mohammad",
""
],
[
"Sakharnykh",
"Nikolay",
""
],
[
"Kaldewey",
"Tim",
""
]
] |
2110.03643 | Laura Giordano | Laura Giordano | From Weighted Conditionals of Multilayer Perceptrons to Gradual
Argumentation and Back | 21 pages. arXiv admin note: text overlap with arXiv:2106.00390 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fuzzy multipreference semantics has been recently proposed for weighted
conditional knowledge bases, and used to develop a logical semantics for
Multilayer Perceptrons, by regarding a deep neural network (after training) as
a weighted conditional knowledge base. This semantics, in its different
variants, suggests some gradual argumentation semantics, which are related to
the family of the gradual semantics studied by Amgoud and Doder. The
relationships between weighted conditional knowledge bases and MLPs extend to
the proposed gradual semantics to capture the stationary states of MLPs, in
agreement with previous results on the relationship between argumentation
frameworks and neural networks. The paper also suggests a simple way to extend
the proposed semantics to deal with attacks/supports by a Boolean combination of
arguments, based on the fuzzy semantics of weighted conditionals, as well as an
approach for defeasible reasoning over a weighted argumentation graph, building
on the proposed gradual semantics.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 17:33:10 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Oct 2021 09:02:14 GMT"
}
] | 1,635,292,800,000 | [
[
"Giordano",
"Laura",
""
]
] |
2110.03754 | Patrizio Bellan | Patrizio Bellan, Mauro Dragoni, Chiara Ghidini, Han van der Aa, Simone
Paolo Ponzetto | Process Extraction from Text: Benchmarking the State of the Art and
Paving the Way for Future Challenges | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The extraction of process models from text refers to the problem of turning
the information contained in unstructured textual process descriptions into a
formal representation, i.e., a process model. Several automated approaches have
been proposed to tackle this problem, but they are highly heterogeneous in
scope and underlying assumptions, i.e., they differ in input, target output,
and the data used in their evaluation. As a result, it is currently unclear how
well existing solutions are able to solve the model-extraction problem and how
they compare to each other. We overcome this issue by comparing 10
state-of-the-art approaches for model extraction in a systematic manner,
covering both qualitative and quantitative aspects. The qualitative evaluation
compares the analysis of the primary studies on: (1) the main characteristics
of each solution; (2) the type of process model elements extracted from the
input data; (3) the experimental evaluation performed to evaluate the proposed
framework. The results show a heterogeneity of techniques, extracted elements,
and evaluations conducted, which are often impossible to compare. To overcome
this difficulty, we propose a quantitative comparison of the tools proposed by
the papers on the unifying task of process model entity and relation
extraction, so as to be able to compare them directly. The results show three
distinct groups of tools in terms of performance, with no tool obtaining very
good scores and all showing serious limitations. Moreover, the proposed
evaluation pipeline can be considered a reference task on a well-defined
dataset and metrics that can be used to compare new tools. The paper also
presents a reflection on the results of the qualitative and quantitative
evaluation, and on the limitations and challenges that the community needs to
address in the future to produce significant advances in this area.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 19:12:24 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Oct 2023 11:11:16 GMT"
}
] | 1,698,278,400,000 | [
[
"Bellan",
"Patrizio",
""
],
[
"Dragoni",
"Mauro",
""
],
[
"Ghidini",
"Chiara",
""
],
[
"van der Aa",
"Han",
""
],
[
"Ponzetto",
"Simone Paolo",
""
]
] |
2110.03760 | Ayush Raina | Ayush Raina, Jonathan Cagan, Christopher McComb | Design Strategy Network: A deep hierarchical framework to represent
generative design strategies in complex action spaces | Published in Journal of Mechanical Design | J. Mech. Des. Feb 2022, 144(2): 021404 (12 pages) | 10.1115/1.4052566 | Volume 144 Issue 2 | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generative design problems often encompass complex action spaces that may be
divergent over time, contain state-dependent constraints, or involve hybrid
(discrete and continuous) domains. To address those challenges, this work
introduces Design Strategy Network (DSN), a data-driven deep hierarchical
framework that can learn strategies over these arbitrary complex action spaces.
The hierarchical architecture decomposes every action decision into first
predicting a preferred spatial region in the design space and then outputting a
probability distribution over a set of possible actions from that region. This
framework comprises a convolutional encoder to work with image-based design
state representations, a multi-layer perceptron to predict a spatial region,
and a weight-sharing network to generate a probability distribution over
unordered set-based inputs of feasible actions. Applied to a truss design
study, the framework learns to predict the actions of human designers in the
study, capturing their truss generation strategies in the process. Results show
that DSNs significantly outperform non-hierarchical methods of policy
representation, demonstrating their superiority in complex action space
problems.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2021 19:29:40 GMT"
}
] | 1,634,169,600,000 | [
[
"Raina",
"Ayush",
""
],
[
"Cagan",
"Jonathan",
""
],
[
"McComb",
"Christopher",
""
]
] |
2110.03875 | Haiyang Xiong | Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang
and Yi Liu | Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction | 11 pages,6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Dynamic link prediction (DLP) makes graph prediction based on historical
information. Since most DLP methods are highly dependent on the training data
to achieve satisfying prediction performance, the quality of the training data
is crucial. Backdoor attacks induce DLP methods to make wrong predictions via
malicious training data, i.e., by generating a subgraph sequence as the
trigger and embedding it into the training data. However, the vulnerability of
DLP toward backdoor attacks has not been studied yet. To address the issue, we
propose a novel backdoor attack framework on DLP, denoted as Dyn-Backdoor.
Specifically, Dyn-Backdoor generates diverse initial-triggers by a generative
adversarial network (GAN). Then partial links of the initial-triggers are
selected to form a trigger set, according to the gradient information of the
attack discriminator in the GAN, so as to reduce the size of triggers and
improve the concealment of the attack. Experimental results show that
Dyn-Backdoor launches successful backdoor attacks on state-of-the-art DLP
models with a success rate of more than 90%. Additionally, we conduct a
possible defense against Dyn-Backdoor to test its resistance in defensive
settings, highlighting the need for defenses against backdoor attacks on DLP.
| [
{
"version": "v1",
"created": "Fri, 8 Oct 2021 03:08:35 GMT"
}
] | 1,633,910,400,000 | [
[
"Chen",
"Jinyin",
""
],
[
"Xiong",
"Haiyang",
""
],
[
"Zheng",
"Haibin",
""
],
[
"Zhang",
"Jian",
""
],
[
"Jiang",
"Guodong",
""
],
[
"Liu",
"Yi",
""
]
] |
2110.03939 | Shiyu Huang | Shiyu Huang, Bin Wang, Dong Li, Jianye Hao, Ting Chen, Jun Zhu | Ranking Cost: Building An Efficient and Scalable Circuit Routing Planner
with Evolution-Based Optimization | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Circuit routing has been a historically challenging problem in designing
electronic systems such as very large-scale integration (VLSI) and printed
circuit boards (PCBs). The main challenge is that connecting a large number of
electronic components under specific design rules involves a very large search
space. Early solutions are typically designed with hard-coded heuristics, which
suffer from problems of non-optimal solutions and lack of flexibility for new
design needs. Although a few learning-based methods have been proposed
recently, they are typically cumbersome and hard to extend to large-scale
applications. In this work, we propose a new algorithm for circuit routing,
named Ranking Cost, which innovatively combines search-based methods (i.e., A*
algorithm) and learning-based methods (i.e., Evolution Strategies) to form an
efficient and trainable router. In our method, we introduce a new set of
variables called cost maps, which can help the A* router to find out proper
paths to achieve the global objective. We also train a ranking parameter, which
can produce the ranking order and further improve the performance of our
method. Our algorithm is trained in an end-to-end manner and does not use any
artificial data or human demonstration. In the experiments, we compare with the
sequential A* algorithm and a canonical reinforcement learning approach, and
results show that our method outperforms these baselines with higher
connectivity rates and better scalability.
| [
{
"version": "v1",
"created": "Fri, 8 Oct 2021 07:22:45 GMT"
}
] | 1,633,910,400,000 | [
[
"Huang",
"Shiyu",
""
],
[
"Wang",
"Bin",
""
],
[
"Li",
"Dong",
""
],
[
"Hao",
"Jianye",
""
],
[
"Chen",
"Ting",
""
],
[
"Zhu",
"Jun",
""
]
] |
2110.04041 | Marta Garnelo | Marta Garnelo, Wojciech Marian Czarnecki, Siqi Liu, Dhruva Tirumala,
Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi | Pick Your Battles: Interaction Graphs as Population-Level Objectives for
Strategic Diversity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Strategic diversity is often essential in games: in multi-player games, for
example, evaluating a player against a diverse set of strategies will yield a
more accurate estimate of its performance. Furthermore, in games with
non-transitivities diversity allows a player to cover several winning
strategies. However, despite the significance of strategic diversity, training
agents that exhibit diverse behaviour remains a challenge. In this paper we
study how to construct diverse populations of agents by carefully structuring
how individuals within a population interact. Our approach is based on
interaction graphs, which control the flow of information between agents during
training and can encourage agents to specialise on different strategies,
leading to improved overall performance. We provide evidence for the importance
of diversity in multi-agent training and analyse the effect of applying
different interaction graphs on the training trajectories, diversity and
performance of populations in a range of games. This is an extended version of
the long abstract published at AAMAS.
| [
{
"version": "v1",
"created": "Fri, 8 Oct 2021 11:29:52 GMT"
}
] | 1,633,910,400,000 | [
[
"Garnelo",
"Marta",
""
],
[
"Czarnecki",
"Wojciech Marian",
""
],
[
"Liu",
"Siqi",
""
],
[
"Tirumala",
"Dhruva",
""
],
[
"Oh",
"Junhyuk",
""
],
[
"Gidel",
"Gauthier",
""
],
[
"van Hasselt",
"Hado",
""
],
[
"Balduzzi",
"David",
""
]
] |
2110.04439 | Xuejiao Tang | Xin Huang, Xuejiao Tang, Wenbin Zhang, Shichao Pei, Ji Zhang, Mingli
Zhang, Zhen Liu, Ruijun Chen and Yiyi Huang | A Generic Knowledge Based Medical Diagnosis Expert System | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In this paper, we design and implement a generic medical knowledge based
system (MKBS) for identifying diseases from several symptoms. In this system,
some important aspects like knowledge bases system, knowledge representation,
inference engine have been addressed. The system asks users different questions
and inference engines will use the certainty factor to prune out low possible
solutions. The proposed disease diagnosis system also uses a graphical user
interface (GUI) to facilitate users to interact with the expert system. Our
expert system is generic and flexible, which can be integrated with any rule
bases system in disease diagnosis.
| [
{
"version": "v1",
"created": "Sat, 9 Oct 2021 03:08:03 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Oct 2021 19:47:29 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Oct 2021 00:43:23 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Oct 2021 16:42:49 GMT"
},
{
"version": "v5",
"created": "Sun, 29 Jan 2023 01:42:52 GMT"
}
] | 1,675,123,200,000 | [
[
"Huang",
"Xin",
""
],
[
"Tang",
"Xuejiao",
""
],
[
"Zhang",
"Wenbin",
""
],
[
"Pei",
"Shichao",
""
],
[
"Zhang",
"Ji",
""
],
[
"Zhang",
"Mingli",
""
],
[
"Liu",
"Zhen",
""
],
[
"Chen",
"Ruijun",
""
],
[
"Huang",
"Yiyi",
""
]
] |
2110.04507 | Shiyu Huang | Shiyu Huang, Wenze Chen, Longfei Zhang, Shizhen Xu, Ziyang Li,
Fengming Zhu, Deheng Ye, Ting Chen, Jun Zhu | TiKick: Towards Playing Multi-agent Football Full Games from
Single-agent Demonstrations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep reinforcement learning (DRL) has achieved super-human performance on
complex video games (e.g., StarCraft II and Dota II). However, current DRL
systems still suffer from challenges of multi-agent coordination, sparse
rewards, stochastic environments, etc. In seeking to address these challenges,
we employ a football video game, e.g., Google Research Football (GRF), as our
testbed and develop an end-to-end learning-based AI system (denoted as TiKick)
to complete this challenging task. In this work, we first generated a large
replay dataset from the self-playing of single-agent experts, which are
obtained from league training. We then developed a distributed learning system
and new offline algorithms to learn a powerful multi-agent AI from the fixed
single-agent dataset. To the best of our knowledge, TiKick is the first
learning-based AI system that can take over the multi-agent Google Research
Football full game, while previous work could either control a single agent or
experiment on toy academic scenarios. Extensive experiments further show that
our pre-trained model can accelerate the training process of the modern
multi-agent algorithm and our method achieves state-of-the-art performances on
various academic scenarios.
| [
{
"version": "v1",
"created": "Sat, 9 Oct 2021 08:34:58 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Oct 2021 05:25:00 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Oct 2021 07:47:25 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Oct 2021 08:41:27 GMT"
},
{
"version": "v5",
"created": "Tue, 30 Nov 2021 10:06:39 GMT"
}
] | 1,638,316,800,000 | [
[
"Huang",
"Shiyu",
""
],
[
"Chen",
"Wenze",
""
],
[
"Zhang",
"Longfei",
""
],
[
"Xu",
"Shizhen",
""
],
[
"Li",
"Ziyang",
""
],
[
"Zhu",
"Fengming",
""
],
[
"Ye",
"Deheng",
""
],
[
"Chen",
"Ting",
""
],
[
"Zhu",
"Jun",
""
]
] |
2110.04649 | Bharat Prakash | Bharat Prakash, Nicholas Waytowich, Tim Oates, Tinoosh Mohsenin | Interactive Hierarchical Guidance using Language | Presented at AI-HRI symposium as part of AAAI-FSS 2021
(arXiv:2109.10836) | null | null | AIHRI/2021/45 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning has been successful in many tasks ranging from robotic
control and games to energy management. In complex real-world environments with
sparse rewards and long task horizons, sample efficiency is still a major
challenge. Most complex tasks can be easily decomposed into high-level planning
and low level control. Therefore, it is important to enable agents to leverage
the hierarchical structure and decompose bigger tasks into multiple smaller
sub-tasks. We introduce an approach where we use language to specify sub-tasks
and a high-level planner issues language commands to a low level controller.
The low-level controller executes the sub-tasks based on the language commands.
Our experiments show that this method is able to solve complex long horizon
planning tasks with limited human supervision. Using language has the added
benefit of interpretability and the ability for expert humans to take over the
high-level planning task and provide language commands if necessary.
| [
{
"version": "v1",
"created": "Sat, 9 Oct 2021 21:34:32 GMT"
}
] | 1,633,996,800,000 | [
[
"Prakash",
"Bharat",
""
],
[
"Waytowich",
"Nicholas",
""
],
[
"Oates",
"Tim",
""
],
[
"Mohsenin",
"Tinoosh",
""
]
] |
2110.05028 | Nicolas Heist | Nicolas Heist and Heiko Paulheim | The CaLiGraph Ontology as a Challenge for OWL Reasoners | Winner of the Dataset Track of the Semantic Reasoning Evaluation
Challenge at the International Semantic Web Conference (ISWC), 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | CaLiGraph is a large-scale cross-domain knowledge graph generated from
Wikipedia by exploiting the category system, list pages, and other list
structures in Wikipedia, containing more than 15 million typed entities and
around 10 million relation assertions. Other than knowledge graphs such as
DBpedia and YAGO, whose ontologies are comparably simplistic, CaLiGraph also
has a rich ontology, comprising more than 200,000 class restrictions. Those two
properties - a large A-box and a rich ontology - make it an interesting
challenge for benchmarking reasoners. In this paper, we show that a reasoning
task which is particularly relevant for CaLiGraph, i.e., the materialization of
owl:hasValue constraints into assertions between individuals and between
individuals and literals, is insufficiently supported by available reasoning
systems. We provide differently sized benchmark subsets of CaLiGraph, which can
be used for performance analysis of reasoning systems.
| [
{
"version": "v1",
"created": "Mon, 11 Oct 2021 06:47:07 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jan 2022 10:21:44 GMT"
}
] | 1,641,340,800,000 | [
[
"Heist",
"Nicolas",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
2110.05690 | Junzhe Zhang | Junzhe Zhang, Jin Tian, Elias Bareinboim | Partial Counterfactual Identification from Observational and
Experimental Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This paper investigates the problem of bounding counterfactual queries from
an arbitrary collection of observational and experimental distributions and
qualitative knowledge about the underlying data-generating model represented in
the form of a causal diagram. We show that all counterfactual distributions in
an arbitrary structural causal model (SCM) could be generated by a canonical
family of SCMs with the same causal diagram where unobserved (exogenous)
variables are discrete with a finite domain. Utilizing the canonical SCMs, we
translate the problem of bounding counterfactuals into that of polynomial
programming whose solution provides optimal bounds for the counterfactual
query. Solving such polynomial programs is in general computationally
expensive. We therefore develop effective Monte Carlo algorithms to approximate
the optimal bounds from an arbitrary combination of observational and
experimental data. Our algorithms are validated extensively on synthetic and
real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 12 Oct 2021 02:21:30 GMT"
}
] | 1,634,083,200,000 | [
[
"Zhang",
"Junzhe",
""
],
[
"Tian",
"Jin",
""
],
[
"Bareinboim",
"Elias",
""
]
] |
2110.05743 | Shulin Cao | Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi
Li, Zhiyuan Liu, Jinghui Xiao | Program Transfer for Answering Complex Questions over Knowledge Bases | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Program induction for answering complex questions over knowledge bases (KBs)
aims to decompose a question into a multi-step program, whose execution against
the KB produces the final answer. Learning to induce programs relies on a large
number of parallel question-program pairs for the given KB. However, for most
KBs, the gold program annotations are usually lacking, making learning
difficult. In this paper, we propose the approach of program transfer, which
aims to leverage the valuable program annotations on the rich-resourced KBs as
external supervision signals to aid program induction for the low-resourced KBs
that lack program annotations. For program transfer, we design a novel
two-stage parsing framework with an efficient ontology-guided pruning strategy.
First, a sketch parser translates the question into a high-level program
sketch, which is the composition of functions. Second, given the question and
sketch, an argument parser searches the detailed arguments from the KB for
functions. During the searching, we incorporate the KB ontology to prune the
search space. The experiments on ComplexWebQuestions and WebQuestionSP show
that our method outperforms SOTA methods significantly, demonstrating the
effectiveness of program transfer and our framework. Our codes and datasets can
be obtained from https://github.com/THU-KEG/ProgramTransfer.
| [
{
"version": "v1",
"created": "Tue, 12 Oct 2021 05:25:30 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Oct 2021 13:40:50 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Mar 2022 15:16:34 GMT"
}
] | 1,646,956,800,000 | [
[
"Cao",
"Shulin",
""
],
[
"Shi",
"Jiaxin",
""
],
[
"Yao",
"Zijun",
""
],
[
"Lv",
"Xin",
""
],
[
"Yu",
"Jifan",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Xiao",
"Jinghui",
""
]
] |
2110.06477 | Kai Wang | Kai Wang, Zhonghao Wang, Mo Yu, Humphrey Shi | Feudal Reinforcement Learning by Reading Manuals | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reading to act is a prevalent but challenging task which requires the ability
to reason from a concise instruction. However, previous works face the semantic
mismatch between the low-level actions and the high-level language descriptions
and require a human-designed curriculum to work properly. In this paper, we
present a Feudal Reinforcement Learning (FRL) model consisting of a manager
agent and a worker agent. The manager agent is a multi-hop plan generator
dealing with high-level abstract information and generating a series of
sub-goals in a backward manner. The worker agent deals with the low-level
perceptions and actions to achieve the sub-goals one by one. In comparison, our
FRL model effectively alleviates the mismatch between text-level inference and
low-level perceptions and actions; it generalizes to various forms of
environments, instructions, and manuals; and our multi-hop plan generator
significantly boosts performance on challenging tasks where multi-step
reasoning from the texts is critical to resolving the instructed goals. We show that our approach
achieves competitive performance on two challenging tasks, Read to Fight
Monsters (RTFM) and Messenger, without human-designed curriculum learning.
| [
{
"version": "v1",
"created": "Wed, 13 Oct 2021 03:50:15 GMT"
}
] | 1,634,169,600,000 | [
[
"Wang",
"Kai",
""
],
[
"Wang",
"Zhonghao",
""
],
[
"Yu",
"Mo",
""
],
[
"Shi",
"Humphrey",
""
]
] |
2110.06536 | Julia Kiseleva | Julia Kiseleva, Ziming Li, Mohammad Aliannejadi, Shrestha Mohanty,
Maartje ter Hoeve, Mikhail Burtsev, Alexey Skrynnik, Artem Zholus, Aleksandr
Panov, Kavya Srinet, Arthur Szlam, Yuxuan Sun, Katja Hofmann, Michel Galley,
Ahmed Awadallah | NeurIPS 2021 Competition IGLU: Interactive Grounded Language
Understanding in a Collaborative Environment | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human intelligence has the remarkable ability to adapt to new tasks and
environments quickly. Starting from a very young age, humans acquire new skills
and learn how to solve new tasks either by imitating the behavior of others or
by following provided natural language instructions. To facilitate research in
this direction, we propose IGLU: Interactive Grounded Language Understanding in
a Collaborative Environment. The primary goal of the competition is to approach
the problem of how to build interactive agents that learn to solve a task while
provided with grounded natural language instructions in a collaborative
environment. Understanding the complexity of the challenge, we split it into
sub-tasks to make it feasible for participants.
This research challenge is naturally related, but not limited, to two fields
of study that are highly relevant to the NeurIPS community: Natural Language
Understanding and Generation (NLU/G) and Reinforcement Learning (RL).
Therefore, the suggested challenge can bring two communities together to
approach one of the important challenges in AI. Another important aspect of the
challenge is the dedication to performing a human-in-the-loop evaluation as a
final evaluation for the agents developed by contestants.
| [
{
"version": "v1",
"created": "Wed, 13 Oct 2021 07:13:44 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Oct 2021 01:11:15 GMT"
}
] | 1,634,515,200,000 | [
[
"Kiseleva",
"Julia",
""
],
[
"Li",
"Ziming",
""
],
[
"Aliannejadi",
"Mohammad",
""
],
[
"Mohanty",
"Shrestha",
""
],
[
"ter Hoeve",
"Maartje",
""
],
[
"Burtsev",
"Mikhail",
""
],
[
"Skrynnik",
"Alexey",
""
],
[
"Zholus",
"Artem",
""
],
[
"Panov",
"Aleksandr",
""
],
[
"Srinet",
"Kavya",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Sun",
"Yuxuan",
""
],
[
"Hofmann",
"Katja",
""
],
[
"Galley",
"Michel",
""
],
[
"Awadallah",
"Ahmed",
""
]
] |
2110.07033 | Livio Robaldo | Livio Robaldo and Kolawole J. Adebayo | Compliance checking in reified IO logic via SHACL | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reified Input/Output (I/O) logic[21] has been recently proposed to model
real-world norms in terms of the logic in [11]. It is heavily grounded on
the notion of reification, and it has been specifically designed to model the
meaning of natural language sentences, such as the ones occurring in existing legislation.
This paper presents a methodology to carry out compliance checking on reified
I/O logic formulae. These are translated into SHACL (Shapes Constraint Language)
shapes, a recent W3C recommendation to validate and reason with RDF
triplestores. Compliance checking is then enforced by validating RDF graphs
describing states of affairs with respect to these SHACL shapes.
| [
{
"version": "v1",
"created": "Wed, 13 Oct 2021 21:09:47 GMT"
}
] | 1,634,256,000,000 | [
[
"Robaldo",
"Livio",
""
],
[
"Adebayo",
"Kolawole J.",
""
]
] |
2110.07710 | Livio Robaldo | Ilaria Angela Amantea, Livio Robaldo, Emilio Sulis, Guido Boella,
Guido Governatori | Semi-automated checking for regulatory compliance in e-Health | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | One of the main issues of every business process is to be compliant with
legal rules. This work presents a methodology to check in a semi-automated way
the regulatory compliance of a business process. We analyse an e-Health
hospital service in particular: the Hospital at Home (HaH) service. The paper
shows, at first, the analysis of the hospital business using the Business
Process Management and Notation (BPMN) standard language, then, the
formalization in Defeasible Deontic Logic (DDL) of some rules of the European
General Data Protection Regulation (GDPR). The aim is to show how to combine a
set of tasks of a business with a set of rules to be compliant with, using a
tool.
| [
{
"version": "v1",
"created": "Thu, 14 Oct 2021 20:58:02 GMT"
}
] | 1,634,515,200,000 | [
[
"Amantea",
"Ilaria Angela",
""
],
[
"Robaldo",
"Livio",
""
],
[
"Sulis",
"Emilio",
""
],
[
"Boella",
"Guido",
""
],
[
"Governatori",
"Guido",
""
]
] |
2110.08068 | Peter Nightingale | Miquel Bofill and Jordi Coll and Peter Nightingale and Josep Suy and
Felix Ulrich-Oltean and Mateu Villaret | SAT Encodings for Pseudo-Boolean Constraints Together With At-Most-One
Constraints | null | null | 10.1016/j.artint.2021.103604 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When solving a combinatorial problem using propositional satisfiability
(SAT), the encoding of the problem is of vital importance. We study encodings
of Pseudo-Boolean (PB) constraints, a common type of arithmetic constraint that
appears in a wide variety of combinatorial problems such as timetabling,
scheduling, and resource allocation. In some cases PB constraints occur
together with at-most-one (AMO) constraints over subsets of their variables
(forming PB(AMO) constraints). Recent work has shown that taking account of
AMOs when encoding PB constraints using decision diagrams can produce a
dramatic improvement in solver efficiency. In this paper we extend the approach
to other state-of-the-art encodings of PB constraints, developing several new
encodings for PB(AMO) constraints. Also, we present a more compact and
efficient version of the popular Generalized Totalizer encoding, named Reduced
Generalized Totalizer. This new encoding is also adapted for PB(AMO)
constraints for a further gain. Our experiments show that the encodings of
PB(AMO) constraints can be substantially smaller than those of PB constraints.
PB(AMO) encodings allow many more instances to be solved within a time limit,
and solving time is improved by more than one order of magnitude in some cases.
We also observed that there is no single overall winner among the considered
encodings, but efficiency of each encoding may depend on PB(AMO)
characteristics such as the magnitude of coefficient values.
| [
{
"version": "v1",
"created": "Fri, 15 Oct 2021 12:53:01 GMT"
}
] | 1,634,515,200,000 | [
[
"Bofill",
"Miquel",
""
],
[
"Coll",
"Jordi",
""
],
[
"Nightingale",
"Peter",
""
],
[
"Suy",
"Josep",
""
],
[
"Ulrich-Oltean",
"Felix",
""
],
[
"Villaret",
"Mateu",
""
]
] |
2110.08318 | Harsha Kokel | Harsha Kokel, Arjun Manoharan, Sriraam Natarajan, Balaraman Ravindran,
Prasad Tadepalli | Dynamic probabilistic logic models for effective abstractions in RL | Accepted at StarAI 2021 (held in conjunction with IJCLR 2021) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State abstraction enables sample-efficient learning and better task transfer
in complex reinforcement learning environments. Recently, we proposed RePReL
(Kokel et al. 2021), a hierarchical framework that leverages a relational
planner to provide useful state abstractions for learning. We present a brief
overview of this framework and the use of a dynamic probabilistic logic model
to design these state abstractions. Our experiments show that RePReL not only
achieves better performance and efficient learning on the task at hand but also
demonstrates better generalization to unseen tasks.
| [
{
"version": "v1",
"created": "Fri, 15 Oct 2021 18:53:04 GMT"
}
] | 1,634,601,600,000 | [
[
"Kokel",
"Harsha",
""
],
[
"Manoharan",
"Arjun",
""
],
[
"Natarajan",
"Sriraam",
""
],
[
"Ravindran",
"Balaraman",
""
],
[
"Tadepalli",
"Prasad",
""
]
] |
2110.08343 | Evgeny Osipov | Evgeny Osipov, Sachin Kahawala, Dilantha Haputhanthri, Thimal
Kempitiya, Daswin De Silva, Damminda Alahakoon, Denis Kleyko | Hyperseed: Unsupervised Learning with Vector Symbolic Architectures | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by recent innovations in biologically-inspired neuromorphic
hardware, this article presents a novel unsupervised machine learning algorithm
named Hyperseed that draws on the principles of Vector Symbolic Architectures
(VSA) for fast learning of a topology preserving feature map of unlabelled
data. It relies on two major operations of VSA, binding and bundling. The
algorithmic part of Hyperseed is expressed within the Fourier Holographic Reduced
Representations model, which is specifically suited for implementation on
spiking neuromorphic hardware. The two primary contributions of the Hyperseed
algorithm are few-shot learning and a learning rule based on a single vector
operation. These properties are empirically evaluated on synthetic datasets as
well as on illustrative benchmark use-cases, IRIS classification, and a
language identification task using n-gram statistics. The results of these
experiments confirm the capabilities of Hyperseed and its applications in
neuromorphic hardware.
| [
{
"version": "v1",
"created": "Fri, 15 Oct 2021 20:05:43 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 09:55:31 GMT"
}
] | 1,664,496,000,000 | [
[
"Osipov",
"Evgeny",
""
],
[
"Kahawala",
"Sachin",
""
],
[
"Haputhanthri",
"Dilantha",
""
],
[
"Kempitiya",
"Thimal",
""
],
[
"De Silva",
"Daswin",
""
],
[
"Alahakoon",
"Damminda",
""
],
[
"Kleyko",
"Denis",
""
]
] |
2110.08423 | Elias Khalil | Elias B. Khalil, Pashootan Vaezipoor, Bistra Dilkina | Finding Backdoors to Integer Programs: A Monte Carlo Tree Search
Framework | Published in the Proceedings of AAAI 2022 | Proceedings of the AAAI Conference on Artificial Intelligence.
Vol. 36. No. 4. 2022 | 10.1609/aaai.v36i4.20293 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In Mixed Integer Linear Programming (MIP), a (strong) backdoor is a "small"
subset of an instance's integer variables with the following property: in a
branch-and-bound procedure, the instance can be solved to global optimality by
branching only on the variables in the backdoor. Constructing datasets of
pre-computed backdoors for widely used MIP benchmark sets or particular problem
families can enable new questions around novel structural properties of a MIP,
or explain why a problem that is hard in theory can be solved efficiently in
practice. Existing algorithms for finding backdoors rely on sampling candidate
variable subsets in various ways, an approach which has demonstrated the
existence of backdoors for some instances from MIPLIB2003 and MIPLIB2010.
However, these algorithms fall short of consistently succeeding at the task due
to an imbalance between exploration and exploitation. We propose BaMCTS, a
Monte Carlo Tree Search framework for finding backdoors to MIPs. Extensive
algorithmic engineering, hybridization with traditional MIP concepts, and close
integration with the CPLEX solver have enabled our method to outperform
baselines on MIPLIB2017 instances, finding backdoors more frequently and more
efficiently.
| [
{
"version": "v1",
"created": "Sat, 16 Oct 2021 00:36:53 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 19:23:40 GMT"
}
] | 1,657,497,600,000 | [
[
"Khalil",
"Elias B.",
""
],
[
"Vaezipoor",
"Pashootan",
""
],
[
"Dilkina",
"Bistra",
""
]
] |
2110.08480 | Rafid Ameer Mahmud | Rafid Ameer Mahmud, Fahim Faisal, Saaduddin Mahmud, Md. Mosaddek Khan | Learning Cooperation and Online Planning Through Simulation and Graph
Convolutional Network | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Multi-agent Markov Decision Process (MMDP) has been an effective way of
modelling sequential decision making algorithms for multi-agent cooperative
environments. A number of algorithms based on centralized and decentralized
planning have been developed in this domain. However, a dynamically changing
environment, coupled with the exponential size of the state and joint action spaces,
makes it difficult for these algorithms to provide both efficiency and
scalability. Recently, the centralized planning algorithm FV-MCTS-MP and the
decentralized planning algorithm \textit{Alternate maximization with
Behavioural Cloning} (ABC) have achieved notable performance in solving MMDPs.
However, they are not capable of adapting to dynamically changing environments
and accounting for the lack of communication among agents, respectively.
Against this background, we introduce a simulation based online planning
algorithm, that we call SiCLOP, for multi-agent cooperative environments.
Specifically, SiCLOP tailors Monte Carlo Tree Search (MCTS) and uses a
Coordination Graph (CG) and a Graph Convolutional Network (GCN) to learn cooperation,
and provides a real-time solution to an MMDP problem. It also improves scalability
through an effective pruning of action space. Additionally, unlike FV-MCTS-MP
and ABC, SiCLOP supports transfer learning, which enables learned agents to
operate in different environments. We also provide theoretical discussion about
the convergence property of our algorithm within the context of multi-agent
settings. Finally, our extensive empirical results show that SiCLOP
significantly outperforms the state-of-the-art online planning algorithms.
| [
{
"version": "v1",
"created": "Sat, 16 Oct 2021 05:54:32 GMT"
}
] | 1,634,601,600,000 | [
[
"Mahmud",
"Rafid Ameer",
""
],
[
"Faisal",
"Fahim",
""
],
[
"Mahmud",
"Saaduddin",
""
],
[
"Khan",
"Md. Mosaddek",
""
]
] |
2110.08653 | Wei Li | Wei Li | Learning UI Navigation through Demonstrations composed of Macro Actions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We have developed a framework to reliably build agents capable of UI
navigation. The state space is simplified from raw-pixels to a set of UI
elements extracted from screen understanding, such as OCR and icon detection.
The action space is restricted to the UI elements plus a few global actions.
Actions can be customized for tasks and each action is a sequence of basic
operations conditioned on status checks. With such a design, we are able to
train DQfD and BC agents with a small number of demonstration episodes. We
propose demo augmentation that significantly reduces the required number of
human demonstrations. We customized DQfD to allow demos to be collected
on screenshots, facilitating demo coverage of rare cases. Demos are only
collected for the failed cases during the evaluation of the previous version of
the agent. With tens of iterations looping over evaluation, demo collection, and
training, the agent reaches a 98.7\% success rate on the search task in an
environment of 80+ apps and websites where initial states and viewing
parameters are randomized.
| [
{
"version": "v1",
"created": "Sat, 16 Oct 2021 20:29:41 GMT"
}
] | 1,634,601,600,000 | [
[
"Li",
"Wei",
""
]
] |
2110.08963 | Akshay Dharmavaram | Akshay Dharmavaram, Tejus Gupta, Jiachen Li, Katia P. Sycara | SS-MAIL: Self-Supervised Multi-Agent Imitation Learning | Pre-Print | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The current landscape of multi-agent expert imitation is broadly dominated by
two families of algorithms - Behavioral Cloning (BC) and Adversarial Imitation
Learning (AIL). BC approaches suffer from compounding errors, as they ignore
the sequential decision-making nature of the trajectory generation problem.
Furthermore, they cannot effectively model multi-modal behaviors. While AIL
methods solve the issue of compounding errors and multi-modal policy training,
they are plagued with instability in their training dynamics. In this work, we
address this issue by introducing a novel self-supervised loss that encourages
the discriminator to approximate a richer reward function. We employ our method
to train a graph-based multi-agent actor-critic architecture that learns a
centralized policy, conditioned on a learned latent interaction graph. We show
that our method (SS-MAIL) outperforms prior state-of-the-art methods on
real-world prediction tasks, as well as on custom-designed synthetic
experiments. We prove that SS-MAIL is part of the family of AIL methods by
providing a theoretical connection to cost-regularized apprenticeship learning.
Moreover, we leverage the self-supervised formulation to introduce a novel
teacher forcing-based curriculum (Trajectory Forcing) that improves sample
efficiency by progressively increasing the length of the generated trajectory.
The SS-MAIL framework improves multi-agent imitation capabilities by
stabilizing the policy training, improving the reward shaping capabilities, as
well as providing the ability for modeling multi-modal trajectories.
| [
{
"version": "v1",
"created": "Mon, 18 Oct 2021 01:17:50 GMT"
}
] | 1,634,601,600,000 | [
[
"Dharmavaram",
"Akshay",
""
],
[
"Gupta",
"Tejus",
""
],
[
"Li",
"Jiachen",
""
],
[
"Sycara",
"Katia P.",
""
]
] |
2110.09152 | Tanya Braun | Tanya Braun, Stefan Fischer, Florian Lau, Ralf M\"oller | Lifting DecPOMDPs for Nanoscale Systems -- A Work in Progress | Accepted at the Tenth International Workshop on Statistical
Relational AI (StarAI-2021) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | DNA-based nanonetworks have a wide range of promising use cases, especially
in the field of medicine. With a large set of agents, a partially observable
stochastic environment, and noisy observations, such nanoscale systems can be
modelled as a decentralised, partially observable, Markov decision process
(DecPOMDP). As the agent set is a dominating factor, this paper presents (i)
lifted DecPOMDPs, partitioning the agent set into sets of indistinguishable
agents, reducing the worst-case space required, and (ii) a nanoscale medical
system as an application. Future work turns to solving and implementing lifted
DecPOMDPs.
| [
{
"version": "v1",
"created": "Mon, 18 Oct 2021 10:14:00 GMT"
}
] | 1,634,601,600,000 | [
[
"Braun",
"Tanya",
""
],
[
"Fischer",
"Stefan",
""
],
[
"Lau",
"Florian",
""
],
[
"Möller",
"Ralf",
""
]
] |
2110.09197 | Marcel Gehrke | Marcel Gehrke | On the Completeness and Complexity of the Lifted Dynamic Junction Tree
Algorithm | StaRAI 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | For static lifted inference algorithms, completeness, i.e., domain
liftability, is extensively studied. However, so far no domain liftability
results for temporal lifted inference algorithms exist. In this paper, we close
this gap. More precisely, we contribute the first completeness and complexity
analysis for a temporal lifted algorithm, the so-called lifted dynamic junction
tree algorithm (LDJT), which is the only exact lifted temporal inference
algorithm out there. To handle temporal aspects efficiently, LDJT uses
conditional independences to proceed in time, leading to restrictions w.r.t.
elimination orders. We show that these restrictions influence the domain
liftability results and show that one particular case, while proceeding in time,
has to be excluded from FO12. Additionally, for the complexity of LDJT, we
prove that the lifted width is in even more cases smaller than the
corresponding treewidth in comparison to static inference.
| [
{
"version": "v1",
"created": "Mon, 18 Oct 2021 11:36:06 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Oct 2021 12:13:08 GMT"
},
{
"version": "v3",
"created": "Fri, 31 May 2024 14:15:38 GMT"
}
] | 1,717,372,800,000 | [
[
"Gehrke",
"Marcel",
""
]
] |
2110.09240 | Nardine Osman | Carles Sierra and Nardine Osman and Pablo Noriega and Jordi
Sabater-Mir and Antoni Perell\'o | Value alignment: a formal approach | accepted paper at the Responsible Artificial Intelligence Agents
Workshop, of the 18th International Conference on Autonomous Agents and
MultiAgent Systems (AAMAS 2019) | Responsible Artificial Intelligence Agents Workshop (RAIA) at
AAMAS 2019 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | principles that should govern autonomous AI systems. It essentially states
that a system's goals and behaviour should be aligned with human values. But
how to ensure value alignment? In this paper we first provide a formal model to
represent values through preferences and ways to compute value aggregations;
i.e. preferences with respect to a group of agents and/or preferences with
respect to sets of values. Value alignment is then defined, and computed, for a
given norm with respect to a given value through the increase/decrease that it
results in the preferences of future states of the world. We focus on norms as
it is norms that govern behaviour, and as such, the alignment of a given system
with a given value will be dictated by the norms the system follows.
| [
{
"version": "v1",
"created": "Mon, 18 Oct 2021 12:40:04 GMT"
}
] | 1,707,350,400,000 | [
[
"Sierra",
"Carles",
""
],
[
"Osman",
"Nardine",
""
],
[
"Noriega",
"Pablo",
""
],
[
"Sabater-Mir",
"Jordi",
""
],
[
"Perelló",
"Antoni",
""
]
] |
2110.09378 | Tan Viet Tuyen Nguyen | Nguyen Tan Viet Tuyen, Oya Celiktutan | Forecasting Nonverbal Social Signals during Dyadic Interactions with
Generative Adversarial Neural Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We are approaching a future where social robots will progressively become
widespread in many aspects of our daily lives, including education, healthcare,
work, and personal use. All of such practical applications require that humans
and robots collaborate in human environments, where social interaction is
unavoidable. Along with verbal communication, successful social interaction is
closely coupled with the interplay between nonverbal perception and action
mechanisms, such as observation of gaze behaviour and following their
attention, coordinating the form and function of hand gestures. Humans perform
nonverbal communication in an instinctive and adaptive manner, with no effort.
For robots to be successful in our social landscape, they should therefore
engage in social interactions in a humanlike way, with increasing levels of
autonomy. In particular, nonverbal gestures are expected to endow social robots
with the capability of emphasizing their speech, or showing their intentions.
Motivated by this, our research sheds light on modeling human behaviors in
social interactions, specifically, forecasting human nonverbal social signals
during dyadic interactions, with an overarching goal of developing robotic
interfaces that can learn to imitate human dyadic interactions. Such an
approach will ensure the messages encoded in the robot gestures could be
perceived by interacting partners in a facile and transparent manner, which
could help improve the interacting partners' perception and enhance the
outcomes of social interactions.
| [
{
"version": "v1",
"created": "Mon, 18 Oct 2021 15:01:32 GMT"
}
] | 1,634,601,600,000 | [
[
"Tuyen",
"Nguyen Tan Viet",
""
],
[
"Celiktutan",
"Oya",
""
]
] |
2110.09624 | Eric Horvitz | Eric Horvitz and John Breese | Ideal Partition of Resources for Metareasoning | 12 pages, 5 figures. January 1990 technical report on principles of
metareasoning and bounded optimality | null | null | Report-no: KSL-90-26, Computer Science Department, Stanford
University | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We can achieve significant gains in the value of computation by metareasoning
about the nature or extent of base-level problem solving before executing a
solution. However, resources that are irrevocably committed to metareasoning
are not available for executing a solution. Thus, it is important to determine
the portion of resources we wish to apply to metareasoning and control versus
to the execution of a solution plan. Recent research on rational agency has
highlighted the importance of limiting the consumption of resources by
metareasoning machinery. We shall introduce the metareasoning-partition
problem--the problem of ideally apportioning costly reasoning resources to
planning a solution versus applying resource to executing a solution to a
problem. We exercise prototypical metareasoning-partition models to probe the
relationships between time allocated to metareasoning and to execution for
different problem classes. Finally, we examine the value of metareasoning in
the context of our functional analyses.
| [
{
"version": "v1",
"created": "Mon, 18 Oct 2021 21:20:26 GMT"
}
] | 1,634,688,000,000 | [
[
"Horvitz",
"Eric",
""
],
[
"Breese",
"John",
""
]
] |
2110.09829 | Ilir Kola | Ilir Kola, Pradeep K. Murukannaiah, Catholijn M. Jonker, M. Birna van
Riemsdijk | Towards Social Situation Awareness in Support Agents | 8 pages, 1 figure | null | 10.1109/MIS.2022.3163625 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial agents that support people in their daily activities (e.g.,
virtual coaches and personal assistants) are increasingly prevalent. Since many
daily activities are social in nature, support agents should understand a
user's social situation to offer comprehensive support. However, there are no
systematic approaches for developing support agents that are social situation
aware. We identify key requirements for a support agent to be social situation
aware and propose steps to realize those requirements. These steps are
presented through a conceptual architecture centered on two key ideas: (1)
conceptualizing social situation awareness as an instantiation of `general'
situation awareness, and (2) using situation taxonomies for such instantiation.
This enables support agents to represent a user's social situation, comprehend
its meaning, and assess its impact on the user's behavior. We discuss empirical
results supporting the effectiveness of the proposed approach and illustrate
how the architecture can be used in support agents through two use cases.
| [
{
"version": "v1",
"created": "Tue, 19 Oct 2021 10:35:46 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Oct 2021 06:20:46 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2022 08:55:03 GMT"
}
] | 1,649,116,800,000 | [
[
"Kola",
"Ilir",
""
],
[
"Murukannaiah",
"Pradeep K.",
""
],
[
"Jonker",
"Catholijn M.",
""
],
[
"van Riemsdijk",
"M. Birna",
""
]
] |
2110.09978 | Michael R. Douglas | Michael R. Douglas, Michael Simkin, Omri Ben-Eliezer, Tianqi Wu, Peter
Chin, Trung V. Dang and Andrew Wood | What is Learned in Knowledge Graph Embeddings? | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A knowledge graph (KG) is a data structure which represents entities and
relations as the vertices and edges of a directed graph with edge types. KGs
are an important primitive in modern machine learning and artificial
intelligence. Embedding-based models, such as the seminal TransE [Bordes et
al., 2013] and the recent PairRE [Chao et al., 2020] are among the most popular
and successful approaches for representing KGs and inferring missing edges
(link completion). Their relative success is often credited in the literature
to their ability to learn logical rules between the relations.
In this work, we investigate whether learning rules between relations is
indeed what drives the performance of embedding-based methods. We define motif
learning and two alternative mechanisms, network learning (based only on the
connectivity of the KG, ignoring the relation types), and unstructured
statistical learning (ignoring the connectivity of the graph). Using
experiments on synthetic KGs, we show that KG models can learn motifs and how
this ability is degraded by non-motif (noise) edges. We propose tests to
distinguish the contributions of the three mechanisms to performance, and apply
them to popular KG benchmarks. We also discuss an issue with the standard
performance testing protocol and suggest an improvement.
To appear in the proceedings of Complex Networks 2021.
| [
{
"version": "v1",
"created": "Tue, 19 Oct 2021 13:52:11 GMT"
}
] | 1,634,688,000,000 | [
[
"Douglas",
"Michael R.",
""
],
[
"Simkin",
"Michael",
""
],
[
"Ben-Eliezer",
"Omri",
""
],
[
"Wu",
"Tianqi",
""
],
[
"Chin",
"Peter",
""
],
[
"Dang",
"Trung V.",
""
],
[
"Wood",
"Andrew",
""
]
] |
2110.10007 | Kebing Jin | Kebing Jin, Hankz Hankui Zhuo, Zhanhao Xiao, Hai Wan, Subbarao
Kambhampati | Gradient-Based Mixed Planning with Symbolic and Numeric Action
Parameters | 41 pages, 22 figures. Accepted by Artificial Intelligence | null | 10.1016/j.artint.2022.103789 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Dealing with planning problems with both logical relations and numeric
changes in real-world dynamic environments is challenging. Existing numeric
planning systems for the problem often discretize numeric variables or impose
convex constraints on numeric variables, which harms the performance when
solving problems. In this paper, we propose a novel algorithm framework to
solve numeric planning problems mixed with logical relations and numeric
changes based on gradient descent. We cast the numeric planning with logical
relations and numeric changes as an optimization problem. Specifically, we
extend syntax to allow parameters of action models to be either objects or
real-valued numbers, which enhances the ability to model real-world numeric
effects. Based on the extended modeling language, we propose a gradient-based
framework to simultaneously optimize numeric parameters and compute appropriate
actions to form candidate plans. The gradient-based framework is composed of an
algorithmic heuristic module based on propositional operations to select
actions and generate constraints for gradient descent, an algorithmic
transition module to update states to next ones, and a loss module to compute
loss. We repeatedly minimize loss by updating numeric parameters and compute
candidate plans until it converges into a valid plan for the planning problem.
In the empirical study, we exhibit that our algorithm framework is both
effective and efficient in solving planning problems mixed with logical
relations and numeric changes, especially when the problems contain obstacles
and non-linear numeric effects.
| [
{
"version": "v1",
"created": "Tue, 19 Oct 2021 14:21:19 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 08:12:33 GMT"
}
] | 1,665,446,400,000 | [
[
"Jin",
"Kebing",
""
],
[
"Zhuo",
"Hankz Hankui",
""
],
[
"Xiao",
"Zhanhao",
""
],
[
"Wan",
"Hai",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2110.10144 | Zijian Zhang | Zijian Zhang, Koustav Rudra, Avishek Anand | FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn
Correction in the Loop | 5 pages, 4 figures, accepted as a DEMO paper in CIKM 2021 | CIKM 2021 | 10.1145/3459637.3481985 | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Fact-checking on the Web has become the main mechanism through which we
detect the credibility of the news or information. Existing fact-checkers
verify the authenticity of the information (support or refute the claim) based
on secondary sources of information. However, existing approaches do not
consider the problem of model updates due to constantly increasing training
data due to user feedback. It is therefore important to conduct user studies to
correct models' inference biases and improve the model in a life-long learning
manner in the future according to the user feedback. In this paper, we present
FaxPlainAC, a tool that gathers user feedback on the output of explainable
fact-checking models. FaxPlainAC outputs both the model decision, i.e., whether
the input fact is true or not, along with the supporting/refuting evidence
considered by the model. Additionally, FaxPlainAC allows for accepting user
feedback both on the prediction and explanation. Developed in Python,
FaxPlainAC is designed as a modular and easily deployable tool. It can be
integrated with other downstream tasks, allowing for the gathering of human
fact-checking annotations and for life-long learning.
| [
{
"version": "v1",
"created": "Sun, 12 Sep 2021 13:38:24 GMT"
}
] | 1,634,688,000,000 | [
[
"Zhang",
"Zijian",
""
],
[
"Rudra",
"Koustav",
""
],
[
"Anand",
"Avishek",
""
]
] |
2110.10284 | Ellie Y. Cheng | Ellie Y. Cheng, Todd Millstein, Guy Van den Broeck, Steven Holtzen | flip-hoisting: Exploiting Repeated Parameters in Discrete Probabilistic
Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many of today's probabilistic programming languages (PPLs) have brittle
inference performance: the performance of the underlying inference algorithm is
very sensitive to the precise way in which the probabilistic program is
written. A standard way of addressing this challenge in traditional programming
languages is via program optimizations, which seek to unburden the programmer
from writing low-level performant code, freeing them to work at a higher-level
of abstraction. The arsenal of applicable program optimizations for PPLs to
choose from is scarce in comparison to traditional programs; few of today's
PPLs offer significant forms of automated program optimization. In this work we
develop a new family of program optimizations specific to discrete-valued
knowledge compilation based PPLs. We identify a particular form of program
structure unique to these PPLs that tangibly affects exact inference
performance in these programs: redundant random variables -- variables with
repeated parameters and inconsistent path conditions. We develop a new program
analysis and associated optimization called flip-hoisting that identifies these
redundancies and optimizes them into a single random variable. We show that
flip-hoisting yields inference speedups of up to 60% on applications of
probabilistic programs such as Bayesian networks and probabilistic
verification.
| [
{
"version": "v1",
"created": "Tue, 19 Oct 2021 22:04:26 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 02:44:48 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Mar 2022 02:13:25 GMT"
},
{
"version": "v4",
"created": "Sun, 19 Feb 2023 20:53:59 GMT"
},
{
"version": "v5",
"created": "Tue, 21 Feb 2023 01:52:23 GMT"
}
] | 1,677,024,000,000 | [
[
"Cheng",
"Ellie Y.",
""
],
[
"Millstein",
"Todd",
""
],
[
"Broeck",
"Guy Van den",
""
],
[
"Holtzen",
"Steven",
""
]
] |
2110.10374 | Shilun Li | Shilun Li, Veronica Peng | Playing 2048 With Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The game of 2048 is highly addictive. It is easy to learn,
but hard to master: the game's creator revealed that only about 1% of the
hundreds of millions of games ever played have been won. In this paper, we
explore reinforcement learning techniques to win 2048. The approaches we took
include deep Q-learning and beam search, with beam search reaching the 2048
tile 28.5% of the time.
| [
{
"version": "v1",
"created": "Wed, 20 Oct 2021 05:02:31 GMT"
}
] | 1,634,860,800,000 | [
[
"Li",
"Shilun",
""
],
[
"Peng",
"Veronica",
""
]
] |
2110.10474 | Shen Li | Ran Cheng, Chao Chen, Longfei Xu, Shen Li, Lei Wang, Hengbin Cui,
Kaikui Liu, Xiaolong Li | R4: A Framework for Route Representation and Route Recommendation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Route recommendation is significant in navigation services. Two major
challenges for route recommendation are route representation and user
representation. Different from items that can be identified by unique IDs in
traditional recommendation, routes are combinations of links (i.e., a road
segment and its following action like turning left) and the number of
combinations could be close to infinite. Besides, the representation of a route
changes under different scenarios. These facts result in severe sparsity of
routes, which increases the difficulty of route representation. Moreover, link
attribute deficiencies and errors affect the preciseness of route representation.
Because of the sparsity of routes, the interaction data between users and
routes are also sparse. This makes it difficult to acquire user representations
from historical user-item interactions as traditional recommender systems do. To
address these issues, we propose a novel learning framework R4. In R4, we
design a sparse & dense network to obtain representations of routes. The sparse
unit learns link ID embeddings and aggregates them to represent a route, which
captures implicit route characteristics and subsequently alleviates problems
caused by link attribute deficiencies and errors. The dense unit extracts
implicit local features of routes from link attributes. For user
representation, we utilize a series of historical navigation to extract user
preference. R4 achieves remarkable performance in both offline and online
experiments.
| [
{
"version": "v1",
"created": "Wed, 20 Oct 2021 10:21:08 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Oct 2021 03:06:33 GMT"
}
] | 1,635,206,400,000 | [
[
"Cheng",
"Ran",
""
],
[
"Chen",
"Chao",
""
],
[
"Xu",
"Longfei",
""
],
[
"Li",
"Shen",
""
],
[
"Wang",
"Lei",
""
],
[
"Cui",
"Hengbin",
""
],
[
"Liu",
"Kaikui",
""
],
[
"Li",
"Xiaolong",
""
]
] |
2110.10482 | Yun Luo | Zihan Liu, Yun Luo, Zelin Zang, Stan Z. Li | Surrogate Representation Learning with Isometric Mapping for Gray-box
Graph Adversarial Attacks | null | WSDM22: Proceedings of the Fifteenth ACM International Conference
on Web Search and Data Mining February 2022 | 10.1145/3488560.3498481 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gray-box graph attacks aim at disrupting the performance of the victim model
by using inconspicuous attacks with limited knowledge of the victim model. The
parameters of the victim model and the labels of the test nodes are invisible
to the attacker. To obtain the gradient on the node attributes or graph
structure, the attacker constructs an imaginary surrogate model trained under
supervision. However, there is a lack of discussion on the training of
surrogate models and the robustness of provided gradient information. The
general node classification model loses the topology of the nodes on the graph,
which is, in fact, an exploitable prior for the attacker. This paper
investigates the effect of representation learning of surrogate models on the
transferability of gray-box graph adversarial attacks. To reserve the topology
in the surrogate embedding, we propose Surrogate Representation Learning with
Isometric Mapping (SRLIM). By using the isometric mapping method, our proposed
SRLIM can constrain the topological structure of nodes from the input layer to
the embedding space, that is, to maintain the similarity of nodes in the
propagation process. Experiments prove the effectiveness of our approach
through the improvement in the performance of the adversarial attacks generated
by the gradient-based attacker in untargeted poisoning gray-box setups.
| [
{
"version": "v1",
"created": "Wed, 20 Oct 2021 10:47:34 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Oct 2021 12:39:18 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Feb 2022 09:56:01 GMT"
}
] | 1,645,574,400,000 | [
[
"Liu",
"Zihan",
""
],
[
"Luo",
"Yun",
""
],
[
"Zang",
"Zelin",
""
],
[
"Li",
"Stan Z.",
""
]
] |
2110.11482 | Mario Angelelli | Mario Angelelli, Massimiliano Gervasi | Representations of epistemic uncertainty and awareness in data-driven
strategies | 32 pages, 4 figures. Improved exposition and corrected misprints.
Comments are welcome! | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The diffusion of AI and big data is reshaping decision-making processes by
increasing the amount of information that supports decisions while reducing
direct interaction with data and empirical evidence. This paradigm shift
introduces new sources of uncertainty, as limited data observability results in
ambiguity and a lack of interpretability. The need for the proper analysis of
data-driven strategies motivates the search for new models that can describe
this type of bounded access to knowledge. This contribution presents a novel
theoretical model for uncertainty in knowledge representation and its transfer
mediated by agents. We provide a dynamical description of knowledge states by
endowing our model with a structure to compare and combine them. Specifically,
an update is represented through combinations, and its explainability is based
on its consistency in different dimensional representations. We look at
inequivalent knowledge representations in terms of multiplicity of inferences,
preference relations, and information measures. Furthermore, we define a formal
analogy with two scenarios that illustrate non-classical uncertainty in terms
of ambiguity (Ellsberg's model) and reasoning about knowledge mediated by other
agents observing data (Wigner's friend). Finally, we discuss some implications
of the proposed model for data-driven strategies, with special attention to
reasoning under uncertainty about business value dimensions and the design of
measurement tools for their assessment.
| [
{
"version": "v1",
"created": "Thu, 21 Oct 2021 21:18:21 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Nov 2021 12:30:37 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Nov 2021 15:57:14 GMT"
},
{
"version": "v4",
"created": "Sun, 13 Aug 2023 08:07:00 GMT"
},
{
"version": "v5",
"created": "Thu, 17 Aug 2023 09:34:43 GMT"
},
{
"version": "v6",
"created": "Thu, 16 Nov 2023 13:58:11 GMT"
},
{
"version": "v7",
"created": "Sun, 19 Nov 2023 15:00:11 GMT"
}
] | 1,700,524,800,000 | [
[
"Angelelli",
"Mario",
""
],
[
"Gervasi",
"Massimiliano",
""
]
] |
2110.11567 | Yongquan Yang | Yongquan Yang | Logical Assessment Formula and Its Principles for Evaluations with
Inaccurate Ground-Truth Labels | This is the final published version (25 pages). Knowl Inf Syst (2024) | null | 10.1007/s10115-023-02047-6 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluations with accurate ground-truth labels (AGTLs) have been widely
employed to assess predictive models for artificial intelligence applications.
However, in some specific fields, such as medical histopathology whole slide
image analysis, it is quite common that AGTLs are difficult to define
precisely or even do not exist. To alleviate this situation, we propose
logical assessment formula (LAF) and reveal its principles for evaluations with
inaccurate ground-truth labels (IAGTLs) via logical reasoning under
uncertainty. From the revealed principles of LAF, we summarize the
practicability of LAF: 1) LAF can be applied for evaluations with IAGTLs on a
more difficult task, able to act like usual strategies for evaluations with
AGTLs reasonably; 2) LAF can be applied for evaluations with IAGTLs from the
logical perspective on an easier task, unable to act like usual strategies for
evaluations with AGTLs confidently.
| [
{
"version": "v1",
"created": "Fri, 22 Oct 2021 03:18:01 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 08:19:42 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Dec 2022 03:23:09 GMT"
},
{
"version": "v4",
"created": "Sun, 7 Jan 2024 05:18:54 GMT"
}
] | 1,705,449,600,000 | [
[
"Yang",
"Yongquan",
""
]
] |
2110.12053 | Joaqu\'in Arias | Joaqu\'in Arias, Manuel Carro, Gopal Gupta | Towards Dynamic Consistency Checking in Goal-directed Predicate Answer
Set Programming | Submitted to PADL'22. arXiv admin note: text overlap with
arXiv:2106.14566 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Goal-directed evaluation of Answer Set Programs is gaining traction thanks to
its amenability to create AI systems that can, due to the evaluation mechanism
used, generate explanations and justifications. s(CASP) is one of these systems
and has already been used to write reasoning systems in several fields. It
provides enhanced expressiveness w.r.t. other ASP systems due to its ability to
use constraints, data structures, and unbound variables natively. However, the
performance of existing s(CASP) implementations is not on par with other ASP
systems: model consistency is checked once models have been generated, in
keeping with the generate-and-test paradigm. In this work, we present a
variation of the top-down evaluation strategy, termed Dynamic Consistency
Checking, which interleaves model generation and consistency checking. This
makes it possible to determine when a literal is not compatible with the
denials associated to the global constraints in the program, prune the current
execution branch, and choose a different alternative. This strategy is
especially (but not exclusively) relevant in problems with a high combinatorial
component. We have experimentally observed speedups of up to 90x w.r.t. the
standard versions of s(CASP).
| [
{
"version": "v1",
"created": "Fri, 22 Oct 2021 20:38:48 GMT"
}
] | 1,635,206,400,000 | [
[
"Arias",
"Joaquín",
""
],
[
"Carro",
"Manuel",
""
],
[
"Gupta",
"Gopal",
""
]
] |
2110.14378 | Nanyi Fei | Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen,
Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun and Ji-Rong Wen | Towards artificial general intelligence via a multimodal foundation
model | Published by Nature Communications, see
https://www.nature.com/articles/s41467-022-30761-2 | null | 10.1038/s41467-022-30761-2 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The fundamental goal of artificial intelligence (AI) is to mimic the core
cognitive activities of humans. Despite tremendous success in AI research,
most existing methods have only a single cognitive ability. To overcome this
limitation and take a solid step towards artificial general intelligence (AGI),
we develop a foundation model pre-trained with huge multimodal data, which can
be quickly adapted for various downstream cognitive tasks. To achieve this
goal, we propose to pre-train our foundation model by self-supervised learning
with weak semantic correlation data crawled from the Internet and show that
promising results can be obtained on a wide range of downstream tasks.
Particularly, with the developed model-interpretability tools, we demonstrate
that strong imagination ability is now possessed by our foundation model. We
believe that our work makes a transformative stride towards AGI, from our
common practice of "weak or narrow AI" to that of "strong or generalized AI".
| [
{
"version": "v1",
"created": "Wed, 27 Oct 2021 12:25:21 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 12:02:30 GMT"
}
] | 1,654,732,800,000 | [
[
"Fei",
"Nanyi",
""
],
[
"Lu",
"Zhiwu",
""
],
[
"Gao",
"Yizhao",
""
],
[
"Yang",
"Guoxing",
""
],
[
"Huo",
"Yuqi",
""
],
[
"Wen",
"Jingyuan",
""
],
[
"Lu",
"Haoyu",
""
],
[
"Song",
"Ruihua",
""
],
[
"Gao",
"Xin",
""
],
[
"Xiang",
"Tao",
""
],
[
"Sun",
"Hao",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
2110.14450 | Jie Luo | Tengwei Song, Jie Luo, Lei Huang | Rot-Pro: Modeling Transitivity by Projection in Knowledge Graph
Embedding | 10 pages, 6 figures, to be published in NeurIPS 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph embedding models learn the representations of entities and
relations in the knowledge graphs for predicting missing links (relations)
between entities. Their effectiveness is deeply affected by the ability to
model and infer different relation patterns such as symmetry, asymmetry,
inversion, composition and transitivity. Although existing models are already
able to model many of these relation patterns, transitivity, a very common
relation pattern, is still not fully supported. In this paper, we first
theoretically show that the transitive relations can be modeled with
projections. We then propose the Rot-Pro model which combines the projection
and relational rotation together. We prove that Rot-Pro can infer all the above
relation patterns. Experimental results show that the proposed Rot-Pro model
effectively learns the transitivity pattern and achieves the state-of-the-art
results on the link prediction task in the datasets containing transitive
relations.
| [
{
"version": "v1",
"created": "Wed, 27 Oct 2021 14:13:40 GMT"
}
] | 1,635,379,200,000 | [
[
"Song",
"Tengwei",
""
],
[
"Luo",
"Jie",
""
],
[
"Huang",
"Lei",
""
]
] |
2110.14535 | Stefan B\"ohm | Stefan B\"ohm, Martin Neumayer, Oliver Kramer, Alexander Schiendorfer,
Alois Knoll | Comparing Heuristics, Constraint Optimization, and Reinforcement
Learning for an Industrial 2D Packing Problem | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cutting and Packing problems occur in different industries with a
direct impact on the revenue of businesses. Generally, the goal in Cutting and
Packing is to assign a set of smaller objects to a set of larger objects. To
solve Cutting and Packing problems, practitioners can resort to heuristic and
exact methodologies. Lately, machine learning is increasingly used for solving
such problems. This paper considers a 2D packing problem from the furniture
industry, where a set of wooden workpieces must be assigned to different
modules of a trolley in the most space-saving way. We present an experimental
setup to compare heuristics, constraint optimization, and deep reinforcement
learning for the given problem. The used methodologies and their results get
collated in terms of their solution quality and runtime. In the given use case
a greedy heuristic produces optimal results and outperforms the other
approaches in terms of runtime. Constraint optimization also produces optimal
results but requires more time to perform. The deep reinforcement learning
approach did not always produce optimal or even feasible solutions. While we
assume this could be remedied with more training, considering the good results
with the heuristic, deep reinforcement learning seems to be a bad fit for the
given use case.
| [
{
"version": "v1",
"created": "Wed, 27 Oct 2021 15:47:47 GMT"
}
] | 1,635,379,200,000 | [
[
"Böhm",
"Stefan",
""
],
[
"Neumayer",
"Martin",
""
],
[
"Kramer",
"Oliver",
""
],
[
"Schiendorfer",
"Alexander",
""
],
[
"Knoll",
"Alois",
""
]
] |
2110.14870 | Francis Indaheng | Francis Indaheng, Edward Kim, Kesav Viswanadha, Jay Shenoy, Jinkyu
Kim, Daniel J. Fremont, Sanjit A. Seshia | A Scenario-Based Platform for Testing Autonomous Vehicle Behavior
Prediction Models in Simulation | Accepted to the NeurIPS 2021 Workshop on Machine Learning for
Autonomous Driving | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Behavior prediction remains one of the most challenging tasks in the
autonomous vehicle (AV) software stack. Forecasting the future trajectories of
nearby agents plays a critical role in ensuring road safety, as it equips AVs
with the necessary information to plan safe routes of travel. However, these
prediction models are data-driven and trained on data collected in real life
that may not represent the full range of scenarios an AV can encounter. Hence,
it is important that these prediction models are extensively tested in various
test scenarios involving interactive behaviors prior to deployment. To support
this need, we present a simulation-based testing platform which supports (1)
intuitive scenario modeling with a probabilistic programming language called
Scenic, (2) specifying a multi-objective evaluation metric with a partial
priority ordering, (3) falsification of the provided metric, and (4)
parallelization of simulations for scalable testing. As a part of the platform,
we provide a library of 25 Scenic programs that model challenging test
scenarios involving interactive traffic participant behaviors. We demonstrate
the effectiveness and the scalability of our platform by testing a trained
behavior prediction model and searching for failure scenarios.
| [
{
"version": "v1",
"created": "Thu, 28 Oct 2021 03:30:49 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Nov 2021 02:57:38 GMT"
}
] | 1,637,020,800,000 | [
[
"Indaheng",
"Francis",
""
],
[
"Kim",
"Edward",
""
],
[
"Viswanadha",
"Kesav",
""
],
[
"Shenoy",
"Jay",
""
],
[
"Kim",
"Jinkyu",
""
],
[
"Fremont",
"Daniel J.",
""
],
[
"Seshia",
"Sanjit A.",
""
]
] |
2110.15058 | Adam Faci | Adam Faci (LFI, TRT), Marie-Jeanne Lesot (LFI), Claire Laudy (TRT) | cgSpan: Pattern Mining in Conceptual Graphs | null | Proc. of the Int. Conf. on Artificial Intelligence and Soft
Computing (ICAISC2021), Jun 2021, Zakopane, Poland | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conceptual Graphs (CGs) are a graph-based knowledge representation formalism.
In this paper we propose cgSpan a CG frequent pattern mining algorithm. It
extends the DMGM-GSM algorithm that takes taxonomy-based labeled graphs as
input; it includes three more kinds of knowledge of the CG formalism: (a) the
fixed arity of relation nodes, handling graphs of neighborhoods centered on
relations rather than graphs of nodes, (b) the signatures, avoiding patterns
with concept types more general than the maximal types specified in signatures
and (c) the inference rules, applying them during the pattern mining process.
The experimental study highlights that cgSpan is a functional CG Frequent
Pattern Mining algorithm and that including CGs specificities results in a
faster algorithm with more expressive results and less redundancy with
vocabulary.
| [
{
"version": "v1",
"created": "Tue, 26 Oct 2021 14:28:06 GMT"
}
] | 1,635,465,600,000 | [
[
"Faci",
"Adam",
"",
"LFI, TRT"
],
[
"Lesot",
"Marie-Jeanne",
"",
"LFI"
],
[
"Laudy",
"Claire",
"",
"TRT"
]
] |
2110.15214 | Marco Wilhelm | Marco Wilhelm, Diana Howey, Gabriele Kern-Isberner, Kai Sauerwald,
Christoph Beierle | Conditional Inference and Activation of Knowledge Entities in ACT-R | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Activation-based conditional inference applies conditional reasoning to
ACT-R, a cognitive architecture developed to formalize human reasoning. The
idea of activation-based conditional inference is to determine a reasonable
subset of a conditional belief base in order to draw inductive inferences in
time. Central to activation-based conditional inference is the activation
function which assigns to the conditionals in the belief base a degree of
activation mainly based on the conditional's relevance for the current query
and its usage history. Therewith, our approach integrates several aspects of
human reasoning into expert systems such as focusing, forgetting, and
remembering.
| [
{
"version": "v1",
"created": "Thu, 28 Oct 2021 15:33:19 GMT"
}
] | 1,635,465,600,000 | [
[
"Wilhelm",
"Marco",
""
],
[
"Howey",
"Diana",
""
],
[
"Kern-Isberner",
"Gabriele",
""
],
[
"Sauerwald",
"Kai",
""
],
[
"Beierle",
"Christoph",
""
]
] |
2111.00004 | Jianqin Zhou | Jianqin Zhou, Sichun Yang, Xifeng Wang and Wanquan Liu | Granule Description based on Compound Concepts | 16 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Concise granule descriptions for definable granules and approaching
descriptions for indefinable granules are challenging and important issues in
granular computing. The concept with only common attributes has been
intensively studied. To investigate the granules with some special needs, we
propose a novel type of compound concepts in this paper, i.e.,
common-and-necessary concept. Based on the definitions of concept-forming
operations, the logical formulas are derived for each of the following types of
concepts: formal concept, object-induced three-way concept, object oriented
concept and common-and-necessary concept. Furthermore, by utilizing the logical
relationship among various concepts, we have derived concise and unified
equivalent conditions for definable granules and approaching descriptions for
indefinable granules for all four kinds of concepts.
| [
{
"version": "v1",
"created": "Fri, 29 Oct 2021 01:56:29 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jan 2022 04:55:10 GMT"
}
] | 1,641,772,800,000 | [
[
"Zhou",
"Jianqin",
""
],
[
"Yang",
"Sichun",
""
],
[
"Wang",
"Xifeng",
""
],
[
"Liu",
"Wanquan",
""
]
] |
2111.00375 | Sameer Khanna | Sameer Khanna | Conical Classification For Computationally Efficient One-Class Topic
Determination | Findings in Empirical Methods in Natural Language Processing 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As the Internet grows in size, so does the amount of text based information
that exists. For many application spaces it is paramount to isolate and
identify texts that relate to a particular topic. While one-class
classification would be ideal for such analysis, there is a relative lack of
research regarding efficient approaches with high predictive power. By noting
that the range of documents we wish to identify can be represented as positive
linear combinations of the Vector Space Model representing our text, we propose
Conical classification, an approach that allows us to identify if a document is
of a particular topic in a computationally efficient manner. We also propose
Normal Exclusion, a modified version of Bi-Normal Separation that makes it more
suitable within the one-class classification context. We show in our analysis
that our approach not only has higher predictive power on our datasets, but is
also faster to compute.
| [
{
"version": "v1",
"created": "Sun, 31 Oct 2021 01:27:12 GMT"
}
] | 1,635,811,200,000 | [
[
"Khanna",
"Sameer",
""
]
] |
2111.00419 | Deliang Wang | Deliang Wang, Yu Lu, Qinggang Meng, Penghe Chen | Interpreting Deep Knowledge Tracing Model on EdNet Dataset | This paper has been accepted and presented in AAAI 2021 Workshop on
AI Education | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With more deep learning techniques being introduced into the knowledge
tracing domain, the interpretability issue of the knowledge tracing models has
aroused researchers' attention. Our previous study(Lu et al. 2020) on building
and interpreting the KT model mainly adopts the ASSISTment dataset(Feng,
Heffernan, and Koedinger 2009),, whose size is relatively small. In this work,
we perform the similar tasks but on a large and newly available dataset, called
EdNet(Choi et al. 2020). The preliminary experiment results show the
effectiveness of the interpreting techniques, while more questions and tasks
remain to be further explored and accomplished.
| [
{
"version": "v1",
"created": "Sun, 31 Oct 2021 07:18:59 GMT"
}
] | 1,635,811,200,000 | [
[
"Wang",
"Deliang",
""
],
[
"Lu",
"Yu",
""
],
[
"Meng",
"Qinggang",
""
],
[
"Chen",
"Penghe",
""
]
] |
2111.00424 | Seokjun Kim | Seokjun Kim, Jaeeun Jang, Hyeoncheol Kim | All-In-One: Artificial Association Neural Networks | Model Agnostic, structurally free, graph neural networks, neural data
structure, recursive neural networks | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most deep learning models are limited to specific datasets or tasks because
of network structures using fixed layers. In this paper, we discuss the
differences between existing neural networks and real human neurons, propose
association networks to connect existing models, and describe multiple types of
deep learning exercises performed using a single structure. Further, we propose
a new neural data structure that can express all basic models of existing
neural networks in a tree structure. We also propose an approach in which
information propagates from leaf to a root node using the proposed recursive
convolution approach (i.e., depth-first convolution) and feed-forward
propagation is performed. Thus, we design a ``data-based,'' as opposed to a
``model-based,'' neural network. In experiments conducted, we compared the
learning performances of the models specializing in specific domains with those
of models simultaneously learning various domains using an association network.
The model learned well without significant performance degradation compared to
that for models performing individual learning. In addition, the performance
results were similar to those of the special case models; the output of the
tree contained all information from the tree. Finally, we developed a theory
for using arbitrary input data and learning all data simultaneously.
| [
{
"version": "v1",
"created": "Sun, 31 Oct 2021 07:58:00 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Nov 2021 13:34:25 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Nov 2021 14:56:40 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Dec 2021 13:17:40 GMT"
},
{
"version": "v5",
"created": "Mon, 13 Dec 2021 18:04:22 GMT"
},
{
"version": "v6",
"created": "Tue, 14 Dec 2021 17:27:18 GMT"
},
{
"version": "v7",
"created": "Mon, 27 Dec 2021 17:45:44 GMT"
},
{
"version": "v8",
"created": "Sun, 29 Jan 2023 10:36:09 GMT"
}
] | 1,675,123,200,000 | [
[
"Kim",
"Seokjun",
""
],
[
"Jang",
"Jaeeun",
""
],
[
"Kim",
"Hyeoncheol",
""
]
] |
2111.00506 | Mrinal Rawat | Mrinal Rawat, Ramya Hebbalaguppe, Lovekesh Vig | PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug
and Play Data Augmentation | null | null | null | Accepted in Uncertainty in Deep Learning, ICML'21 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Out-of-distribution (OOD) detection has been well explored in computer
vision, there have been relatively few prior attempts in OOD detection for NLP
classification. In this paper we argue that these prior attempts do not fully
address the OOD problem and may suffer from data leakage and poor calibration
of the resulting models. We present PnPOOD, a data augmentation technique to
perform OOD detection via out-of-domain sample generation using the recently
proposed Plug and Play Language Model (Dathathri et al., 2020). Our method
generates high quality discriminative samples close to the class boundaries,
resulting in accurate OOD detection at test time. We demonstrate that our model
outperforms prior models on OOD sample detection, and exhibits lower
calibration error on the 20 newsgroup text and Stanford Sentiment Treebank
datasets (Lang, 1995; Socher et al., 2013). We further highlight an important
data leakage issue with datasets used in prior attempts at OOD detection, and
share results on a new dataset for OOD detection that does not suffer from the
same problem.
| [
{
"version": "v1",
"created": "Sun, 31 Oct 2021 14:02:26 GMT"
}
] | 1,636,416,000,000 | [
[
"Rawat",
"Mrinal",
""
],
[
"Hebbalaguppe",
"Ramya",
""
],
[
"Vig",
"Lovekesh",
""
]
] |
2111.00783 | Ramya Bygari | Ramya Bygari, Aayush Gupta, Shashwat Raghuvanshi, Aakanksha Bapna,
Birendra Sahu | An AI-powered Smart Routing Solution for Payment Systems | 9 pages, 10 figures, Accepted at IEEE Big Data Conference -
https://bigdataieee.org/BigData2021/index.html | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In the current era of digitization, online payment systems are attracting
considerable interest. Improving the efficiency of a payment system is
important since it has a substantial impact on revenues for businesses. A
gateway is an integral component of a payment system through which every
transaction is routed. In an online payment system, payment processors
integrate with these gateways by means of various configurations such as
pricing, methods, risk checks, etc. These configurations are called terminals.
Each gateway can have multiple terminals associated with it. Routing a payment
transaction through the best terminal is crucial to increase the probability of
a payment transaction being successful. Machine learning (ML) and artificial
intelligence (AI) techniques can be used to accurately predict the best
terminals based on their previous performance and various payment-related
attributes. We have devised a pipeline consisting of static and dynamic
modules. The static module does the initial filtering of the terminals using
static rules and a logistic regression model that predicts gateway downtimes.
Subsequently, the dynamic module computes a lot of novel features based on
success rate, payment attributes, time lag, etc. to model the terminal
behaviour accurately. These features are updated using an adaptive time decay
rate algorithm in real-time using a feedback loop and passed to a random forest
classifier to predict the success probabilities for every terminal. This
pipeline is currently in production at Razorpay routing millions of
transactions through it in real-time and has given a 4-6\% improvement in
success rate across all payment methods (credit card, debit card, UPI, net
banking). This has made our payment system more resilient to performance drops,
which has improved the user experience, instilled more trust in the merchants,
and boosted the revenue of the business.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2021 09:33:02 GMT"
}
] | 1,635,811,200,000 | [
[
"Bygari",
"Ramya",
""
],
[
"Gupta",
"Aayush",
""
],
[
"Raghuvanshi",
"Shashwat",
""
],
[
"Bapna",
"Aakanksha",
""
],
[
"Sahu",
"Birendra",
""
]
] |
2111.00787 | Yu Liu | Yu Liu, Jingtao Ding, Yong Li | Knowledge-driven Site Selection via Urban Knowledge Graph | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Site selection determines optimal locations for new stores, which is of
crucial importance to business success. Especially, the wide application of
artificial intelligence with multi-source urban data makes intelligent site
selection promising. However, existing data-driven methods heavily rely on
feature engineering, facing the issues of business generalization and complex
relationship modeling. To address this dilemma, in this work, we borrow ideas
from knowledge graph (KG), and propose a knowledge-driven model for site
selection, short for KnowSite. Specifically, motivated by distilled knowledge
and rich semantics in KG, we first construct an urban KG (UrbanKG) with
cities' key elements and semantic relationships captured. Based on UrbanKG, we
employ pre-training techniques for semantic representations, which are fed into
an encoder-decoder structure for site decisions. With multi-relational message
passing and relation path-based attention mechanism developed, KnowSite
successfully reveals the relationship between various businesses and site
selection criteria. Extensive experiments on two datasets demonstrate that
KnowSite outperforms representative baselines with both effectiveness and
explainability achieved.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2021 09:36:38 GMT"
}
] | 1,635,811,200,000 | [
[
"Liu",
"Yu",
""
],
[
"Ding",
"Jingtao",
""
],
[
"Li",
"Yong",
""
]
] |
2111.00826 | Ana Lucic | Ana Lucic, Maurits Bleeker, Sami Jullien, Samarth Bhargav, Maarten de
Rijke | Reproducibility as a Mechanism for Teaching Fairness, Accountability,
Confidentiality, and Transparency in Artificial Intelligence | Accepted to the AAAI Symposium on Educational Advances in AI (EAAI
2022) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we explain the setup for a technical, graduate-level course on
Fairness, Accountability, Confidentiality, and Transparency in Artificial
Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI
concepts through the lens of reproducibility. The focal point of the course is
a group project based on reproducing existing FACT-AI algorithms from top AI
conferences and writing a corresponding report. In the first iteration of the
course, we created an open source repository with the code implementations from
the group projects. In the second iteration, we encouraged students to submit
their group projects to the Machine Learning Reproducibility Challenge,
resulting in 9 reports from our course being accepted for publication in the
ReScience journal. We reflect on our experience teaching the course over two
years, where one year coincided with a global pandemic, and propose guidelines
for teaching FACT-AI through reproducibility in graduate-level AI study
programs. We hope this can be a useful resource for instructors who want to set
up similar courses in the future.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2021 10:58:35 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Nov 2021 13:06:21 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Nov 2021 13:01:57 GMT"
},
{
"version": "v4",
"created": "Fri, 17 Dec 2021 13:42:51 GMT"
}
] | 1,639,958,400,000 | [
[
"Lucic",
"Ana",
""
],
[
"Bleeker",
"Maurits",
""
],
[
"Jullien",
"Sami",
""
],
[
"Bhargav",
"Samarth",
""
],
[
"de Rijke",
"Maarten",
""
]
] |
2111.01016 | Lorenzo Piazzo Dr. | Lorenzo Piazzo, Michele Scarpiniti and Enzo Baccarelli | Gomoku: analysis of the game and of the player Wine | 32 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gomoku, also known as five in a row, is a classical board game, ideally
suited for quickly testing novel Artificial Intelligence (AI) techniques. With
the aim of facilitating a developer willing to write a new Gomoku player, in
this report we present an analysis of the main game concepts and strategies,
which is wider and deeper than existing ones. Moreover, after discussing the
general structure of an artificial player, we present and analyse a strong
Gomoku player, named Wine, the code of which is freely available on the
Internet and which is an excellent example of how a modern player is organised.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2021 15:21:26 GMT"
}
] | 1,635,811,200,000 | [
[
"Piazzo",
"Lorenzo",
""
],
[
"Scarpiniti",
"Michele",
""
],
[
"Baccarelli",
"Enzo",
""
]
] |
2111.01042 | Manolis Pitsikalis | Manolis Pitsikalis, Thanh-Toan Do, Alexei Lisitsa and Shan Luo | Logic Rules Meet Deep Learning: A Novel Approach for Ship Type
Classification | Accepted and presented in RuleML+RR 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The shipping industry is an important component of the global trade and
economy, however in order to ensure law compliance and safety it needs to be
monitored. In this paper, we present a novel Ship Type classification model
that combines vessel transmitted data from the Automatic Identification System,
with vessel imagery. The main components of our approach are the Faster R-CNN
Deep Neural Network and a Neuro-Fuzzy system with IF-THEN rules. We evaluate
our model using real world data and showcase the advantages of this combination
while also comparing it with other methods. Results show that our model can
increase prediction scores by up to 15.4\% when compared with the next best
model we considered, while also maintaining a level of explainability as
opposed to common black box approaches.
| [
{
"version": "v1",
"created": "Mon, 1 Nov 2021 15:47:37 GMT"
}
] | 1,635,811,200,000 | [
[
"Pitsikalis",
"Manolis",
""
],
[
"Do",
"Thanh-Toan",
""
],
[
"Lisitsa",
"Alexei",
""
],
[
"Luo",
"Shan",
""
]
] |
2111.01364 | Juncheng Liu Dr | Liu Juncheng, McCane Brendan, Mills Steven | Learning to Explore by Reinforcement over High-Level Options | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autonomous 3D environment exploration is a fundamental task for various
applications such as navigation. The goal of exploration is to investigate a
new environment and build its occupancy map efficiently. In this paper, we
propose a new method which grants an agent two intertwined options of
behaviors: "look-around" and "frontier navigation". This is implemented by an
option-critic architecture and trained by reinforcement learning algorithms. In
each timestep, an agent produces an option and a corresponding action according
to the policy. We also take advantage of macro-actions by incorporating classic
path-planning techniques to increase training efficiency. We demonstrate the
effectiveness of the proposed method on two publicly available 3D environment
datasets and the results show our method achieves higher coverage than
competing techniques with better efficiency.
| [
{
"version": "v1",
"created": "Tue, 2 Nov 2021 04:21:34 GMT"
}
] | 1,635,897,600,000 | [
[
"Juncheng",
"Liu",
""
],
[
"Brendan",
"McCane",
""
],
[
"Steven",
"Mills",
""
]
] |