id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2106.02204 | Jonathan Balloch | Xiangyu Peng, Jonathan C. Balloch, Mark O. Riedl | Detecting and Adapting to Novelty in Games | 10 pages, 5 figures, Accepted to the AAAI21 Workshop on
Reinforcement Learning in Games | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Open-world novelty occurs when the rules of an environment can change
abruptly, such as when a game player encounters "house rules". To address
open-world novelty, game playing agents must be able to detect when novelty is
injected, and to quickly adapt to the new rules. We propose a model-based
reinforcement learning approach where game state and rules are represented as
knowledge graphs. The knowledge graph representation of the state and rules
allows novelty to be detected as changes in the knowledge graph, assists with
the training of deep reinforcement learners, and enables imagination-based
re-training where the agent uses the knowledge graph to perform look-ahead.
| [
{
"version": "v1",
"created": "Fri, 4 Jun 2021 01:41:02 GMT"
}
] | 1,623,024,000,000 | [
[
"Peng",
"Xiangyu",
""
],
[
"Balloch",
"Jonathan C.",
""
],
[
"Riedl",
"Mark O.",
""
]
] |
2106.02498 | Tatiana Tommasi | Tatiana Tommasi, Silvia Bucci, Barbara Caputo, Pietro Asinari | Towards Fairness Certification in Artificial Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thanks to the great progress of machine learning in the last years, several
Artificial Intelligence (AI) techniques have been increasingly moving from the
controlled research laboratory settings to our everyday life. AI is clearly
supportive in many decision-making scenarios, but when it comes to sensitive
areas such as health care, hiring policies, education, banking or justice, with
major impact on individuals and society, it becomes crucial to establish
guidelines on how to design, develop, deploy and monitor this technology.
Indeed the decision rules elaborated by machine learning models are data-driven
and there are multiple ways in which discriminatory biases can seep into data.
Algorithms trained on those data incur the risk of amplifying prejudices and
societal stereotypes by over-associating protected attributes such as gender,
ethnicity or disabilities with the prediction task. Starting from the extensive
experience of the National Metrology Institute on measurement standards and
certification roadmaps, and of Politecnico di Torino on machine learning as
well as methods for domain bias evaluation and mastering, we propose a first
joint effort to define the operational steps needed for AI fairness
certification. Specifically we will overview the criteria that should be met by
an AI system before coming into official service and the conformity assessment
procedures useful to monitor its functioning for fair decisions.
| [
{
"version": "v1",
"created": "Fri, 4 Jun 2021 14:12:12 GMT"
}
] | 1,623,024,000,000 | [
[
"Tommasi",
"Tatiana",
""
],
[
"Bucci",
"Silvia",
""
],
[
"Caputo",
"Barbara",
""
],
[
"Asinari",
"Pietro",
""
]
] |
2106.02578 | Gavin Abercrombie | Gavin Abercrombie, Amanda Cercas Curry, Mugdha Pandya, Verena Rieser | Alexa, Google, Siri: What are Your Pronouns? Gender and Anthropomorphism
in the Design and Perception of Conversational Assistants | To be presented at the 3rd Workshop on Gender Bias in Natural
Language Processing (GeBNLP 2021) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Technology companies have produced varied responses to concerns about the
effects of the design of their conversational AI systems. Some have claimed
that their voice assistants are in fact not gendered or human-like -- despite
design features suggesting the contrary. We compare these claims to user
perceptions by analysing the pronouns they use when referring to AI assistants.
We also examine systems' responses and the extent to which they generate output
which is gendered and anthropomorphic. We find that, while some companies
appear to be addressing the ethical concerns raised, in some cases, their
claims do not seem to hold true. In particular, our results show that system
outputs are ambiguous as to the humanness of the systems, and that users tend
to personify and gender them as a result.
| [
{
"version": "v1",
"created": "Fri, 4 Jun 2021 16:19:40 GMT"
}
] | 1,623,024,000,000 | [
[
"Abercrombie",
"Gavin",
""
],
[
"Curry",
"Amanda Cercas",
""
],
[
"Pandya",
"Mugdha",
""
],
[
"Rieser",
"Verena",
""
]
] |
2106.03324 | Izack Cohen | Izack Cohen and Avigdor Gal | Uncertain Process Data with Probabilistic Knowledge: Problem
Characterization and Challenges | null | Proceedings of the International Workshop Problems21, co-located
with the 19th International Conference on Business Process Management BPM
2021, Italy, published in CEUR Workshop Proceedings , 2938, 51-56, 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Motivated by the abundance of uncertain event data from multiple sources
including physical devices and sensors, this paper presents the task of
relating a stochastic process observation to a process model that can be
rendered from a dataset. In contrast to previous research that suggested
transforming a stochastically known event log into a less informative uncertain
log with upper and lower bounds on activity frequencies, we consider the
challenge of accommodating the probabilistic knowledge into conformance
checking techniques. Based on a taxonomy that captures the spectrum of
conformance checking cases under stochastic process observations, we present
three types of challenging cases. The first includes conformance checking of a
stochastically known log with respect to a given process model. The second case
extends the first to classify a stochastically known log into one of several
process models. The third case extends the two previous ones into settings in
which process models are only stochastically known. The suggested problem
captures the increasingly growing number of applications in which sensors
provide probabilistic process information.
| [
{
"version": "v1",
"created": "Mon, 7 Jun 2021 03:56:14 GMT"
}
] | 1,656,374,400,000 | [
[
"Cohen",
"Izack",
""
],
[
"Gal",
"Avigdor",
""
]
] |
2106.03400 | Xiaoteng Ma | Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao
Huang, Jun Yang, Qianchuan Zhao | Believe What You See: Implicit Constraint Approach for Offline
Multi-Agent Reinforcement Learning | Accepted by NeurIPS2021. The first two authors contributed equally to
the work | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Learning from datasets without interaction with environments (Offline
Learning) is an essential step to apply Reinforcement Learning (RL) algorithms
in real-world scenarios. However, compared with the single-agent counterpart,
offline multi-agent RL introduces more agents with the larger state and action
space, which is more challenging but attracts little attention. We demonstrate
current offline RL algorithms are ineffective in multi-agent systems due to the
accumulated extrapolation error. In this paper, we propose a novel offline RL
algorithm, named Implicit Constraint Q-learning (ICQ), which effectively
alleviates the extrapolation error by only trusting the state-action pairs
given in the dataset for value estimation. Moreover, we extend ICQ to
multi-agent tasks by decomposing the joint-policy under the implicit
constraint. Experimental results demonstrate that the extrapolation error is
successfully controlled within a reasonable range and insensitive to the number
of agents. We further show that ICQ achieves the state-of-the-art performance
in the challenging multi-agent offline tasks (StarCraft II). Our code is public
online at https://github.com/YiqinYang/ICQ.
| [
{
"version": "v1",
"created": "Mon, 7 Jun 2021 08:02:31 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Oct 2021 10:50:50 GMT"
}
] | 1,635,292,800,000 | [
[
"Yang",
"Yiqin",
""
],
[
"Ma",
"Xiaoteng",
""
],
[
"Li",
"Chenghao",
""
],
[
"Zheng",
"Zewu",
""
],
[
"Zhang",
"Qiyuan",
""
],
[
"Huang",
"Gao",
""
],
[
"Yang",
"Jun",
""
],
[
"Zhao",
"Qianchuan",
""
]
] |
2106.03567 | Biswanath Dutta Dr. | Biswanath Dutta and Jyotima Patel | AMV : Algorithm Metadata Vocabulary | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Metadata vocabularies are used in various domains of study. They provide an
in-depth description of the resources. In this work, we develop Algorithm
Metadata Vocabulary (AMV), a vocabulary for capturing and storing the metadata
about the algorithms (a procedure or a set of rules that is followed
step-by-step to solve a problem, especially by a computer). The snag faced by
the researchers in the current time is the failure of getting relevant results
when searching for algorithms in any search engine. AMV is represented as a
semantic model and produced OWL file, which can be directly used by anyone
interested to create and publish algorithm metadata as a knowledge graph, or to
provide metadata service through SPARQL endpoint. To design the vocabulary, we
propose a well-defined methodology, which considers real issues faced by the
algorithm users and the practitioners. The evaluation shows a promising result.
| [
{
"version": "v1",
"created": "Tue, 1 Jun 2021 20:09:42 GMT"
}
] | 1,623,110,400,000 | [
[
"Dutta",
"Biswanath",
""
],
[
"Patel",
"Jyotima",
""
]
] |
2106.03619 | Hao Guo | Hao Guo, Jiuyang Tang, Weixin Zeng, Xiang Zhao, Li Liu | Multi-modal Entity Alignment in Hyperbolic Space | 24 pages,5 figures; | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many AI-related tasks involve the interactions of data in multiple
modalities. It has been a new trend to merge multi-modal information into
knowledge graphs (KGs), resulting in multi-modal knowledge graphs (MMKGs). However,
MMKGs usually suffer from low coverage and incompleteness. To mitigate this
problem, a viable approach is to integrate complementary knowledge from other
MMKGs. To this end, although existing entity alignment approaches could be
adopted, they operate in the Euclidean space, and the resulting Euclidean
entity representations can lead to large distortion of KG's hierarchical
structure. Besides, the visual information has not yet been well exploited. In
response to these issues, in this work, we propose a novel multi-modal entity
alignment approach, Hyperbolic multi-modal entity alignment (HMEA), which
extends the Euclidean representation to hyperboloid manifold. We first adopt
the Hyperbolic Graph Convolutional Networks (HGCNs) to learn structural
representations of entities. Regarding the visual information, we generate
image embeddings using the densenet model, which are also projected into the
hyperbolic space using HGCNs. Finally, we combine the structure and visual
representations in the hyperbolic space and use the aggregated embeddings to
predict potential alignment results. Extensive experiments and ablation studies
demonstrate the effectiveness of our proposed model and its components.
| [
{
"version": "v1",
"created": "Mon, 7 Jun 2021 13:45:03 GMT"
}
] | 1,623,110,400,000 | [
[
"Guo",
"Hao",
""
],
[
"Tang",
"Jiuyang",
""
],
[
"Zeng",
"Weixin",
""
],
[
"Zhao",
"Xiang",
""
],
[
"Liu",
"Li",
""
]
] |
2106.03684 | Hal Ashton | Hal Ashton | Extending counterfactual accounts of intent to include oblique intent | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | One approach to defining Intention is to use the counterfactual tools
developed to define Causality. Direct Intention is considered the highest level
of intent in the common law, and is a sufficient component for the most serious
crimes to be committed. Loosely defined, it is the commission of actions to
bring about a desired or targeted outcome. Direct Intention is not always
necessary for the most serious category of crimes because society has also
found it necessary to develop a theory of intention around side-effects, known
as oblique intent or indirect intent. This is to prevent moral harms from going
unpunished which were not the aim of the actor, but were natural consequences
nevertheless. This paper uses a canonical example of a plane owner, planting a
bomb on their own plane in order to collect insurance, to illustrate how two
accounts of counterfactual intent do not conclude that murder of the plane's
passengers and crew was directly intended. We extend both frameworks to
include a definition of oblique intent developed in Ashton (2021).
| [
{
"version": "v1",
"created": "Mon, 7 Jun 2021 15:00:20 GMT"
}
] | 1,623,110,400,000 | [
[
"Ashton",
"Hal",
""
]
] |
2106.03894 | Matthew Fontaine | Matthew C. Fontaine, Stefanos Nikolaidis | Differentiable Quality Diversity | Accepted to NeurIPS 2021 (oral presentation) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quality diversity (QD) is a growing branch of stochastic optimization
research that studies the problem of generating an archive of solutions that
maximize a given objective function but are also diverse with respect to a set
of specified measure functions. However, even when these functions are
differentiable, QD algorithms treat them as "black boxes", ignoring gradient
information. We present the differentiable quality diversity (DQD) problem, a
special case of QD, where both the objective and measure functions are first
order differentiable. We then present MAP-Elites via a Gradient Arborescence
(MEGA), a DQD algorithm that leverages gradient information to efficiently
explore the joint range of the objective and measure functions. Results in two
QD benchmark domains and in searching the latent space of a StyleGAN show that
MEGA significantly outperforms state-of-the-art QD algorithms, highlighting
DQD's promise for efficient quality diversity optimization when gradient
information is available. Source code is available at
https://github.com/icaros-usc/dqd.
| [
{
"version": "v1",
"created": "Mon, 7 Jun 2021 18:11:53 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Oct 2021 05:38:14 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Oct 2021 01:53:55 GMT"
}
] | 1,635,379,200,000 | [
[
"Fontaine",
"Matthew C.",
""
],
[
"Nikolaidis",
"Stefanos",
""
]
] |
2106.04233 | Gra\c{c}aliz Dimuro Prof. Dr. | Tiago da Cruz Asmus, Gra\c{c}aliz Pereira Dimuro, Benjam\'in Bedregal,
Jos\'e Antonio Sanz, Radko Mesiar and Humberto Bustince | Towards interval uncertainty propagation control in bivariate
aggregation processes and the introduction of width-limited interval-valued
overlap functions | submitted | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Overlap functions are a class of aggregation functions that measure the
overlapping degree between two values. Interval-valued overlap functions were
defined as an extension to express the overlapping of interval-valued data, and
they have been usually applied when there is uncertainty regarding the
assignment of membership degrees. The choice of a total order for intervals can
be significant, which motivated the recent developments on interval-valued
aggregation functions and interval-valued overlap functions that are increasing
to a given admissible order, that is, a total order that refines the usual
partial order for intervals. Also, width preservation has been considered on
these recent works, in an attempt to avoid the uncertainty increase and
guarantee the information quality, but no deeper study was made regarding the
relation between the widths of the input intervals and the output interval,
when applying interval-valued functions, or how one can control such
uncertainty propagation based on this relation. Thus, in this paper we: (i)
introduce and develop the concepts of width-limited interval-valued functions
and width limiting functions, presenting a theoretical approach to analyze the
relation between the widths of the input and output intervals of bivariate
interval-valued functions, with special attention to interval-valued
aggregation functions; (ii) introduce the concept of $(a,b)$-ultramodular
aggregation functions, a less restrictive extension of one-dimension convexity
for bivariate aggregation functions, which have an important predictable
behaviour with respect to the width when extended to the interval-valued
context; (iii) define width-limited interval-valued overlap functions, taking
into account a function that controls the width of the output interval; (iv)
present and compare three construction methods for these width-limited
interval-valued overlap functions.
| [
{
"version": "v1",
"created": "Tue, 8 Jun 2021 10:22:31 GMT"
}
] | 1,623,196,800,000 | [
[
"Asmus",
"Tiago da Cruz",
""
],
[
"Dimuro",
"Graçaliz Pereira",
""
],
[
"Bedregal",
"Benjamín",
""
],
[
"Sanz",
"José Antonio",
""
],
[
"Mesiar",
"Radko",
""
],
[
"Bustince",
"Humberto",
""
]
] |
2106.04235 | Hal Ashton | Hal Ashton | Definitions of intent suitable for algorithms | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Intent modifies an actor's culpability of many types of wrongdoing. Autonomous
Algorithmic Agents have the capability of causing harm, and whilst their
current lack of legal personhood precludes them from committing crimes, it is
useful for a number of parties to understand under what type of intentional
mode an algorithm might transgress. From the perspective of the creator or
owner, they would like to ensure that their algorithms never intend to cause harm
by doing things that would otherwise be labelled criminal if committed by a
legal person. Prosecutors might have an interest in understanding whether the
actions of an algorithm were internally intended according to a transparent
definition of the concept. The presence or absence of intention in the
algorithmic agent might inform the court as to the complicity of its owner.
This article introduces definitions for direct, oblique (or indirect) and
ulterior intent which can be used to test for intent in an algorithmic actor.
| [
{
"version": "v1",
"created": "Tue, 8 Jun 2021 10:30:29 GMT"
}
] | 1,623,196,800,000 | [
[
"Ashton",
"Hal",
""
]
] |
2106.04866 | Nir Lipovetzky | Nir Lipovetzky | Planning for Novelty: Width-Based Algorithms for Common Problems in
Control, Planning and Reinforcement Learning | null | IJCAI 2021 Early Career Spotlight Talk | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Width-based algorithms search for solutions through a general definition of
state novelty. These algorithms have been shown to result in state-of-the-art
performance in classical planning, and have been successfully applied to
model-based and model-free settings where the dynamics of the problem are given
through simulation engines. Width-based algorithms' performance is understood
theoretically through the notion of planning width, providing polynomial
guarantees on their runtime and memory consumption. To facilitate synergies
across research communities, this paper summarizes the area of width-based
planning, and surveys current and future research directions.
| [
{
"version": "v1",
"created": "Wed, 9 Jun 2021 07:46:19 GMT"
}
] | 1,623,283,200,000 | [
[
"Lipovetzky",
"Nir",
""
]
] |
2106.05193 | Wadii Boulila Prof. | Zouhayra Ayadi, Wadii Boulila, Imed Riadh Farah | A Hybrid APM-CPGSO Approach for Constraint Satisfaction Problem Solving:
Application to Remote Sensing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Constraint satisfaction problem (CSP) has been actively used for modeling and
solving a wide range of complex real-world problems. However, it has been
proven that developing efficient methods for solving CSP, especially for large
problems, is very difficult and challenging. Existing complete methods for
problem-solving are in most cases unsuitable. Therefore, proposing hybrid
CSP-based methods for problem-solving has been of increasing interest in the
last decades. This paper aims at proposing a novel approach that combines
incomplete and complete CSP methods for problem-solving. The proposed approach
takes advantage of the group search algorithm (GSO) and the constraint
propagation (CP) methods to solve problems related to the remote sensing field.
To the best of our knowledge, this paper represents the first study that
proposes a hybridization between an improved version of GSO and CP in the
resolution of complex constraint-based problems. Experiments have been
conducted for the resolution of object recognition problems in satellite
images. Results show good performances in terms of convergence and running time
of the proposed CSP-based method compared to existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 6 Jun 2021 22:05:22 GMT"
}
] | 1,623,283,200,000 | [
[
"Ayadi",
"Zouhayra",
""
],
[
"Boulila",
"Wadii",
""
],
[
"Farah",
"Imed Riadh",
""
]
] |
2106.05348 | Pawe{\l} Matyszok | Marek Sikora (1), Pawe{\l} Matyszok (1), {\L}ukasz Wr\'obel (1)((1)
Faculty of Automatic Control, Electronics and Computer Science, Silesian
University of Technology, Akademicka 16, 44-100 Gliwice, Poland) | SCARI: Separate and Conquer Algorithm for Action Rules and
Recommendations Induction | 47 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article describes an action rule induction algorithm based on a
sequential covering approach. Two variants of the algorithm are presented. The
algorithm allows the action rule induction from a source and a target decision
class point of view. The application of rule quality measures enables the
induction of action rules that meet various quality criteria. The article also
presents a method for recommendation induction. The recommendations indicate
the actions to be taken to move a given test example, representing the source
class, to the target one. The recommendation method is based on a set of
induced action rules. The experimental part of the article presents the results
of the algorithm operation on sixteen data sets. As a result of the conducted
research, the Ac-Rules package was made available.
| [
{
"version": "v1",
"created": "Wed, 9 Jun 2021 19:27:30 GMT"
}
] | 1,623,369,600,000 | [
[
"Sikora",
"Marek",
""
],
[
"Matyszok",
"Paweł",
""
],
[
"Wróbel",
"Łukasz",
""
]
] |
2106.06768 | Victor-Alexandru Darvariu | Victor-Alexandru Darvariu, Stephen Hailes, Mirco Musolesi | Planning Spatial Networks with Monte Carlo Tree Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of goal-directed graph construction: given a starting
graph, a budget of modifications, and a global objective function, the aim is
to find a set of edges whose addition to the graph achieves the maximum
improvement in the objective (e.g., communication efficiency). This problem
emerges in many networks of great importance for society such as transportation
and critical infrastructure networks. We identify two significant shortcomings
with present methods. Firstly, they focus exclusively on network topology while
ignoring spatial information; however, in many real-world networks, nodes are
embedded in space, which yields different global objectives and governs the
range and density of realizable connections. Secondly, existing RL methods
scale poorly to large networks due to the high cost of training a model and the
scaling factors of the action space and global objectives.
In this work, we formulate this problem as a deterministic MDP. We adopt the
Monte Carlo Tree Search framework for planning in this domain, prioritizing the
optimality of final solutions over the speed of policy evaluation. We propose
several improvements over the standard UCT algorithm for this family of
problems, addressing their single-agent nature, the trade-off between the costs
of edges and their contribution to the objective, and an action space linear in
the number of nodes. We demonstrate the suitability of this approach for
improving the global efficiency and attack resilience of a variety of synthetic
and real-world networks, including Internet backbone networks and metro
systems. Our approach obtains a 24% improvement in these metrics compared to
UCT on the largest networks tested and scalability superior to previous
methods.
| [
{
"version": "v1",
"created": "Sat, 12 Jun 2021 13:01:11 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 10:30:04 GMT"
}
] | 1,645,056,000,000 | [
[
"Darvariu",
"Victor-Alexandru",
""
],
[
"Hailes",
"Stephen",
""
],
[
"Musolesi",
"Mirco",
""
]
] |
2106.06780 | Stefania Costantini | Pedro Cabalar and Stefania Costantini and Giovanni De Gasperis and
Andrea Formisano | Multi-Context Systems: Dynamics and Evolution (Pre-Print of
"Multi-context systems in dynamic environments") | 35 pages 2 figures | Annals of Mathematics and Artificial Intelligence 86, 87-120
(2019) | 10.1007/s10472-019-09622-0 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Context Systems (MCS) model in Computational Logic distributed systems
composed of heterogeneous sources, or "contexts", interacting via special rules
called "bridge rules". In this paper, we consider how to enhance flexibility
and generality in bridge-rules definition and application. In particular, we
introduce and discuss some formal extensions of MCSs useful for a practical use
in dynamic environments, and we try to provide guidelines for implementations.
| [
{
"version": "v1",
"created": "Sat, 12 Jun 2021 13:52:49 GMT"
}
] | 1,623,715,200,000 | [
[
"Cabalar",
"Pedro",
""
],
[
"Costantini",
"Stefania",
""
],
[
"De Gasperis",
"Giovanni",
""
],
[
"Formisano",
"Andrea",
""
]
] |
2106.06931 | Min Zhang | Peng Jin, Min Zhang, Jianwen Li, Li Han, Xuejun Wen | Learning on Abstract Domains: A New Approach for Verifiable Guarantee in
Reinforcement Learning | 14 pages, 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Formally verifying Deep Reinforcement Learning (DRL) systems is a challenging
task due to the dynamic continuity of system behaviors and the black-box
feature of embedded neural networks. In this paper, we propose a novel
abstraction-based approach to train DRL systems on finite abstract domains
instead of concrete system states. It yields neural networks whose input states
are finite, making hosting DRL systems directly verifiable using model checking
techniques. Our approach is orthogonal to existing DRL algorithms and
off-the-shelf model checkers. We implement a resulting prototype training and
verification framework and conduct extensive experiments on the
state-of-the-art benchmark. The results show that the systems trained in our
approach can be verified more efficiently while they retain comparable
performance against those that are trained without abstraction.
| [
{
"version": "v1",
"created": "Sun, 13 Jun 2021 06:28:40 GMT"
}
] | 1,623,715,200,000 | [
[
"Jin",
"Peng",
""
],
[
"Zhang",
"Min",
""
],
[
"Li",
"Jianwen",
""
],
[
"Han",
"Li",
""
],
[
"Wen",
"Xuejun",
""
]
] |
2106.06972 | Yapeng Jasper Hu | Yapeng Jasper Hu, Ralph van Gurp, Ashay Somai, Hugo Kooijman and Jan
S. Rellermeyer (Distributed Systems Group, Delft University of Technology) | RCURRENCY: Live Digital Asset Trading Using a Recurrent Neural
Network-based Forecasting System | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consistent alpha generation, i.e., maintaining an edge over the market,
underpins the ability of asset traders to reliably generate profits. Technical
indicators and trading strategies are commonly used tools to determine when to
buy/hold/sell assets, yet these are limited by the fact that they operate on
known values. Over the past decades, multiple studies have investigated the
potential of artificial intelligence in stock trading in conventional markets,
with some success. In this paper, we present RCURRENCY, an RNN-based trading
engine to predict data in the highly volatile digital asset market which is
able to successfully manage an asset portfolio in a live environment. By
combining asset value prediction and conventional trading tools, RCURRENCY
determines whether to buy, hold or sell digital currencies at a given point in
time. Experimental results show that, given the data of an interval $t$, a
prediction with an error of less than 0.5\% of the data at the subsequent
interval $t+1$ can be obtained. Evaluation of the system through backtesting
shows that RCURRENCY can be used to successfully not only maintain a stable
portfolio of digital assets in a simulated live environment using real
historical trading data but even increase the portfolio value over time.
| [
{
"version": "v1",
"created": "Sun, 13 Jun 2021 11:58:36 GMT"
}
] | 1,623,715,200,000 | [
[
"Hu",
"Yapeng Jasper",
"",
"Distributed Systems Group, Delft University of Technology"
],
[
"van Gurp",
"Ralph",
"",
"Distributed Systems Group, Delft University of Technology"
],
[
"Somai",
"Ashay",
"",
"Distributed Systems Group, Delft University of Technology"
],
[
"Kooijman",
"Hugo",
"",
"Distributed Systems Group, Delft University of Technology"
],
[
"Rellermeyer",
"Jan S.",
"",
"Distributed Systems Group, Delft University of Technology"
]
] |
2106.07114 | Jingwei Huang | Jingwei Huang, Wael Khallouli, Ghaith Rabadi, Mamadou Seck | Intelligent Agent for Hurricane Emergency Identification and Text
Information Extraction from Streaming Social Media Big Data | 16 pages, 3 figures, and 1 table | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents our research on leveraging social media Big Data and AI
to support hurricane disaster emergency response. The current practice of
hurricane emergency response for rescue highly relies on emergency call
centres. The more recent Hurricane Harvey event reveals the limitations of the
current systems. We use Hurricane Harvey and the associated Houston flooding as
the motivating scenario to conduct research and develop a prototype as a
proof-of-concept of using an intelligent agent as a complementary role to
support emergency centres in hurricane emergency response. This intelligent
agent is used to collect real-time streaming tweets during a natural disaster
event, to identify tweets requesting rescue, to extract key information such as
address and associated geocode, and to visualize the extracted information in
an interactive map for decision support. Our experiment shows promising
outcomes and the potential application of the research in support of hurricane
emergency response.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2021 00:12:27 GMT"
}
] | 1,623,715,200,000 | [
[
"Huang",
"Jingwei",
""
],
[
"Khallouli",
"Wael",
""
],
[
"Rabadi",
"Ghaith",
""
],
[
"Seck",
"Mamadou",
""
]
] |
2106.07211 | Renlong Jie | Renlong Jie and Junbin Gao | Differentiable Neural Architecture Search with Morphism-based
Transformable Backbone Architectures | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study aims at making the architecture search process more adaptive for
one-shot or online training. It is extended from the existing study on
differentiable neural architecture search, and we made the backbone
architecture transformable rather than fixed during the training process. As is
known, differentiable neural architecture search (DARTS) requires a pre-defined
over-parameterized backbone architecture, while its size is to be determined
manually. Also, the DARTS backbone does not introduce the Hadamard product of
two elements, which exists in both LSTM and GRU cells for recurrent nets. This
study introduces a growing mechanism for differentiable neural architecture
search based on network morphism. It enables growing of the cell structures
from small size towards large size ones with one-shot training. Two modes can
be applied in integrating the growing and original pruning process. We also
implement a recently proposed two-input backbone architecture for recurrent
neural networks. Initial experimental results indicate that our approach and
the two-input backbone structure can be quite effective compared with other
baseline architectures including LSTM, in a variety of learning tasks including
multi-variate time series forecasting and language modeling. On the other hand,
we find that dynamic network transformation is promising in improving the
efficiency of differentiable architecture search.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2021 07:56:33 GMT"
}
] | 1,623,715,200,000 | [
[
"Jie",
"Renlong",
""
],
[
"Gao",
"Junbin",
""
]
] |
2106.07288 | Xijun Li | Yingtian Tang, Han Lu, Xijun Li, Lei Chen, Mingxuan Yuan and Jia Zeng | Learning-Aided Heuristics Design for Storage System | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computer systems such as storage systems normally require transparent
white-box algorithms that are interpretable for human experts. In this work, we
propose a learning-aided heuristic design method, which automatically generates
human-readable strategies from Deep Reinforcement Learning (DRL) agents. This
method benefits from the power of deep learning but avoids the shortcoming of
its black-box property. Besides the white-box advantage, experiments in our
storage productions resource allocation scenario also show that this solution
outperforms the system's default settings and the elaborately handcrafted
strategy by human experts.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2021 10:35:11 GMT"
}
] | 1,623,715,200,000 | [
[
"Tang",
"Yingtian",
""
],
[
"Lu",
"Han",
""
],
[
"Li",
"Xijun",
""
],
[
"Chen",
"Lei",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Zeng",
"Jia",
""
]
] |
2106.07549 | Sung Hwan Jeon | Sung Hwan Jeon and Sungzoon Cho | Named Entity Normalization Model Using Edge Weight Updating Neural
Network: Assimilation Between Knowledge-Driven Graph and Data-Driven Graph | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Discriminating the matched named entity pairs or identifying the entities'
canonical forms are critical in text mining tasks. More precise named entity
normalization in text mining will benefit other subsequent text analytic
applications. We built the named entity normalization model with a novel Edge
Weight Updating Neural Network. Our proposed model when tested on four
different datasets achieved state-of-the-art results. We, next, verify our
model's performance on NCBI Disease, BC5CDR Disease, and BC5CDR Chemical
databases, which are widely used named entity normalization datasets in the
bioinformatics field. We also tested our model with our own financial named
entity normalization dataset to validate the efficacy for more general
applications. Using the constructed dataset, we differentiate named entity
pairs. Our model achieved the highest named entity normalization performances
in terms of various evaluation metrics.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2021 16:14:58 GMT"
}
] | 1,623,715,200,000 | [
[
"Jeon",
"Sung Hwan",
""
],
[
"Cho",
"Sungzoon",
""
]
] |
2106.07555 | S\'ebastien Lall\'e | S\'ebastien Lall\'e and Cristina Conati | A Framework to Counteract Suboptimal User-Behaviors in Exploratory
Learning Environments: an Application to MOOCs | The AAAI 2019 Workshop on Plan, Activity, and Intent Recognition | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While there is evidence that user-adaptive support can greatly enhance the
effectiveness of educational systems, designing such support for exploratory
learning environments (e.g., simulations) is still challenging due to the
open-ended nature of their interaction. In particular, there is little a priori
knowledge of which student's behaviors can be detrimental to learning in such
environments. To address this problem, we focus on a data-driven user-modeling
framework that uses logged interaction data to learn which behavioral or
activity patterns should trigger help during interaction with a specific
learning environment. This framework has been successfully used to provide
adaptive support in interactive learning simulations. Here we present a novel
application of this framework we are working on, namely to Massive Open Online
Courses (MOOCs), a form of exploratory environment that could greatly benefit
from adaptive support due to the large diversity of their users, but typically
lack such adaptation. We describe an experiment aimed at investigating the
value of our framework to identify student's behaviors that can justify
adapting to, and report some preliminary results.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2021 16:16:33 GMT"
}
] | 1,623,715,200,000 | [
[
"Lallé",
"Sébastien",
""
],
[
"Conati",
"Cristina",
""
]
] |
2106.07824 | Yewen Pu | Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos,
Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler,
Joshua B. Tenenbaum | Communicating Natural Programs to Humans and Machines | equal contributions: (author 1,2) and (author 3,4,5). 36th Conference
on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and
Benchmarks | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Abstraction and Reasoning Corpus (ARC) is a set of procedural tasks that
tests an agent's ability to flexibly solve novel problems. While most ARC tasks
are easy for humans, they are challenging for state-of-the-art AI. What makes
building intelligent systems that can generalize to novel situations such as
ARC difficult? We posit that the answer might be found by studying the
difference of \emph{language}: While humans readily generate and interpret
instructions in a general language, computer systems are shackled to a narrow
domain-specific language that they can precisely execute. We present LARC, the
\textit{Language-complete ARC}: a collection of natural language descriptions
by a group of human participants who instruct each other on how to solve ARC
tasks using language alone, which contains successful instructions for 88\% of
the ARC tasks. We analyze the collected instructions as `natural programs',
finding that while they resemble computer programs, they are distinct in two
ways: First, they contain a wide range of primitives; Second, they frequently
leverage communicative strategies beyond directly executable codes. We
demonstrate that these two distinctions prevent current program synthesis
techniques from leveraging LARC to its full potential, and give concrete
suggestions on how to build the next-generation program synthesizers.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 01:05:04 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Dec 2021 08:22:09 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Oct 2022 20:25:54 GMT"
},
{
"version": "v4",
"created": "Sat, 20 May 2023 01:19:06 GMT"
}
] | 1,684,800,000,000 | [
[
"Acquaviva",
"Samuel",
""
],
[
"Pu",
"Yewen",
""
],
[
"Kryven",
"Marta",
""
],
[
"Sechopoulos",
"Theodoros",
""
],
[
"Wong",
"Catherine",
""
],
[
"Ecanow",
"Gabrielle E",
""
],
[
"Nye",
"Maxwell",
""
],
[
"Tessler",
"Michael Henry",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] |
2106.07854 | Duzhen Zhang | Duzhen Zhang, Tielin Zhang, Shuncheng Jia, Xiang Cheng and Bo Xu | Population-coding and Dynamic-neurons improved Spiking Actor Network for
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the Deep Neural Networks (DNNs) as a powerful function approximator,
Deep Reinforcement Learning (DRL) has been excellently demonstrated on robotic
control tasks. Compared to DNNs with vanilla artificial neurons, the
biologically plausible Spiking Neural Network (SNN) contains a diverse
population of spiking neurons, making it naturally powerful on state
representation with spatial and temporal information. Based on a hybrid
learning framework, where a spike actor-network infers actions from states and
a deep critic network evaluates the actor, we propose a Population-coding and
Dynamic-neurons improved Spiking Actor Network (PDSAN) for efficient state
representation from two different scales: input coding and neuronal coding. For
input coding, we apply population coding with dynamically receptive fields to
directly encode each input state component. For neuronal coding, we propose
different types of dynamic-neurons (containing 1st-order and 2nd-order neuronal
dynamics) to describe much more complex neuronal dynamics. Finally, the PDSAN
is trained in conjunction with deep critic networks using the Twin Delayed Deep
Deterministic policy gradient algorithm (TD3-PDSAN). Extensive experimental
results show that our TD3-PDSAN model achieves better performance than
state-of-the-art models on four OpenAI gym benchmark tasks. It is an important
attempt to improve RL with SNN towards effective computation satisfying
biological plausibility.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 03:14:41 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Jun 2021 08:54:31 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Sep 2022 08:49:43 GMT"
}
] | 1,663,891,200,000 | [
[
"Zhang",
"Duzhen",
""
],
[
"Zhang",
"Tielin",
""
],
[
"Jia",
"Shuncheng",
""
],
[
"Cheng",
"Xiang",
""
],
[
"Xu",
"Bo",
""
]
] |
2106.07921 | Niklas Muennighoff | Niklas Muennighoff | Diagnosing the Impact of AI on Radiology in China | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence will significantly impact the work environment of
radiologists. I suggest that up to 50% of a radiologist's work in 2021 will be
performed by AI models in 2025. However, it won't increase beyond that 50%
level, as radiologists remain key for human-centered aspects of their job. I
project that few to no radiologists will be laid off in China due to the
existing supply shortage of radiology services in 2021. The application of AI
in radiology could contribute 1.7 billion USD to China's GDP in 2025. It will
further allow radiologists to start productive work up to four years earlier.
AI in radiology will positively impact the health of patients and radiologists
themselves.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 07:18:07 GMT"
}
] | 1,623,801,600,000 | [
[
"Muennighoff",
"Niklas",
""
]
] |
2106.07924 | Elad Denenberg | Elad Denenberg, Amanda Coles, and Derek Long | Improving Search by Utilizing State Information in OPTIC Planners
Compilation to LP | 8 pages, 3 figures. Preprint, last submitted to the International
Conference on Automated Planning and Scheduling (ICAPS 2021) at 21.01.2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automated planners are computer tools that allow autonomous agents to make
strategies and decisions by determining a set of actions for the agent to
take, which will carry a system from a given initial state to the desired goal
state. Many planners are domain-independent, allowing their deployment in a
variety of domains. Such is the broad family of OPTIC planners. These planners
perform Forward Search and call a Linear Programming (LP) solver multiple times
at every state to check for consistency and to set bounds on the numeric
variables. These checks can be computationally costly, especially in real-life
applications. This paper suggests a method for identifying information about
the specific state being evaluated, allowing the formulation of the equations
to facilitate better solver selection and faster LP solving. The usefulness of
the method is demonstrated in six domains and is shown to enhance performance
significantly.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 07:23:31 GMT"
}
] | 1,623,801,600,000 | [
[
"Denenberg",
"Elad",
""
],
[
"Coles",
"Amanda",
""
],
[
"Long",
"Derek",
""
]
] |
2106.07932 | Yoo Yongmin | Tak-Sung Heo, Yongmin Yoo, Yeongjoon Park, Byeong-Cheol Jo, Kyungsun
Kim | Medical Code Prediction from Discharge Summary: Document to Sequence
BERT using Sequence Attention | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clinical notes are unstructured text generated by clinicians during patient
encounters. Clinical notes are usually accompanied by a set of metadata codes
from the International Classification of Diseases(ICD). ICD code is an
important code used in various operations, including insurance, reimbursement,
medical diagnosis, etc. Therefore, it is important to classify ICD codes
quickly and accurately. However, annotating these codes is costly and
time-consuming. So we propose a model based on bidirectional encoder
representations from transformers (BERT) using the sequence attention method
for automatic ICD code assignment. We evaluate our approach on the medical
information mart for intensive care III (MIMIC-III) benchmark dataset. Our
model achieved performance of macro-averaged F1: 0.62898 and micro-averaged F1:
0.68555, performing better than the state-of-the-art model using the MIMIC-III
dataset. This study contributes a
method of using BERT that can be applied to documents and a sequence attention
method that can capture important sequence information appearing in documents.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 07:35:50 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jun 2021 05:26:19 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jul 2021 06:44:14 GMT"
},
{
"version": "v4",
"created": "Thu, 11 Nov 2021 00:30:34 GMT"
}
] | 1,636,675,200,000 | [
[
"Heo",
"Tak-Sung",
""
],
[
"Yoo",
"Yongmin",
""
],
[
"Park",
"Yeongjoon",
""
],
[
"Jo",
"Byeong-Cheol",
""
],
[
"Kim",
"Kyungsun",
""
]
] |
2106.08022 | Jialong Wang | Zheng Wang, Jialong Wang, Yuchen Guo, Zhiguo Gong | Zero-shot Node Classification with Decomposed Graph Prototype Network | Accepted by KDD 2021 | null | 10.1145/3447548.3467230 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Node classification is a central task in graph data analysis. Scarce or even
no labeled data of emerging classes is a big challenge for existing methods. A
natural question arises: can we classify the nodes from those classes that have
never been seen? In this paper, we study this zero-shot node classification
(ZNC) problem which has a two-stage nature: (1) acquiring high-quality class
semantic descriptions (CSDs) for knowledge transfer, and (2) designing a well
generalized graph-based learning model. For the first stage, we give a novel
quantitative CSDs evaluation strategy based on estimating the real class
relationships, so as to get the "best" CSDs in a completely automatic way. For
the second stage, we propose a novel Decomposed Graph Prototype Network (DGPN)
method, following the principles of locality and compositionality for zero-shot
model generalization. Finally, we conduct extensive experiments to demonstrate
the effectiveness of our solutions.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 10:13:20 GMT"
}
] | 1,623,801,600,000 | [
[
"Wang",
"Zheng",
""
],
[
"Wang",
"Jialong",
""
],
[
"Guo",
"Yuchen",
""
],
[
"Gong",
"Zhiguo",
""
]
] |
2106.08371 | Ivan Bravi | Ivan Bravi and Simon Lucas | Rinascimento: searching the behaviour space of Splendor | 11 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The use of Artificial Intelligence (AI) for play-testing is still on the
sidelines of main applications of AI in games compared to performance-oriented
game-playing. One of the main purposes of play-testing a game is gathering data
on the gameplay, highlighting good and bad features of the design of the game,
providing useful insight to the game designers for improving the design. Using
AI agents has the potential of speeding up the process dramatically. The purpose
of this research is to map the behavioural space (BSpace) of a game by using a
general method. Using the MAP-Elites algorithm we search the hyperparameter
space of Rinascimento AI agents and map it to the BSpace defined by several
behavioural metrics. This methodology was able to highlight both exemplary and
degenerated behaviours in the original game design of Splendor and two
variations. In particular, the use of event-value functions has generally shown
a remarkable improvement in the coverage of the BSpace compared to agents based
on classic score-based reward signals.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 18:46:57 GMT"
}
] | 1,623,888,000,000 | [
[
"Bravi",
"Ivan",
""
],
[
"Lucas",
"Simon",
""
]
] |
2106.08409 | Aurora Saibene | Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Elisabetta Fersini | Benchmark dataset of memes with text transcriptions for automatic
detection of multi-modal misogynistic content | null | Data in brief 44 (2022): 108526 | 10.1016/j.dib.2022.108526 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper we present a benchmark dataset generated as part of a project
for automatic identification of misogyny within online content, which focuses
in particular on memes. The benchmark here described is composed of 800 memes
collected from the most popular social media platforms, such as Facebook,
Twitter, Instagram and Reddit, and consulting websites dedicated to collection
and creation of memes. To gather misogynistic memes, specific keywords that
refer to misogynistic content have been considered as search criteria,
considering different manifestations of hatred against women, such as body
shaming, stereotyping, objectification and violence. In parallel, memes with no
misogynist content have been manually downloaded from the same web sources.
Among all the collected memes, three domain experts have selected a dataset of
800 memes equally balanced between misogynistic and non-misogynistic ones. This
dataset has been validated through a crowdsourcing platform, involving 60
subjects for the labelling process, in order to collect three evaluations for
each instance. Two further binary labels have been collected from both the
experts and the crowdsourcing platform, for memes evaluated as misogynistic,
concerning aggressiveness and irony. Finally for each meme, the text has been
manually transcribed. The dataset provided is thus composed of the 800 memes,
the labels given by the experts and those obtained by the crowdsourcing
validation, and the transcribed texts. This data can be used to approach the
problem of automatic detection of misogynistic content on the Web relying on
both textual and visual cues, facing phenomena that are growing every day,
such as cybersexism and technology-facilitated violence.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 20:01:28 GMT"
}
] | 1,665,100,800,000 | [
[
"Gasparini",
"Francesca",
""
],
[
"Rizzi",
"Giulia",
""
],
[
"Saibene",
"Aurora",
""
],
[
"Fersini",
"Elisabetta",
""
]
] |
2106.08452 | Matthias Knorr | Ricardo Ferreira, Carolina Lopes, Ricardo Gon\c{c}alves, Matthias
Knorr, Ludwig Krippahl, Jo\~ao Leite | Deep Neural Networks for Approximating Stream Reasoning with C-SPARQL | Accepted at the 20th EPIA Conference on Artificial Intelligence, EPIA
2021; update on previous version - data on optimizer and loss added for CNNs
in the appendix | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The amount of information produced, whether by newspapers, blogs and social
networks, or by monitoring systems, is increasing rapidly. Processing all this
data in real-time, while taking into consideration advanced knowledge about the
problem domain, is challenging, but required in scenarios where assessing
potential risks in a timely fashion is critical. C-SPARQL, a language for
continuous queries over streams of RDF data, is one of the more prominent
approaches in stream reasoning that provides such continuous inference
capabilities over dynamic data that go beyond mere stream processing. However,
it has been shown that, in the presence of huge amounts of data, C-SPARQL may
not be able to answer queries in time, in particular when the frequency of
incoming data is higher than the time required for reasoning with that data. In
this paper, we investigate whether reasoning with C-SPARQL can be approximated
using Recurrent Neural Networks and Convolutional Neural Networks, two neural
network architectures that have been shown to be well-suited for time series
forecasting and time series classification, to leverage on their higher
processing speed once the network has been trained. We consider a variety of
different kinds of queries and obtain overall positive results with high
accuracies while improving processing time often by several orders of
magnitude.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 21:51:47 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jul 2021 11:42:29 GMT"
}
] | 1,626,652,800,000 | [
[
"Ferreira",
"Ricardo",
""
],
[
"Lopes",
"Carolina",
""
],
[
"Gonçalves",
"Ricardo",
""
],
[
"Knorr",
"Matthias",
""
],
[
"Krippahl",
"Ludwig",
""
],
[
"Leite",
"João",
""
]
] |
2106.08457 | Ricardo Gon\c{c}alves | Jo\~ao Ferreira, Diogo Lavado, Ricardo Gon\c{c}alves, Matthias Knorr,
Ludwig Krippahl, and Jo\~ao Leite | Faster than LASER -- Towards Stream Reasoning with Deep Neural Networks | Extended version of EPIA 21 paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With the constant increase of available data in various domains, such as the
Internet of Things, Social Networks or Smart Cities, it has become fundamental
that agents are able to process and reason with such data in real time. Whereas
reasoning over time-annotated data with background knowledge may be
challenging, due to the volume and velocity in which such data is being
produced, such complex reasoning is necessary in scenarios where agents need to
discover potential problems and this cannot be done with simple stream
processing techniques. Stream Reasoners aim at bridging this gap between
reasoning and stream processing and LASER is such a stream reasoner designed to
analyse and perform complex reasoning over streams of data. It is based on
LARS, a rule-based logical language extending Answer Set Programming, and it
has shown better runtime results than other state-of-the-art stream reasoning
systems. Nevertheless, for high levels of data throughput even LASER may be
unable to compute answers in a timely fashion. In this paper, we study whether
Convolutional and Recurrent Neural Networks, which have been shown to be
particularly well-suited for time series forecasting and classification, can be
trained to approximate reasoning with LASER, so that agents can benefit from
their high processing speed.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 22:06:12 GMT"
}
] | 1,623,888,000,000 | [
[
"Ferreira",
"João",
""
],
[
"Lavado",
"Diogo",
""
],
[
"Gonçalves",
"Ricardo",
""
],
[
"Knorr",
"Matthias",
""
],
[
"Krippahl",
"Ludwig",
""
],
[
"Leite",
"João",
""
]
] |
2106.08482 | Varun Kumar Vijay | Varun Kumar Vijay and Hassam Sheikh and Somdeb Majumdar and Mariano
Phielipp | Minimizing Communication while Maximizing Performance in Multi-Agent
Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Inter-agent communication can significantly increase performance in
multi-agent tasks that require co-ordination to achieve a shared goal. Prior
work has shown that it is possible to learn inter-agent communication protocols
using multi-agent reinforcement learning and message-passing network
architectures. However, these models use an unconstrained broadcast
communication model, in which an agent communicates with all other agents at
every step, even when the task does not require it. In real-world applications,
where communication may be limited by system constraints like bandwidth, power
and network capacity, one might need to reduce the number of messages that are
sent. In this work, we explore a simple method of minimizing communication
while maximizing performance in multi-task learning: simultaneously optimizing
a task-specific objective and a communication penalty. We show that the
objectives can be optimized using Reinforce and the Gumbel-Softmax
reparameterization. We introduce two techniques to stabilize training: 50%
training and message forwarding. Training with the communication penalty on
only 50% of the episodes prevents our models from turning off their outgoing
messages. Second, repeating messages received previously helps models retain
information, and further improves performance. With these techniques, we show
that we can reduce communication by 75% with no loss of performance.
| [
{
"version": "v1",
"created": "Tue, 15 Jun 2021 23:13:51 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Jun 2021 19:49:32 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Dec 2021 18:53:46 GMT"
}
] | 1,639,008,000,000 | [
[
"Vijay",
"Varun Kumar",
""
],
[
"Sheikh",
"Hassam",
""
],
[
"Majumdar",
"Somdeb",
""
],
[
"Phielipp",
"Mariano",
""
]
] |
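To make the mechanism described in 2106.08482 more concrete, here is a hedged sketch of a message gate trained with the Gumbel-Softmax reparameterization under a communication penalty. The module names, dimensions and the placeholder task loss are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): a message gate trained with
# Gumbel-Softmax so an agent learns when to send, under a communication penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMessenger(nn.Module):
    def __init__(self, obs_dim, msg_dim):
        super().__init__()
        self.msg_head = nn.Linear(obs_dim, msg_dim)   # message content
        self.gate_head = nn.Linear(obs_dim, 2)        # logits: [no-send, send]

    def forward(self, obs, tau=1.0):
        logits = self.gate_head(obs)
        # Differentiable binary decision via Gumbel-Softmax (hard one-hot forward,
        # straight-through gradients on the backward pass).
        gate = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1:]  # 1 if send, else 0
        msg = torch.tanh(self.msg_head(obs)) * gate    # zeroed out when not sent
        return msg, gate

# Toy objective: placeholder task loss + penalty proportional to messages sent.
model = GatedMessenger(obs_dim=8, msg_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(32, 8)
msg, gate = model(obs)
task_loss = msg.pow(2).mean()             # stands in for the task-specific objective
comm_penalty = 0.1 * gate.mean()          # fewer messages -> lower penalty
(task_loss + comm_penalty).backward()
opt.step()
```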
2106.08500 | Loc Hoang | Loc Hoang and Udit Agarwal and Gurbinder Gill and Roshan Dathathri and
Abhik Seal and Brian Martin and Keshav Pingali | Optimizing Graph Transformer Networks with Graph-based Techniques | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph transformer networks (GTN) are a variant of graph convolutional
networks (GCN) that are targeted to heterogeneous graphs in which nodes and
edges have associated type information that can be exploited to improve
inference accuracy. GTNs learn important metapaths in the graph, create
weighted edges for these metapaths, and use the resulting graph in a GCN.
Currently, the only available implementation of GTNs uses dense matrix
multiplication to find metapaths. Unfortunately, the space overhead of this
approach can be large, so in practice it is used only for small graphs. In
addition, the matrix-based implementation is not fine-grained enough to use
random-walk based methods to optimize metapath finding. In this paper, we
present a graph-based formulation and implementation of the GTN metapath
finding problem. This graph-based formulation has two advantages over the
matrix-based approach. First, it is more space-efficient than the original GTN
implementation and more compute-efficient for metapath sizes of practical
interest. Second, it permits us to implement a sampling method that reduces the
number of metapaths that must be enumerated, allowing the implementation to be
used for larger graphs and larger metapath sizes. Experimental results show
that our implementation is $6.5\times$ faster than the original GTN
implementation on average for a metapath length of 4, and our sampling
implementation is $155\times$ faster on average than this implementation
without compromising on the accuracy of the GTN.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 00:54:24 GMT"
}
] | 1,623,888,000,000 | [
[
"Hoang",
"Loc",
""
],
[
"Agarwal",
"Udit",
""
],
[
"Gill",
"Gurbinder",
""
],
[
"Dathathri",
"Roshan",
""
],
[
"Seal",
"Abhik",
""
],
[
"Martin",
"Brian",
""
],
[
"Pingali",
"Keshav",
""
]
] |
2106.08670 | Vimukthini Pinto | Vimukthini Pinto, Cheng Xue, Chathura Nagoda Gamage, Matthew
Stephenson and Jochen Renz | The Difficulty of Novelty Detection in Open-World Physical Domains: An
Application to Angry Birds | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting and responding to novel situations in open-world environments is a
key capability of human cognition and is a persistent problem for AI systems.
In an open-world, novelties can appear in many different forms and may be easy
or hard to detect. Therefore, to accurately evaluate the novelty detection
capability of AI systems, it is necessary to investigate how difficult it may
be to detect different types of novelty. In this paper, we propose a
qualitative physics-based method to quantify the difficulty of novelty
detection focusing on open-world physical domains. We apply our method in the
popular physics simulation game Angry Birds, and conduct a user study across
different novelties to validate our method. Results indicate that our
calculated detection difficulties are in line with those of human users.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 10:14:09 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 07:41:19 GMT"
}
] | 1,687,824,000,000 | [
[
"Pinto",
"Vimukthini",
""
],
[
"Xue",
"Cheng",
""
],
[
"Gamage",
"Chathura Nagoda",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Renz",
"Jochen",
""
]
] |
2106.08732 | Li Xiao | Hao Chen, Fuzhen Zhuang, Li Xiao, Ling Ma, Haiyan Liu, Ruifang Zhang,
Huiqin Jiang, Qing He | AMA-GCN: Adaptive Multi-layer Aggregation Graph Convolutional Network
for Disease Prediction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Recently, Graph Convolutional Networks (GCNs) have proven to be a powerful
means for Computer Aided Diagnosis (CADx). This approach requires building a
population graph to aggregate structural information, where the graph adjacency
matrix represents the relationship between nodes. Until now, this adjacency
matrix has usually been defined manually based on phenotypic information. In this
paper, we propose an encoder that automatically selects the appropriate
phenotypic measures according to their spatial distribution, and uses the text
similarity awareness mechanism to calculate the edge weights between nodes. The
encoder can automatically construct the population graph using phenotypic
measures which have a positive impact on the final results, and further
realizes the fusion of multimodal information. In addition, a novel graph
convolutional network architecture using a multi-layer aggregation mechanism is
proposed. The structure can obtain deep structural information while suppressing
over-smoothing, and it increases the similarity between nodes of the same type.
Experimental results on two databases show that our method can significantly
improve the diagnostic accuracy for Autism spectrum disorder and breast cancer,
indicating its universality in leveraging multimodal data for disease
prediction.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 12:13:23 GMT"
}
] | 1,623,888,000,000 | [
[
"Chen",
"Hao",
""
],
[
"Zhuang",
"Fuzhen",
""
],
[
"Xiao",
"Li",
""
],
[
"Ma",
"Ling",
""
],
[
"Liu",
"Haiyan",
""
],
[
"Zhang",
"Ruifang",
""
],
[
"Jiang",
"Huiqin",
""
],
[
"He",
"Qing",
""
]
] |
2106.09013 | Yachen Tang | Yachen Tang, Haiyun Han, Xianmao Yu, Jing Zhao, Guangyi Liu, and
Longfei Wei | An Intelligent Question Answering System based on Power Knowledge Graph | 5 pages,6 figures, IEEE General Meeting 2020 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intelligent question answering (IQA) system can accurately capture users'
search intention by understanding the natural language questions, searching
relevant content efficiently from a massive knowledge-base, and returning the
answer directly to the user. Since the IQA system can save inestimable time and
workforce in data search and reasoning, it has received more and more attention
in data science and artificial intelligence. This article introduces a domain
knowledge graph built using graph database and graph computing technologies from
massive heterogeneous data in electric power. It then proposes an IQA system
based on the electrical power knowledge graph that extracts the intent and
constraints of natural language questions using natural language processing
(NLP) methods, constructs graph data query statements via knowledge reasoning,
and completes accurate knowledge search and analysis to provide users
with an intuitive visualization. This method tightly combines the characteristics
of knowledge graphs and graph computing, realizing high-speed multi-hop
knowledge reasoning and analysis over a massive knowledge-base. The proposed
work can also provide a basis for context-aware intelligent question
answering.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 17:57:51 GMT"
}
] | 1,623,888,000,000 | [
[
"Tang",
"Yachen",
""
],
[
"Han",
"Haiyun",
""
],
[
"Yu",
"Xianmao",
""
],
[
"Zhao",
"Jing",
""
],
[
"Liu",
"Guangyi",
""
],
[
"Wei",
"Longfei",
""
]
] |
2106.09086 | Hengyuan Hu | Hengyuan Hu, Adam Lerer, Noam Brown, Jakob Foerster | Learned Belief Search: Efficiently Improving Policies in Partially
Observable Settings | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Search is an important tool for computing effective policies in single- and
multi-agent environments, and has been crucial for achieving superhuman
performance in several benchmark fully and partially observable games. However,
one major limitation of prior search approaches for partially observable
environments is that the computational cost scales poorly with the amount of
hidden information. In this paper we present \emph{Learned Belief Search}
(LBS), a computationally efficient search procedure for partially observable
environments. Rather than maintaining an exact belief distribution, LBS uses an
approximate auto-regressive counterfactual belief that is learned as a
supervised task. In multi-agent settings, LBS uses a novel public-private model
architecture for underlying policies in order to efficiently evaluate these
policies during rollouts. In the benchmark domain of Hanabi, LBS can obtain 55%
~ 91% of the benefit of exact search while reducing compute requirements by
$35.8 \times$ ~ $4.6 \times$, allowing it to scale to larger settings that were
inaccessible to previous search methods.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 19:00:53 GMT"
}
] | 1,623,974,400,000 | [
[
"Hu",
"Hengyuan",
""
],
[
"Lerer",
"Adam",
""
],
[
"Brown",
"Noam",
""
],
[
"Foerster",
"Jakob",
""
]
] |
2106.09106 | Scott Cheng-Hsin Yang | Tomas Folke, ZhaoBin Li, Ravi B. Sojitra, Scott Cheng-Hsin Yang, and
Patrick Shafto | Explainable AI for Natural Adversarial Images | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Adversarial images highlight how vulnerable modern image classifiers are to
perturbations outside of their training set. Human oversight might mitigate
this weakness, but depends on humans understanding the AI well enough to
predict when it is likely to make a mistake. In previous work we have found
that humans tend to assume that the AI's decision process mirrors their own.
Here we evaluate if methods from explainable AI can disrupt this assumption to
help participants predict AI classifications for adversarial and standard
images. We find that both saliency maps and examples facilitate catching AI
errors, but their effects are not additive, and saliency maps are more
effective than examples.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 20:19:04 GMT"
}
] | 1,623,974,400,000 | [
[
"Folke",
"Tomas",
""
],
[
"Li",
"ZhaoBin",
""
],
[
"Sojitra",
"Ravi B.",
""
],
[
"Yang",
"Scott Cheng-Hsin",
""
],
[
"Shafto",
"Patrick",
""
]
] |
2106.09225 | Monireh Ebrahimi | Monireh Ebrahimi, Aaron Eberhart, Pascal Hitzler | On the Capabilities of Pointer Networks for Deep Deductive Reasoning | 14 pages, 1 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The importance of building neural networks that can learn to reason has been
well recognized in the neuro-symbolic community. In this paper, we apply neural
pointer networks for conducting reasoning over symbolic knowledge bases. In
doing so, we explore the benefits and limitations of encoder-decoder
architectures in general and pointer networks in particular for developing
accurate, generalizable and robust neuro-symbolic reasoners. Based on our
experimental results, pointer networks perform remarkably well across multiple
reasoning tasks while outperforming the previously reported state of the art by
a significant margin. We observe that pointer networks preserve their
performance even when challenged with knowledge graphs from domains/vocabularies
they have never encountered before. To the best of our knowledge, this is the
first study on neuro-symbolic reasoning using Pointer Networks. We hope our
impressive results on these reasoning problems will encourage broader
exploration of pointer networks' capabilities for reasoning over more complex
logics and for other neuro-symbolic problems.
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 03:25:20 GMT"
}
] | 1,623,974,400,000 | [
[
"Ebrahimi",
"Monireh",
""
],
[
"Eberhart",
"Aaron",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
2106.09230 | Bla\v{z} \v{S}krlj | Timen Stepi\v{s}nik Perdih, Senja Pollak, Bla\v{z} \v{S}krlj | JSI at the FinSim-2 task: Ontology-Augmented Financial Concept
Classification | null | null | 10.1145/3442442.3451383 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontologies are increasingly used for machine reasoning over the last few
years. They can provide explanations of concepts or be used for concept
classification if there exists a mapping from the desired labels to the
relevant ontology. Another advantage of using ontologies is that they do not
need a learning process, meaning that we need neither training data nor training
time before using them. This paper presents a practical use of an ontology for a
classification problem from the financial domain. It first transforms a given
ontology to a graph and proceeds with generalization with the aim to find
common semantic descriptions of the input sets of financial concepts.
We present a solution to the shared task on Learning Semantic Similarities
for the Financial Domain (FinSim-2 task). The task is to design a system that
can automatically classify concepts from the Financial domain into the most
relevant hypernym concept in an external ontology - the Financial Industry
Business Ontology. We propose a method that maps given concepts to the
mentioned ontology and performs a graph search for the most relevant hypernyms.
We also employ a word vectorization method and a machine learning classifier to
supplement the method with a ranked list of labels for each concept.
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 03:56:15 GMT"
}
] | 1,623,974,400,000 | [
[
"Perdih",
"Timen Stepišnik",
""
],
[
"Pollak",
"Senja",
""
],
[
"\\v{Skrlj}",
"Blaž",
""
]
] |
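The ontology-based hypernym search outlined in 2106.09230 can be illustrated with a toy fragment. The mini hypernym graph and candidate labels below are invented for illustration and are not part of FIBO:

```python
# Minimal sketch (assumed toy data, not FIBO itself): map a financial concept to a
# node in a hypernym graph and walk upwards to find the nearest candidate hypernym.
from collections import deque

hypernyms = {                      # child -> parent edges of a tiny ontology fragment
    "zero-coupon bond": "bond",
    "bond": "debt instrument",
    "debt instrument": "financial instrument",
    "equity index": "index",
    "index": "financial instrument",
}
candidate_labels = {"bond", "index", "financial instrument"}

def nearest_hypernym(concept):
    """Breadth-first walk up the hypernym chain; return the first candidate label."""
    queue = deque([(concept, 0)])
    while queue:
        node, depth = queue.popleft()
        if node in candidate_labels:
            return node, depth
        if node in hypernyms:
            queue.append((hypernyms[node], depth + 1))
    return None, None

print(nearest_hypernym("zero-coupon bond"))   # ('bond', 1)
```

In the paper the graph search is supplemented by a word-vector ranking; the sketch only shows the graph-walk half of that pipeline.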
2106.09258 | Konstantinos Kotis | Evangelos Paparidis and Konstantinos Kotis | Knowledge Graphs and Machine Learning in biased C4I applications | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces our position on the critical issue of bias that
recently appeared in AI applications. Specifically, we discuss the combination
of current technologies used in AI applications, i.e., Machine Learning and
Knowledge Graphs, and point to their involvement in (de)biased applications of
the C4I domain. Although this is a wider problem that currently emerges from
different application domains, bias appears more critical in C4I than in other
domains due to its security-related nature.
towards debiasing C4I applications, we acknowledge the immature aspect of this
topic within the Knowledge Graph and Semantic Web communities.
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 05:53:46 GMT"
}
] | 1,623,974,400,000 | [
[
"Paparidis",
"Evangelos",
""
],
[
"Kotis",
"Konstantinos",
""
]
] |
2106.09281 | Haile Haile Misgna | Haile Misgna, Moges Ahmed and Anubhav Kumar | MatES: Web-based Forward Chaining Expert System for Maternal Care | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The solution to prevent maternal complications are known and preventable by
trained health professionals. But in countries like Ethiopia where the patient
to physician ratio is 1 doctor to 1000 patients, maternal mortality and
morbidity rate is high. To fill the gap of highly trained health professionals,
Ethiopia introduced health extension programs. Task shifting to health
extension workers (HEWs) contributed in decreasing mortality and morbidity rate
in Ethiopia. Knowledge-gap has been one of the major challenges to HEWs. The
reasons are trainings are not given in regular manner, there is no midwife,
gynecologists or doctors around for consultation, and all guidelines are
paper-based which are easily exposed to damage. In this paper, we describe the
design and implementation of a web-based expert system for maternal care. We
only targeted the major 10 diseases and complication of maternal health issues
seen in Sub-Saharan Africa. The expert system can be accessed through the use
of web browsers from computers as well as smart phones. Forward chaining
rule-based expert system is used in order to give suggestions and create a new
knowledge from the knowledge-base. This expert system can be used to train HEWs
in the field of maternal health.
Keywords: expert system, maternal care, forward-chaining, rule-based expert
system, PHLIPS
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 07:06:58 GMT"
}
] | 1,623,974,400,000 | [
[
"Misgna",
"Haile",
""
],
[
"Ahmed",
"Moges",
""
],
[
"Kumar",
"Anubhav",
""
]
] |
2106.09325 | Mohammad Mohammadamini | Zhila Amini, Mohammad Mohammadamini (LIA), Hawre Hosseini, Mehran
Mansouri, Daban Jaff | Central Kurdish machine translation: First large scale parallel corpus
and experiments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the computational processing of Kurdish has experienced a relative
increase, the machine translation of this language seems to be lacking a
considerable body of scientific work. This is in part due to the lack of
resources especially curated for this task. In this paper, we present the first
large scale parallel corpus of Central Kurdish-English, Awta, containing
229,222 pairs of manually aligned translations. Our corpus is collected from
different text genres and domains in an attempt to build more robust and
real-world applications of machine translation. We make a portion of this
corpus publicly available in order to foster research in this area. Further, we
build several neural machine translation models in order to benchmark the task
of Kurdish machine translation. Additionally, we perform extensive experimental
analysis of results in order to identify the major challenges that Central
Kurdish machine translation faces. These challenges include language-dependent
and-independent ones as categorized in this paper, the first group of which are
aware of Central Kurdish linguistic properties on different morphological,
syntactic and semantic levels. Our best performing systems achieve 22.72 and
16.81 in BLEU score for Ku$\rightarrow$EN and En$\rightarrow$Ku, respectively.
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 08:41:53 GMT"
}
] | 1,623,974,400,000 | [
[
"Amini",
"Zhila",
"",
"LIA"
],
[
"Mohammadamini",
"Mohammad",
"",
"LIA"
],
[
"Hosseini",
"Hawre",
""
],
[
"Mansouri",
"Mehran",
""
],
[
"Jaff",
"Daban",
""
]
] |
2106.09344 | Claire Palmer Dr | Claire Palmer, Ben Roullier, Muhammad Aamir, Leonardo Stella, Uchenna
Diala, Ashiq Anjum, Frank Mcquade, Keith Cox and Alex Calvert | Virtual Reality based Digital Twin System for remote laboratories and
online practical learning | 6 pages, 4 figures, accepted for publication ICMR2021 18th
International Conference in Manufacturing Research Virtual Conference hosted
by the University of Derby, UK 7 - 10 September 2021 | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | There is a need for remote learning and virtual learning applications such as
virtual reality (VR) and tablet-based solutions, as the current pandemic has
demonstrated. Creating complex learning scenarios is highly time-consuming for
developers and can take over a year. There is a need to provide a simple
method to enable lecturers to create their own content for their laboratory
tutorials. Research is currently being undertaken into developing generic
models to enable the semi-automatic creation of a virtual learning application.
A case study describing the creation of a virtual learning application for an
electrical laboratory tutorial is presented.
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 09:38:24 GMT"
}
] | 1,623,974,400,000 | [
[
"Palmer",
"Claire",
""
],
[
"Roullier",
"Ben",
""
],
[
"Aamir",
"Muhammad",
""
],
[
"Stella",
"Leonardo",
""
],
[
"Diala",
"Uchenna",
""
],
[
"Anjum",
"Ashiq",
""
],
[
"Mcquade",
"Frank",
""
],
[
"Cox",
"Keith",
""
],
[
"Calvert",
"Alex",
""
]
] |
2106.09455 | Annette Knoedler | Michael Arnemann, Per Olof Beckemeier, Thomas Bertram, Michael Eder,
Maximilian Erschig, Matthias Feiner, Francisco Javier Fernandez Garcia,
Frederic Foerster, Ruediger Haas, Martin Kipfmueller, Jan Kotschenreuther,
Bernd Langer, Ivan Lozada Rodriguez, Thomas Meibert, Simon Ottenhaus, Stefan
Paschek, Lars Pfotzer, Michael M. Roth, Tim Schanz, Philip Scherer, Janine
Schwienke, Martin Simon, Robin Tenscher-Philipp | Conference proceedings KI4Industry AI for SMEs -- The online congress
for practical entry into AI for SMEs | Editors: Matthias Feiner and Manuel Schoellhorn, 72 pages, 48
figures, in German, Conference proceedings KI 4 Industry, 79 pages in total | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Institute of Materials and Processes, IMP, of the University of Applied
Sciences in Karlsruhe, Germany, in cooperation with VDI Verein Deutscher
Ingenieure e.V., the AEN Automotive Engineering Network and their cooperation
partners, presents its competences in AI-based solution approaches in the
production engineering field. The online congress KI 4 Industry on November 12
and 13, 2020, showed what opportunities the use of artificial intelligence
offers for medium-sized manufacturing companies, SMEs, and where potential
fields of application lie. The main purpose of KI 4 Industry is to increase the
transfer of knowledge, research and technology from universities to small and
medium-sized enterprises, to demystify the term AI and to encourage companies
to use AI-based solutions in their own value chain or in their products.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2021 15:08:01 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jul 2021 16:36:48 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Aug 2021 23:49:57 GMT"
}
] | 1,628,467,200,000 | [
[
"Arnemann",
"Michael",
""
],
[
"Beckemeier",
"Per Olof",
""
],
[
"Bertram",
"Thomas",
""
],
[
"Eder",
"Michael",
""
],
[
"Erschig",
"Maximilian",
""
],
[
"Feiner",
"Matthias",
""
],
[
"Garcia",
"Francisco Javier Fernandez",
""
],
[
"Foerster",
"Frederic",
""
],
[
"Haas",
"Ruediger",
""
],
[
"Kipfmueller",
"Martin",
""
],
[
"Kotschenreuther",
"Jan",
""
],
[
"Langer",
"Bernd",
""
],
[
"Rodriguez",
"Ivan Lozada",
""
],
[
"Meibert",
"Thomas",
""
],
[
"Ottenhaus",
"Simon",
""
],
[
"Paschek",
"Stefan",
""
],
[
"Pfotzer",
"Lars",
""
],
[
"Roth",
"Michael M.",
""
],
[
"Schanz",
"Tim",
""
],
[
"Scherer",
"Philip",
""
],
[
"Schwienke",
"Janine",
""
],
[
"Simon",
"Martin",
""
],
[
"Tenscher-Philipp",
"Robin",
""
]
] |
2106.09643 | Arpit Bansal | Arpit Bansal, Micah Goldblum, Valeriia Cherepanova, Avi Schwarzschild,
C. Bayan Bruss, Tom Goldstein | MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class-imbalanced data, in which some classes contain far more samples than
others, is ubiquitous in real-world applications. Standard techniques for
handling class-imbalance usually work by training on a re-weighted loss or on
re-balanced data. Unfortunately, training overparameterized neural networks on
such objectives causes rapid memorization of minority class data. To avoid this
trap, we harness meta-learning, which uses both an ''outer-loop'' and an
''inner-loop'' loss, each of which may be balanced using different strategies.
We evaluate our method, MetaBalance, on image classification, credit-card fraud
detection, loan default prediction, and facial recognition tasks with severely
imbalanced data, and we find that MetaBalance outperforms a wide array of
popular re-sampling strategies.
| [
{
"version": "v1",
"created": "Thu, 17 Jun 2021 16:42:50 GMT"
}
] | 1,623,974,400,000 | [
[
"Bansal",
"Arpit",
""
],
[
"Goldblum",
"Micah",
""
],
[
"Cherepanova",
"Valeriia",
""
],
[
"Schwarzschild",
"Avi",
""
],
[
"Bruss",
"C. Bayan",
""
],
[
"Goldstein",
"Tom",
""
]
] |
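A simplified, first-order sketch of the idea in 2106.09643 -- pairing an inner update on a re-weighted loss with an outer update on a differently balanced batch -- is shown below. The toy data, class weights and single-step loops are assumptions of this illustration, not the MetaBalance algorithm itself:

```python
# Simplified sketch of the idea (not the paper's algorithm): an inner update uses a
# class-re-weighted loss, while the outer update is taken on a separately balanced batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x_imb, y_imb = torch.randn(64, 10), (torch.rand(64) < 0.1).long()   # imbalanced batch
x_bal, y_bal = torch.randn(16, 10), (torch.rand(16) < 0.5).long()   # balanced batch
w = torch.tensor([1.0, 9.0])                                        # up-weight minority

# Inner step: class-re-weighted loss on the imbalanced data.
opt.zero_grad()
F.cross_entropy(model(x_imb), y_imb, weight=w).backward()
opt.step()

# Outer step: plain loss on the balanced batch guards against minority memorization.
opt.zero_grad()
F.cross_entropy(model(x_bal), y_bal).backward()
opt.step()
```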
2106.10138 | Irfansha Shaik | Irfansha Shaik, Jaco van de Pol | Classical Planning as QBF without Grounding (extended version) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Most classical planners use grounding as a preprocessing step, essentially
reducing planning to propositional logic. However, grounding involves
instantiating all action rules with concrete object combinations, and results
in large encodings for SAT/QBF-based planners. This severe cost in memory
becomes a main bottleneck when actions have many parameters, such as in the
Organic Synthesis problems from the IPC 2018 competition. We provide a compact
QBF encoding that is logarithmic in the number of objects and avoids grounding
completely, by using universal quantification for object combinations. We show
that we can solve some of the Organic Synthesis problems, which could not be
handled before by any SAT/QBF based planners due to grounding.
| [
{
"version": "v1",
"created": "Fri, 18 Jun 2021 14:06:57 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Dec 2021 10:27:25 GMT"
}
] | 1,640,044,800,000 | [
[
"Shaik",
"Irfansha",
""
],
[
"van de Pol",
"Jaco",
""
]
] |
2106.10832 | Stefan Sarkadi | OHAAI Collaboration: Andreas Brannstrom, Federico Castagna, Theo
Duchatelle, Matt Foulis, Timotheus Kampik, Isabelle Kuhlmann, Lars Malmqvist,
Mariela Morveli-Espinoza, Jack Mumford, Stipe Pandzic, Robin Schaefer, Luke
Thorburn, Andreas Xydis, Antonio Yuste-Ginel, Heng Zheng | Online Handbook of Argumentation for AI: Volume 2 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This volume contains revised versions of the papers selected for the second
volume of the Online Handbook of Argumentation for AI (OHAAI). Previously,
formal theories of argument and argument interaction have been proposed and
studied, and this has led to the more recent study of computational models of
argument. Argumentation, as a field within artificial intelligence (AI), is
highly relevant for researchers interested in symbolic representations of
knowledge and defeasible reasoning. The purpose of this handbook is to provide
an open access and curated anthology for the argumentation research community.
OHAAI is designed to serve as a research hub to keep track of the latest and
upcoming PhD-driven research on the theory and application of argumentation in
all areas related to AI.
| [
{
"version": "v1",
"created": "Wed, 16 Jun 2021 13:34:13 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Jun 2021 11:24:19 GMT"
}
] | 1,624,492,800,000 | [
[
"OHAAI Collaboration",
"",
""
],
[
"Brannstrom",
"Andreas",
""
],
[
"Castagna",
"Federico",
""
],
[
"Duchatelle",
"Theo",
""
],
[
"Foulis",
"Matt",
""
],
[
"Kampik",
"Timotheus",
""
],
[
"Kuhlmann",
"Isabelle",
""
],
[
"Malmqvist",
"Lars",
""
],
[
"Morveli-Espinoza",
"Mariela",
""
],
[
"Mumford",
"Jack",
""
],
[
"Pandzic",
"Stipe",
""
],
[
"Schaefer",
"Robin",
""
],
[
"Thorburn",
"Luke",
""
],
[
"Xydis",
"Andreas",
""
],
[
"Yuste-Ginel",
"Antonio",
""
],
[
"Zheng",
"Heng",
""
]
] |
2106.11397 | Arman Dehpanah | Arman Dehpanah, Muheeb Faizan Ghori, Jonathan Gemmell, Bamshad
Mobasher | Evaluating Team Skill Aggregation in Online Competitive Games | Accepted in IEEE Conference on Games 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main goals of online competitive games is increasing player
engagement by ensuring fair matches. These games use rating systems for
creating balanced match-ups. Rating systems leverage statistical estimation to
rate players' skills and use skill ratings to predict rank before matching
players. Skill ratings of individual players can be aggregated to compute the
skill level of a team. While research often aims to improve the accuracy of
skill estimation and fairness of match-ups, less attention has been given to
how the skill level of a team is calculated from the skill level of its
members. In this paper, we propose two new aggregation methods and compare them
with a standard approach extensively used in the research literature. We
present an exhaustive analysis of the impact of these methods on the predictive
performance of rating systems. We perform our experiments using three popular
rating systems, Elo, Glicko, and TrueSkill, on three real-world datasets
including over 100,000 battle royale and head-to-head matches. Our evaluations
show the superiority of the MAX method over the other two methods in the
majority of the tested cases, implying that the overall performance of a team
is best determined by the performance of its most skilled member. The results
of this study highlight the necessity of devising more elaborated methods for
calculating a team's performance -- methods covering different aspects of
players' behavior such as skills, strategy, or goals.
| [
{
"version": "v1",
"created": "Mon, 21 Jun 2021 20:17:36 GMT"
}
] | 1,624,406,400,000 | [
[
"Dehpanah",
"Arman",
""
],
[
"Ghori",
"Muheeb Faizan",
""
],
[
"Gemmell",
"Jonathan",
""
],
[
"Mobasher",
"Bamshad",
""
]
] |
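The aggregation question studied in 2106.11397 can be illustrated with toy ratings: the predicted outcome of a head-to-head match changes depending on whether a team's rating is the mean, sum or maximum of its members' ratings. The numbers and the Elo-style win probability below are illustrative assumptions, not the paper's datasets or rating systems:

```python
# Toy illustration (assumed numbers): aggregating member ratings into a team rating
# with different rules, then predicting the head-to-head outcome Elo-style.
def elo_win_prob(r_a, r_b, scale=400.0):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / scale))

team_a = [1400, 1550, 1900]          # one highly skilled member
team_b = [1600, 1620, 1640]          # uniformly average members

for name, agg in [("MEAN", lambda r: sum(r) / len(r)),
                  ("SUM", sum),
                  ("MAX", max)]:
    ra, rb = agg(team_a), agg(team_b)
    print(f"{name:>4}: P(team A wins) = {elo_win_prob(ra, rb):.2f}")
```

Under MAX aggregation team A is favoured because of its single strong member, which is exactly the behaviour the paper's evaluation found most predictive.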
2106.12151 | Stefan O'Toole | Stefan O'Toole, Nir Lipovetzky, Miquel Ramirez, Adrian Pearce | Width-based Lookaheads with Learnt Base Policies and Heuristics Over the
Atari-2600 Benchmark | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose new width-based planning and learning algorithms inspired by a
careful analysis of the design decisions made by previous width-based planners.
The algorithms are applied over the Atari-2600 games and our best performing
algorithm, Novelty guided Critical Path Learning (N-CPL), outperforms the
previously introduced width-based planning and learning algorithms $\pi$-IW(1),
$\pi$-IW(1)+ and $\pi$-HIW(n, 1). Furthermore, we present a taxonomy of the
Atari-2600 games according to some of their defining characteristics. This
analysis of the games provides further insight into the behaviour and
performance of the algorithms introduced. Namely, for games with large
branching factors, and games with sparse meaningful rewards, N-CPL outperforms
$\pi$-IW, $\pi$-IW(1)+ and $\pi$-HIW(n, 1).
| [
{
"version": "v1",
"created": "Wed, 23 Jun 2021 04:27:55 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Oct 2021 03:55:19 GMT"
}
] | 1,635,465,600,000 | [
[
"O'Toole",
"Stefan",
""
],
[
"Lipovetzky",
"Nir",
""
],
[
"Ramirez",
"Miquel",
""
],
[
"Pearce",
"Adrian",
""
]
] |
2106.12831 | Valentina Anita Carriero | Luigi Asprino, Valentina Anita Carriero, Valentina Presutti | Extraction of common conceptual components from multiple ontologies | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding large ontologies is still an issue, and has an impact on many
ontology engineering tasks. We describe a novel method for identifying and
extracting conceptual components from domain ontologies, which are used to
understand and compare them. The method is applied to two corpora of ontologies
in the Cultural Heritage and Conference domain, respectively. The results,
which show good quality, are evaluated by manual inspection and by correlation
with datasets and tool performance from the ontology alignment evaluation
initiative.
| [
{
"version": "v1",
"created": "Thu, 24 Jun 2021 08:33:31 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Nov 2021 09:54:43 GMT"
}
] | 1,636,070,400,000 | [
[
"Asprino",
"Luigi",
""
],
[
"Carriero",
"Valentina Anita",
""
],
[
"Presutti",
"Valentina",
""
]
] |
2106.13093 | Xianlong Zeng | Xianlong Zeng, Fanghao Song, Zhongen Li, Krerkkiat Chusap, Chang Liu | Human-in-the-loop model explanation via verbatim boundary identification
in generated neighborhoods | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The black-box nature of machine learning models limits their use in
case-critical applications, raising faithfulness and ethical concerns that lead to
trust crises. One possible way to mitigate this issue is to understand how a
(mispredicted) decision is carved out from the decision boundary. This paper
presents a human-in-the-loop approach to explain machine learning models using
verbatim neighborhood manifestation. Contrary to most of the current
eXplainable Artificial Intelligence (XAI) systems, which provide hit-or-miss
approximate explanations, our approach generates the local decision boundary of
the given instance and enables human intelligence to conclude the model
behavior. Our method can be divided into three stages: 1) a neighborhood
generation stage, which generates instances based on the given sample; 2) a
classification stage, which yields classifications on the generated instances
to carve out the local decision boundary and delineate the model behavior; and
3) a human-in-the-loop stage, which involves human to refine and explore the
neighborhood of interest. In the generation stage, a generative model is used
to generate the plausible synthetic neighbors around the given instance. After
the classification stage, the classified neighbor instances provide a
multifaceted understanding of the model behavior. Three intervention points are
provided in the human-in-the-loop stage, enabling humans to leverage their own
intelligence to interpret the model behavior. Several experiments on two
datasets are conducted, and the experimental results demonstrate the potential
of our proposed approach for boosting human understanding of the complex
machine learning model.
| [
{
"version": "v1",
"created": "Thu, 24 Jun 2021 15:24:30 GMT"
}
] | 1,624,579,200,000 | [
[
"Zeng",
"Xianlong",
""
],
[
"Song",
"Fanghao",
""
],
[
"Li",
"Zhongen",
""
],
[
"Chusap",
"Krerkkiat",
""
],
[
"Liu",
"Chang",
""
]
] |
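The three-stage workflow described in 2106.13093 can be sketched with stand-ins: a Gaussian neighborhood generator, an arbitrary black-box classifier, and a report of boundary-crossing neighbors left for human inspection. All components below are toy assumptions, not the paper's generative model:

```python
# Conceptual sketch (toy generator and classifier, not the paper's models): generate a
# neighborhood around an instance, classify it, and surface boundary-crossing neighbors
# for a human to inspect.
import numpy as np

rng = np.random.default_rng(0)
black_box = lambda X: (X.sum(axis=1) > 0).astype(int)   # stand-in for any classifier

def neighborhood(instance, n=200, scale=0.5):
    """Stage 1: plausible synthetic neighbors (here: Gaussian perturbations)."""
    return instance + rng.normal(0.0, scale, size=(n, instance.shape[0]))

x = np.array([0.2, -0.1, 0.05])
neighbors = neighborhood(x)                              # stage 1: generation
labels = black_box(neighbors)                            # stage 2: classification
flipped = neighbors[labels != black_box(x[None])[0]]     # crossed the local boundary
print(f"{len(flipped)} of {len(neighbors)} neighbors flip the prediction")
# Stage 3 (human-in-the-loop): a person refines `scale`, picks feature subsets, or
# drills into `flipped` examples to understand the local decision boundary.
```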
2106.13322 | Saveli Goldberg | Saveli Goldberg (1), Stanislav Belyaev (2), Vladimir Sluchak ((1) MGH
Radiation Oncology Department, (2) Eastern New Mexico Medical Center) | Dr. Watson type Artificial Intellect (AI) Systems | 24 pages,13 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The article proposes a new type of AI system that does not give solutions
directly but rather points toward it, friendly prompting the user with
questions and adjusting messages. Models of AI human collaboration can be
deduced from the classic literary example of interaction between Mr. Holmes and
Dr. Watson from the stories by Conan Doyle, where the highly qualified expert
Mr. Holmes answers questions posed by Dr. Watson. Here Mr. Holmes, with his
rule-based calculations, logic, and memory management, apparently plays the
role of an AI system, and Dr. Watson is the user. Looking into the same
Holmes-Watson interaction, we find and promote another model in which the AI
behaves like Dr. Watson, who, by asking questions and acting in a particular
way, helps Holmes (the AI user) make the right decisions. We call the systems
based on this principle "Dr. Watson-type systems." The article describes the
properties of such systems and introduces two particular examples: a Patient
Management System for intensive care physicians and a Data Error Prevention System.
| [
{
"version": "v1",
"created": "Wed, 23 Jun 2021 03:59:39 GMT"
}
] | 1,624,838,400,000 | [
[
"Goldberg",
"Saveli",
""
],
[
"Belyaev",
"Stanislav",
""
],
[
"Sluchak",
"Vladimir",
""
]
] |
2106.13976 | Yiheng Yao | Yiheng Yao | Explanatory Pluralism in Explainable AI | To be published in CD-MAKE 2021 conference proceedings | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The increasingly widespread application of AI models motivates increased
demand for explanations from a variety of stakeholders. However, this demand is
ambiguous because there are many types of 'explanation' with different
evaluative criteria. In the spirit of pluralism, I chart a taxonomy of types of
explanation and the associated XAI methods that can address them. When we look
to expose the inner mechanisms of AI models, we develop
Diagnostic-explanations. When we seek to render model output understandable, we
produce Explication-explanations. When we wish to form stable generalizations
of our models, we produce Expectation-explanations. Finally, when we want to
justify the usage of a model, we produce Role-explanations that situate models
within their social context. The motivation for such a pluralistic view stems
from a consideration of causes as manipulable relationships and the different
types of explanations as identifying the relevant points in AI systems we can
intervene upon to affect our desired changes. This paper reduces the ambiguity
in use of the word 'explanation' in the field of XAI, allowing practitioners
and stakeholders a useful template for avoiding equivocation and evaluating XAI
methods and putative explanations.
| [
{
"version": "v1",
"created": "Sat, 26 Jun 2021 09:02:06 GMT"
}
] | 1,624,924,800,000 | [
[
"Yao",
"Yiheng",
""
]
] |
2106.14431 | Steven Schockaert | Steven Schockaert | Modelling Monotonic and Non-Monotonic Attribute Dependencies with
Embeddings: A Theoretical Analysis | Accepted for AKBC 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | During the last decade, entity embeddings have become ubiquitous in
Artificial Intelligence. Such embeddings essentially serve as compact but
semantically meaningful representations of the entities of interest. In most
approaches, vectors are used for representing the entities themselves, as well
as for representing their associated attributes. An important advantage of
using attribute embeddings is that (some of the) semantic dependencies between
the attributes can thus be captured. However, little is known about what kinds
of semantic dependencies can be modelled in this way. The aim of this paper is
to shed light on this question, focusing on settings where the embedding of an
entity is obtained by pooling the embeddings of its known attributes. Our
particular focus is on studying the theoretical limitations of different
embedding strategies, rather than their ability to effectively learn attribute
dependencies in practice. We first show a number of negative results, revealing
that some of the most popular embedding models are not able to capture even
basic Horn rules. However, we also find that some embedding strategies are
capable, in principle, of modelling both monotonic and non-monotonic attribute
dependencies.
| [
{
"version": "v1",
"created": "Mon, 28 Jun 2021 07:29:11 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Sep 2021 08:49:51 GMT"
}
] | 1,631,664,000,000 | [
[
"Schockaert",
"Steven",
""
]
] |
2106.14977 | Sharada Mohanty | Sharada Prasanna Mohanty, Gaurav Singhal, Eric Antoine Scuccimarra,
Djilani Kebaili, Harris H\'eritier, Victor Boulanger, Marcel Salath\'e | The Food Recognition Benchmark: Using DeepLearning to Recognize Food on
Images | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The automatic recognition of food on images has numerous interesting
applications, including nutritional tracking in medical cohorts. The problem
has received significant research attention, but an ongoing public benchmark to
develop open and reproducible algorithms has been missing. Here, we report on
the setup of such a benchmark using publicly available food images sourced
through the mobile MyFoodRepo app. Through four rounds, the benchmark released
the MyFoodRepo-273 dataset constituting 24,119 images and a total of 39,325
segmented polygons categorized in 273 different classes. Models were evaluated
on private tests sets from the same platform with 5,000 images and 7,865
annotations in the final round. Top-performing models on the 273 food
categories reached a mean average precision of 0.568 (round 4) and a mean
average recall of 0.885 (round 3). We present experimental validation of round
4 results, and discuss implications of the benchmark setup designed to increase
the size and diversity of the dataset for future rounds.
| [
{
"version": "v1",
"created": "Mon, 28 Jun 2021 20:51:26 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jun 2021 10:05:21 GMT"
}
] | 1,625,097,600,000 | [
[
"Mohanty",
"Sharada Prasanna",
""
],
[
"Singhal",
"Gaurav",
""
],
[
"Scuccimarra",
"Eric Antoine",
""
],
[
"Kebaili",
"Djilani",
""
],
[
"Héritier",
"Harris",
""
],
[
"Boulanger",
"Victor",
""
],
[
"Salathé",
"Marcel",
""
]
] |
2106.15047 | Yuxia Geng | Yuxia Geng, Jiaoyan Chen, Xiang Zhuang, Zhuo Chen, Jeff Z. Pan, Juan
Li, Zonggang Yuan, Huajun Chen | Benchmarking Knowledge-driven Zero-shot Learning | Published in Journal of Web Semantics, 2022. Final version please
refer to our Github repository! | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | External knowledge (a.k.a. side information) plays a critical role in
zero-shot learning (ZSL) which aims to predict with unseen classes that have
never appeared in training data. Several kinds of external knowledge, such as
text and attribute, have been widely investigated, but they alone are limited
with incomplete semantics. Some very recent studies thus propose to use
Knowledge Graph (KG) due to its high expressivity and compatibility for
representing kinds of knowledge. However, the ZSL community is still short
of standard benchmarks for studying and comparing different external knowledge
settings and different KG-based ZSL methods. In this paper, we proposed six
resources covering three tasks, i.e., zero-shot image classification (ZS-IMGC),
zero-shot relation extraction (ZS-RE), and zero-shot KG completion (ZS-KGC).
Each resource has a normal ZSL benchmark and a KG containing semantics ranging
from text to attribute, from relational knowledge to logical expressions. We
have clearly presented these resources including their construction,
statistics, data formats and usage cases w.r.t. different ZSL methods. More
importantly, we have conducted a comprehensive benchmarking study, with two
general and state-of-the-art methods, two setting-specific methods and one
interpretable method. We discussed and compared different ZSL paradigms w.r.t.
different external knowledge settings, and found that our resources have great
potential for developing more advanced ZSL methods and more solutions for
applying KGs for augmenting machine learning. All the resources are available
at https://github.com/China-UK-ZSL/Resources_for_KZSL.
| [
{
"version": "v1",
"created": "Tue, 29 Jun 2021 01:22:49 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Nov 2021 11:27:20 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Oct 2022 06:17:11 GMT"
}
] | 1,666,915,200,000 | [
[
"Geng",
"Yuxia",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Zhuang",
"Xiang",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Pan",
"Jeff Z.",
""
],
[
"Li",
"Juan",
""
],
[
"Yuan",
"Zonggang",
""
],
[
"Chen",
"Huajun",
""
]
] |
2106.15200 | Bo Zhou | Bo Zhou, Hongsheng Zeng, Yuecheng Liu, Kejiao Li, Fan Wang, Hao Tian | Action Set Based Policy Optimization for Safe Power Grid Management | accepted by ECML PKDD 2021; 1st place in NeurIPS2020 RL challenge:
power grid management | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Maintaining the stability of the modern power grid is becoming increasingly
difficult due to fluctuating power consumption, unstable power supply coming
from renewable energies, and unpredictable accidents such as man-made and
natural disasters. As the operation on the power grid must consider its impact
on future stability, reinforcement learning (RL) has been employed to provide
sequential decision-making in power grid management. However, existing methods
have not considered the environmental constraints. As a result, the learned
policy has the risk of selecting actions that violate the constraints in
emergencies, which will escalate the issue of overloaded power lines and lead
to large-scale blackouts. In this work, we propose a novel method for this
problem, which builds on top of the search-based planning algorithm. At the
planning stage, the search space is limited to the action set produced by the
policy. The selected action strictly follows the constraints by testing its
outcome with the simulation function provided by the system. At the learning
stage, to address the problem that gradients cannot be propagated to the
policy, we introduce Evolutionary Strategies (ES) with black-box policy
optimization to improve the policy directly, maximizing returns in the long
run. In the NeurIPS 2020 Learning to Run Power Network (L2RPN) competition, our
solution safely managed the power grid and ranked first in both tracks.
| [
{
"version": "v1",
"created": "Tue, 29 Jun 2021 09:36:36 GMT"
}
] | 1,625,011,200,000 | [
[
"Zhou",
"Bo",
""
],
[
"Zeng",
"Hongsheng",
""
],
[
"Liu",
"Yuecheng",
""
],
[
"Li",
"Kejiao",
""
],
[
"Wang",
"Fan",
""
],
[
"Tian",
"Hao",
""
]
] |
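The constraint-respecting action selection described in 2106.15200 can be sketched as follows; the toy dynamics, the candidate-action generator and the overload threshold are assumptions of this illustration, not the Grid2Op simulator or the authors' policy:

```python
# Schematic sketch (toy dynamics, not the competition environment): the policy proposes
# a small action set, each candidate is checked with the system's simulate() function,
# and only actions that respect the constraints are eligible for execution.
import numpy as np

rng = np.random.default_rng(1)

def policy_action_set(state, k=4):
    """Stand-in for the learned policy's top-k candidate actions."""
    return [rng.normal(size=state.shape) * 0.1 for _ in range(k)]

def simulate(state, action):
    """Stand-in for the environment's one-step simulator."""
    nxt = state + action
    line_load = np.abs(nxt).max()
    return nxt, line_load

def safe_best_action(state, max_load=1.0):
    best, best_load = None, np.inf
    for a in policy_action_set(state):
        _, load = simulate(state, a)
        if load <= max_load and load < best_load:     # strict constraint check
            best, best_load = a, load
    return best   # None means "fall back to a do-nothing action"

state = rng.normal(size=6)
print(safe_best_action(state))
```

In the paper the policy that produces the action set is itself improved with Evolutionary Strategies, since the simulate-and-filter step blocks gradient flow.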
2106.15221 | Linyi Yang | Linyi Yang, Tin Lok James Ng, Barry Smyth, Ruihai Dong | Fact Check: Analyzing Financial Events from Multilingual News Sources | Demo | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The explosion in the sheer magnitude and complexity of financial news data in
recent years makes it increasingly challenging for investment analysts to
extract valuable insights and perform analysis. We propose FactCheck in
finance, a web-based news aggregator with deep learning models, to provide
analysts with a holistic view of important financial events from multilingual
news sources and extract events using an unsupervised clustering method. A web
interface is provided to examine the credibility of news articles using a
transformer-based fact-checker. The performance of the fact checker is
evaluated using a dataset related to merger and acquisition (M\&A) events and
is shown to outperform several strong baselines.
| [
{
"version": "v1",
"created": "Tue, 29 Jun 2021 10:05:47 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jun 2021 05:00:20 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Aug 2023 12:40:07 GMT"
}
] | 1,693,180,800,000 | [
[
"Yang",
"Linyi",
""
],
[
"Ng",
"Tin Lok James",
""
],
[
"Smyth",
"Barry",
""
],
[
"Dong",
"Ruihai",
""
]
] |
2106.15433 | Bla\v{z} \v{S}krlj | Timen Stepi\v{s}nik Perdih, Nada Lavra\v{c}, Bla\v{z} \v{S}krlj | Semantic Reasoning from Model-Agnostic Explanations | null | null | 10.1109/SAMI50585.2021.9378668 | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | With the wide adoption of black-box models, instance-based \emph{post hoc}
explanation tools, such as LIME and SHAP became increasingly popular. These
tools produce explanations, pinpointing contributions of key features
associated with a given prediction. However, the obtained explanations remain
at the raw feature level and are not necessarily understandable by a human
expert without extensive domain knowledge. We propose ReEx (Reasoning with
Explanations), a method applicable to explanations generated by arbitrary
instance-level explainers, such as SHAP. By using background knowledge in the
form of ontologies, ReEx generalizes instance explanations in a least general
generalization-like manner. The resulting symbolic descriptions are specific
for individual classes and offer generalizations based on the explainer's
output. The derived semantic explanations are potentially more informative, as
they describe the key attributes in the context of more general background
knowledge, e.g., at the biological process level. We showcase ReEx's
performance on nine biological data sets, showing that compact, semantic
explanations can be obtained and are more informative than generic ontology
mappings that link terms directly to feature names. ReEx is offered as a
simple-to-use Python library and is compatible with tools such as SHAP and
similar. To our knowledge, this is one of the first methods that directly
couples semantic reasoning with contemporary model explanation methods. This
paper is a preprint. Full version's doi is: 10.1109/SAMI50585.2021.9378668
| [
{
"version": "v1",
"created": "Tue, 29 Jun 2021 14:03:47 GMT"
}
] | 1,625,011,200,000 | [
[
"Perdih",
"Timen Stepišnik",
""
],
[
"Lavrač",
"Nada",
""
],
[
"Škrlj",
"Blaž",
""
]
] |
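The least-general-generalization step described in 2106.15433 can be illustrated on a toy ontology; the is-a links and terms below are invented and only stand in for the biological ontologies and SHAP outputs used by ReEx:

```python
# Small sketch of the generalization idea (toy ontology, not GO/SHAP output): given the
# ontology terms linked to a class's top explanation features, walk up is-a links and
# report the most specific common ancestor.
parents = {                      # term -> parent (single-inheritance toy ontology)
    "glycolysis": "carbohydrate metabolism",
    "gluconeogenesis": "carbohydrate metabolism",
    "carbohydrate metabolism": "metabolic process",
    "lipid metabolism": "metabolic process",
    "metabolic process": "biological process",
}

def ancestors(term):
    chain = [term]
    while term in parents:
        term = parents[term]
        chain.append(term)
    return chain

def least_general_generalization(terms):
    common = set(ancestors(terms[0]))
    for t in terms[1:]:
        common &= set(ancestors(t))
    # most specific common ancestor = first common term along the first chain
    for a in ancestors(terms[0]):
        if a in common:
            return a

print(least_general_generalization(["glycolysis", "gluconeogenesis"]))
# -> 'carbohydrate metabolism'
```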
2106.15444 | Paolo Cintia | Paolo Cintia, Luca Pappalardo | Coach2vec: autoencoding the playing style of soccer coaches | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Capturing the playing style of professional soccer coaches is a complex, and
yet barely explored, task in sports analytics. Nowadays, the availability of
digital data describing every relevant spatio-temporal aspect of soccer
matches allows for capturing and analyzing the playing style of players,
teams, and coaches in an automatic way. In this paper, we present coach2vec, a
workflow to capture the playing style of professional coaches using match event
streams and artificial intelligence. Coach2vec extracts ball possessions from
each match, clusters them based on their similarity, and reconstructs the
typical ball possessions of coaches. Then, it uses an autoencoder, a type of
artificial neural network, to obtain a concise representation (encoding) of the
playing style of each coach. Our experiments, conducted on soccer-logs
describing the last four seasons of the Italian first division, reveal
interesting similarities between prominent coaches, paving the road to the
simulation of playing styles and the quantitative comparison of professional
coaches.
| [
{
"version": "v1",
"created": "Tue, 29 Jun 2021 14:32:38 GMT"
}
] | 1,625,011,200,000 | [
[
"Cintia",
"Paolo",
""
],
[
"Pappalardo",
"Luca",
""
]
] |
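The coach2vec pipeline in 2106.15444 ends with an autoencoder over per-coach possession descriptors; a minimal sketch with random stand-in features (an assumption of this illustration, not real event-stream data) is:

```python
# Illustrative sketch (random stand-in features, not real soccer-logs): compress
# per-coach possession descriptors with a small autoencoder and compare the encodings.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
coach_features = torch.randn(5, 32)     # 5 coaches x 32 possession-cluster frequencies

class AE(nn.Module):
    def __init__(self, d_in=32, d_z=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_z))
        self.dec = nn.Sequential(nn.Linear(d_z, 16), nn.ReLU(), nn.Linear(16, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                    # reconstruction training
    opt.zero_grad()
    recon, _ = ae(coach_features)
    ((recon - coach_features) ** 2).mean().backward()
    opt.step()

_, z = ae(coach_features)               # the compact "playing style" encodings
sim = F.cosine_similarity(z[0], z[1], dim=0)
print(f"style similarity between coach 0 and coach 1: {sim.item():.2f}")
```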
2106.15802 | Xu Geng | Zhengfei Zheng, Xu Geng, and Hai Yang | CityNet: A Comprehensive Multi-Modal Urban Dataset for Advanced Research
in Urban Computing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Data-driven approaches have emerged as a popular tool for addressing
challenges in urban computing. However, current research efforts have primarily
focused on limited data sources, which fail to capture the complexity of urban
data arising from multiple entities and their interconnections. Therefore, a
comprehensive and multifaceted dataset is required to enable more extensive
studies in urban computing. In this paper, we present CityNet, a multi-modal
urban dataset that incorporates various data, including taxi trajectory,
traffic speed, point of interest (POI), road network, wind, rain, temperature,
and more, from seven cities. We categorize this comprehensive data into three
streams: mobility data, geographical data, and meteorological data. We begin by
detailing the generation process and basic properties of CityNet. Additionally,
we conduct extensive data mining and machine learning experiments, including
spatio-temporal predictions, transfer learning, and reinforcement learning, to
facilitate the use of CityNet. Our experimental results provide benchmarks for
various tasks and methods, and also reveal internal correlations among cities
and tasks within CityNet that can be leveraged to improve spatiotemporal
forecasting performance. Based on our benchmarking results and the correlations
uncovered, we believe that CityNet can significantly contribute to the field of
urban computing by enabling research on advanced topics.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2021 04:05:51 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Apr 2024 14:11:50 GMT"
}
] | 1,712,793,600,000 | [
[
"Zheng",
"Zhengfei",
""
],
[
"Geng",
"Xu",
""
],
[
"Yang",
"Hai",
""
]
] |
2106.15877 | Jialin Liu Ph.D | Tianye Shu, Jialin Liu, Georgios N. Yannakakis | Experience-Driven PCG via Reinforcement Learning: A Super Mario Bros
Study | This paper is accepted by the 2021 IEEE Conference on Games | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a procedural content generation (PCG) framework at the
intersections of experience-driven PCG and PCG via reinforcement learning,
named ED(PCG)RL, EDRL in short. EDRL is able to teach RL designers to generate
endless playable levels in an online manner while respecting particular
experiences for the player as designed in the form of reward functions. The
framework is tested initially in the Super Mario Bros game. In particular, the
RL designers of Super Mario Bros generate and concatenate level segments while
considering the diversity among the segments. The correctness of the generation
is ensured by a neural net-assisted evolutionary level repairer and the
playability of the whole level is determined through AI-based testing. Our
agents in this EDRL implementation learn to maximise a quantification of
Koster's principle of fun by moderating the degree of diversity across level
segments. Moreover, we test their ability to design fun levels that are diverse
over time and playable. Our proposed framework is capable of generating
endless, playable Super Mario Bros levels with varying degrees of fun,
deviation from earlier segments, and playability. EDRL can be generalised to
any game that is built as a segment-based sequential process and features a
built-in compressed representation of its game content.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2021 08:10:45 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jul 2021 01:30:15 GMT"
}
] | 1,625,529,600,000 | [
[
"Shu",
"Tianye",
""
],
[
"Liu",
"Jialin",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] |
2106.15931 | Maximilian Hoffmann | Maximilian Hoffmann, Ralph Bergmann | Informed Machine Learning for Improved Similarity Assessment in
Process-Oriented Case-Based Reasoning | Accepted at the IJCAI-21 workshop on Deep Learning, Case-Based
Reasoning, and AutoML: Present and Future Synergies | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Currently, Deep Learning (DL) components within a Case-Based Reasoning (CBR)
application often lack the comprehensive integration of available domain
knowledge. The trend within machine learning towards so-called Informed machine
learning can help to overcome this limitation. In this paper, we therefore
investigate the potential of integrating domain knowledge into Graph Neural
Networks (GNNs) that are used for similarity assessment between semantic graphs
within process-oriented CBR applications. We integrate knowledge in two ways:
First, a special data representation and processing method is used that encodes
structural knowledge about the semantic annotations of each graph node and
edge. Second, the message-passing component of the GNNs is constrained by
knowledge on legal node mappings. The evaluation examines the quality and
training time of the extended GNNs, compared to the stock models. The results
show that both extensions are capable of providing better quality, shorter
training times, or in some configurations both advantages at once.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2021 09:31:58 GMT"
}
] | 1,625,097,600,000 | [
[
"Hoffmann",
"Maximilian",
""
],
[
"Bergmann",
"Ralph",
""
]
] |
2107.00140 | Beren Millidge Mr | Beren Millidge | Applications of the Free Energy Principle to Machine Learning and
Neuroscience | PhD thesis. 30-06-21 initial upload | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this PhD thesis, we explore and apply methods inspired by the free energy
principle to two important areas in machine learning and neuroscience. The free
energy principle is a general mathematical theory of the necessary
information-theoretic behaviours of systems that maintain a separation from
their environment. A core postulate of the theory is that complex systems can
be seen as performing variational Bayesian inference and minimizing an
information-theoretic quantity called the variational free energy. The thesis
is structured into three independent sections. Firstly, we focus on predictive
coding, a neurobiologically plausible process theory derived from the free
energy principle which argues that the primary function of the brain is to
minimize prediction errors, showing how predictive coding can be scaled up and
extended to be more biologically plausible, and elucidating its close links
with other methods such as Kalman Filtering. Secondly, we study active
inference, a neurobiologically grounded account of action through variational
message passing, and investigate how these methods can be scaled up to match
the performance of deep reinforcement learning methods. We additionally provide
a detailed mathematical understanding of the nature and origin of the
information-theoretic objectives that underlie exploratory behaviour. Finally,
we investigate biologically plausible methods of credit assignment in the
brain. We first demonstrate a close link between predictive coding and the
backpropagation of error algorithm. We go on to propose novel and simpler
algorithms which allow for backprop to be implemented in purely local,
biologically plausible computations.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2021 22:53:03 GMT"
}
] | 1,630,368,000,000 | [
[
"Millidge",
"Beren",
""
]
] |
2107.00156 | Filip Ilievski | Kartik Shenoy and Filip Ilievski and Daniel Garijo and Daniel Schwabe
and Pedro Szekely | A Study of the Quality of Wikidata | 12 pages | Journal of Web Semantics, Special issue on Community-Based
Knowledge Bases, 2021 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Wikidata has been increasingly adopted by many communities for a wide variety
of applications, which demand high-quality knowledge to deliver successful
results. In this paper, we develop a framework to detect and analyze
low-quality statements in Wikidata by shedding light on the current practices
exercised by the community. We explore three indicators of data quality in
Wikidata, based on: 1) community consensus on the currently recorded knowledge,
assuming that statements that have been removed and not added back are
implicitly agreed to be of low quality; 2) statements that have been
deprecated; and 3) constraint violations in the data. We combine these
indicators to detect low-quality statements, revealing challenges with
duplicate entities, missing triples, violated type rules, and taxonomic
distinctions. Our findings complement ongoing efforts by the Wikidata community
to improve data quality, aiming to make it easier for users and editors to find
and correct mistakes.
| [
{
"version": "v1",
"created": "Thu, 1 Jul 2021 00:19:02 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jul 2021 15:50:35 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Nov 2021 22:47:02 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Nov 2021 02:33:03 GMT"
}
] | 1,637,539,200,000 | [
[
"Shenoy",
"Kartik",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Garijo",
"Daniel",
""
],
[
"Schwabe",
"Daniel",
""
],
[
"Szekely",
"Pedro",
""
]
] |
2107.00184 | Yongqi Zhang | Yongqi Zhang and Quanming Yao and James Tin-Yau Kwok | Bilinear Scoring Function Search for Knowledge Graph Learning | TPAMI accepted | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning embeddings for entities and relations in a knowledge graph (KG) has
benefited many downstream tasks. In recent years, scoring functions, the crux
of KG learning, have been human-designed to measure the plausibility of triples
and capture different kinds of relations in KGs. However, as relations exhibit
intricate patterns that are hard to infer before training, none of them
consistently perform the best on benchmark tasks. In this paper, inspired by
the recent success of automated machine learning (AutoML), we search bilinear
scoring functions for different KG tasks through the AutoML techniques.
However, it is non-trivial to explore domain-specific information here. We
first set up a search space for AutoBLM by analyzing existing scoring
functions. Then, we propose a progressive algorithm (AutoBLM) and an
evolutionary algorithm (AutoBLM+), which are further accelerated by a filter and
a predictor to deal with the domain-specific properties of KG learning. Finally,
we perform extensive experiments on benchmarks in KG completion, multi-hop
query, and entity classification tasks. Empirical results show that the
searched scoring functions are KG dependent, new to the literature, and
outperform the existing scoring functions. AutoBLM+ is better than AutoBLM as
the evolutionary algorithm can flexibly explore better structures in the same
budget.
| [
{
"version": "v1",
"created": "Thu, 1 Jul 2021 02:28:23 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 06:43:32 GMT"
}
] | 1,646,611,200,000 | [
[
"Zhang",
"Yongqi",
""
],
[
"Yao",
"Quanming",
""
],
[
"Kwok",
"James Tin-Yau",
""
]
] |
2107.00316 | Danqing Zhu | Qiwei Zhong, Guanxiong Zeng, Danqing Zhu, Yang Zhang, Wangli Lin, Ben
Chen, Jiayu Tang | Leveraging Domain Agnostic and Specific Knowledge for Acronym
Disambiguation | Second Place Solution, Accepted to SDU@AAAI-21 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | An obstacle to scientific document understanding is the extensive use of
acronyms which are shortened forms of long technical phrases. Acronym
disambiguation aims to find the correct meaning of an ambiguous acronym in a
given text. Recent efforts attempted to incorporate word embeddings and deep
learning architectures, and achieved significant effects in this task. In
general domains, various fine-grained pretrained language models have sprung
up, thanks to the large-scale corpora which can usually be obtained through
crowdsourcing. However, these models based on domain agnostic knowledge might
achieve insufficient performance when directly applied to the scientific
domain. Moreover, obtaining large-scale high-quality annotated data and
representing high-level semantics in the scientific domain is challenging and
expensive. In this paper, we consider both the domain agnostic and specific
knowledge, and propose a Hierarchical Dual-path BERT method coined hdBERT to
capture the general fine-grained and high-level specific representations for
acronym disambiguation. First, the context-based pretrained models, RoBERTa and
SciBERT, are elaborately involved in encoding these two kinds of knowledge
respectively. Second, a multi-layer perceptron is devised to integrate the
dual-path representations simultaneously and output the prediction. With the
widely adopted SciAD dataset containing 62,441 sentences, we investigate the
effectiveness of hdBERT. The experimental results exhibit that the proposed
approach outperforms state-of-the-art methods among various evaluation metrics.
Specifically, its macro F1 achieves 93.73%.
| [
{
"version": "v1",
"created": "Thu, 1 Jul 2021 09:10:00 GMT"
}
] | 1,625,184,000,000 | [
[
"Zhong",
"Qiwei",
""
],
[
"Zeng",
"Guanxiong",
""
],
[
"Zhu",
"Danqing",
""
],
[
"Zhang",
"Yang",
""
],
[
"Lin",
"Wangli",
""
],
[
"Chen",
"Ben",
""
],
[
"Tang",
"Jiayu",
""
]
] |
2107.00317 | Fredrik Pr\"antare | Fredrik Pr\"antare, Mattias Tiger, David Bergstr\"om, Herman
Appelgren, Fredrik Heintz | Towards Utilitarian Combinatorial Assignment with Deep Neural Networks
and Heuristic Algorithms | 7 pages, 4 figures, presented at the ECAI 2020 TAILOR workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents preliminary work on using deep neural networks to guide
general-purpose heuristic algorithms for performing utilitarian combinatorial
assignment. In more detail, we use deep learning in an attempt to produce
heuristics that can be used together with e.g., search algorithms to generate
feasible solutions of higher quality more quickly. Our results indicate that
our approach could be a promising future method for constructing such
heuristics.
| [
{
"version": "v1",
"created": "Thu, 1 Jul 2021 09:15:20 GMT"
}
] | 1,625,184,000,000 | [
[
"Präntare",
"Fredrik",
""
],
[
"Tiger",
"Mattias",
""
],
[
"Bergström",
"David",
""
],
[
"Appelgren",
"Herman",
""
],
[
"Heintz",
"Fredrik",
""
]
] |
2107.00528 | Lars Malmqvist | Lars Malmqvist, Tommy Yuan, Suresh Manandhar | Visualising Argumentation Graphs with Graph Embeddings and t-SNE | null | COMMA Workshop on Argument Visualization, 2020 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper applies t-SNE, a visualisation technique familiar from Deep Neural
Network research to argumentation graphs by applying it to the output of graph
embeddings generated using several different methods. It shows that such a
visualisation approach can work for argumentation and show interesting
structural properties of argumentation graphs, opening up paths for further
research in the area.
| [
{
"version": "v1",
"created": "Thu, 1 Jul 2021 15:13:24 GMT"
}
] | 1,625,184,000,000 | [
[
"Malmqvist",
"Lars",
""
],
[
"Yuan",
"Tommy",
""
],
[
"Manandhar",
"Suresh",
""
]
] |
2107.00749 | Vaden Masrani | Vaden Masrani | Proof of the impossibility of probabilistic induction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this short note I restate and simplify the proof of the impossibility of
probabilistic induction from Popper (1992). Other proofs are possible (cf.
Popper (1985)).
| [
{
"version": "v1",
"created": "Thu, 1 Jul 2021 21:30:46 GMT"
}
] | 1,625,443,200,000 | [
[
"Masrani",
"Vaden",
""
]
] |
2107.00894 | Maosen Li | Maosen Li, Siheng Chen, Yanning Shen, Genjia Liu, Ivor W. Tsang, Ya
Zhang | Online Multi-Agent Forecasting with Interpretable Collaborative Graph
Neural Network | Submitted to IEEE-TNNLS SI-Deep Neural Networks for Graphs: Theory,
Models, Algorithms and Applications | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper considers predicting future statuses of multiple agents in an
online fashion by exploiting dynamic interactions in the system. We propose a
novel collaborative prediction unit (CoPU), which aggregates the predictions
from multiple collaborative predictors according to a collaborative graph. Each
collaborative predictor is trained to predict the status of an agent by
considering the impact of another agent. The edge weights of the collaborative
graph reflect the importance of each predictor. The collaborative graph is
adjusted online by multiplicative update, which can be motivated by minimizing
an explicit objective. With this objective, we also conduct regret analysis to
indicate that, along with training, our CoPU achieves similar performance with
the best individual collaborative predictor in hindsight. This theoretical
interpretability distinguishes our method from many other graph networks. To
progressively refine predictions, multiple CoPUs are stacked to form a
collaborative graph neural network. Extensive experiments are conducted on
three tasks: online simulated trajectory prediction, online human motion
prediction and online traffic speed prediction, and our methods outperform
state-of-the-art works on the three tasks by 28.6%, 17.4% and 21.0% on average,
respectively.
| [
{
"version": "v1",
"created": "Fri, 2 Jul 2021 08:20:06 GMT"
}
] | 1,625,443,200,000 | [
[
"Li",
"Maosen",
""
],
[
"Chen",
"Siheng",
""
],
[
"Shen",
"Yanning",
""
],
[
"Liu",
"Genjia",
""
],
[
"Tsang",
"Ivor W.",
""
],
[
"Zhang",
"Ya",
""
]
] |
2107.01078 | Eric Piette E.P. | \'Eric Piette, Matthew Stephenson, Dennis J.N.J. Soemers and Cameron
Browne | General Board Game Concepts | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many games often share common ideas or aspects between them, such as their
rules, controls, or playing area. However, in the context of General Game
Playing (GGP) for board games, this area remains under-explored. We propose to
formalise the notion of "game concept", inspired by terms generally used by
game players and designers. Through the Ludii General Game System, we describe
concepts for several levels of abstraction, such as the game itself, the moves
played, or the states reached. This new GGP feature associated with the ludeme
representation of games opens many new lines of research. The creation of a
hyper-agent selector, the transfer of AI learning between games, or explaining
AI techniques using game terms, can all be facilitated by the use of game
concepts. Other applications which can benefit from game concepts are also
discussed, such as the generation of plausible reconstructed rules for
incomplete ancient games, or the implementation of a board game recommender
system.
| [
{
"version": "v1",
"created": "Fri, 2 Jul 2021 13:39:10 GMT"
}
] | 1,625,443,200,000 | [
[
"Piette",
"Éric",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Browne",
"Cameron",
""
]
] |
2107.01170 | Nidhika Yadav | Nidhika Yadav | Computing Fuzzy Rough Set based Similarities with Fuzzy Inference and
Its Application to Sentence Similarity Computations | 5 figures, 3 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several research initiatives have been proposed for computing similarity
between two Fuzzy Sets in analysis through Fuzzy Rough Sets. These techniques
yield two measures, viz. lower similarity and upper similarity, while in most
applications only one value is useful for further analysis and for drawing
conclusions. The aim of this paper is to propose a novel technique to combine
Fuzzy Rough Set based lower similarity and upper similarity using a Fuzzy
Inference Engine. Further, the proposed approach is applied to the problem of
computing sentence similarity and has been evaluated on the SICK2014 dataset.
| [
{
"version": "v1",
"created": "Fri, 2 Jul 2021 16:21:25 GMT"
}
] | 1,625,443,200,000 | [
[
"Yadav",
"Nidhika",
""
]
] |
2107.01654 | Joao Marques-Silva | Xuanxiang Huang and Yacine Izza and Alexey Ignatiev and Martin C.
Cooper and Nicholas Asher and Joao Marques-Silva | Efficient Explanations for Knowledge Compilation Languages | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge compilation (KC) languages find a growing number of practical uses,
including in Constraint Programming (CP) and in Machine Learning (ML). In most
applications, one natural question is how to explain the decisions made by
models represented by a KC language. This paper shows that for many of the best
known KC languages, well-known classes of explanations can be computed in
polynomial time. These classes include deterministic decomposable negation
normal form (d-DNNF), and so any KC language that is strictly less succinct
than d-DNNF. Furthermore, the paper also investigates the conditions under
which polynomial time computation of explanations can be extended to KC
languages more succinct than d-DNNF.
| [
{
"version": "v1",
"created": "Sun, 4 Jul 2021 14:45:32 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 09:58:58 GMT"
}
] | 1,625,788,800,000 | [
[
"Huang",
"Xuanxiang",
""
],
[
"Izza",
"Yacine",
""
],
[
"Ignatiev",
"Alexey",
""
],
[
"Cooper",
"Martin C.",
""
],
[
"Asher",
"Nicholas",
""
],
[
"Marques-Silva",
"Joao",
""
]
] |
2107.01715 | Gal Dalal | Assaf Hallak and Gal Dalal, Steven Dalton, Iuri Frosio, Shie Mannor,
Gal Chechik | Improve Agents without Retraining: Parallel Tree Search with Off-Policy
Correction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tree Search (TS) is crucial to some of the most influential successes in
reinforcement learning. Here, we tackle two major challenges with TS that limit
its usability: \textit{distribution shift} and \textit{scalability}. We first
discover and analyze a counter-intuitive phenomenon: action selection through
TS and a pre-trained value function often leads to lower performance compared
to the original pre-trained agent, even when having access to the exact state
and reward in future steps. We show this is due to a distribution shift to
areas where value estimates are highly inaccurate and analyze this effect using
Extreme Value theory. To overcome this problem, we introduce a novel off-policy
correction term that accounts for the mismatch between the pre-trained value
and its corresponding TS policy by penalizing under-sampled trajectories. We
prove that our correction eliminates the above mismatch and bound the
probability of sub-optimal action selection. Our correction significantly
improves pre-trained Rainbow agents without any further training, often more
than doubling their scores on Atari games. Next, we address the scalability
issue given by the computational complexity of exhaustive TS that scales
exponentially with the tree depth. We introduce Batch-BFS: a GPU breadth-first
search that advances all nodes in each depth of the tree simultaneously.
Batch-BFS reduces runtime by two orders of magnitude and, beyond inference,
also enables training with TS of depths that were not feasible before. We train
DQN agents from scratch using TS and show improvement in several Atari games
compared to both the original DQN and the more advanced Rainbow.
The code for BCTS can be found in \url{https://github.com/NVlabs/bcts}.
| [
{
"version": "v1",
"created": "Sun, 4 Jul 2021 19:32:24 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Oct 2021 09:44:58 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Feb 2023 11:01:20 GMT"
}
] | 1,675,728,000,000 | [
[
"Hallak",
"Assaf",
""
],
[
"Dalal",
"Gal",
""
],
[
"Dalton",
"Steven",
""
],
[
"Frosio",
"Iuri",
""
],
[
"Mannor",
"Shie",
""
],
[
"Chechik",
"Gal",
""
]
] |
2107.01905 | Hans Weytjens | Hans Weytjens, Jochen De Weerdt | Creating Unbiased Public Benchmark Datasets with Data Leakage Prevention
for Predictive Process Monitoring | Accepted for AI4BPM workshop at BMP2021 conferences | null | 10.13140/RG.2.2.16036.19848 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Advances in AI, and especially machine learning, are increasingly drawing
research interest and efforts towards predictive process monitoring, the
subfield of process mining (PM) that concerns predicting next events, process
outcomes and remaining execution times. Unfortunately, researchers use a
variety of datasets and ways to split them into training and test sets. The
documentation of these preprocessing steps is not always complete.
Consequently, research results are hard or even impossible to reproduce and to
compare between papers. At times, the use of non-public domain knowledge
further hampers the fair competition of ideas. Often the training and test sets
are not completely separated, a data leakage problem particular to predictive
process monitoring. Moreover, test sets usually suffer from bias in terms of
both the mix of case durations and the number of running cases. These obstacles
pose a challenge to the field's progress. The contribution of this paper is to
identify and demonstrate the importance of these obstacles and to propose
preprocessing steps to arrive at unbiased benchmark datasets in a principled
way, thus creating representative test sets without data leakage with the aim
of levelling the playing field, promoting open science and contributing to more
rapid progress in predictive process monitoring.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2021 09:54:34 GMT"
}
] | 1,625,529,600,000 | [
[
"Weytjens",
"Hans",
""
],
[
"De Weerdt",
"Jochen",
""
]
] |
2107.02083 | Fatema Tuj Johora MSc | Fatema T. Johora and J\"org P. M\"uller | Modeling Interactions of Multimodal Road Users in Shared Spaces | null | IEEE, 2018, https://ieeexplore.ieee.org/document/8569687 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In shared spaces, motorized and non-motorized road users share the same space
with equal priority. Their movements are not regulated by traffic rules, hence
they interact more frequently to negotiate priority over the shared space. To
estimate the safeness and efficiency of shared spaces, reproducing the traffic
behavior in such traffic places is important. In this paper, we consider and
combine different levels of interaction between pedestrians and cars in shared
space environments. Our proposed model consists of three layers: a layer to
plan trajectories of road users; a force-based modeling layer to reproduce free
flow movement and simple interactions; and a game-theoretic decision layer to
handle complex situations where road users need to make a decision over
different alternatives. We validate our model by simulating scenarios involving
various interactions between pedestrians and cars and also car-to-car
interaction. The results indicate that simulated behaviors match observed
behaviors well.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2021 15:25:08 GMT"
}
] | 1,625,529,600,000 | [
[
"Johora",
"Fatema T.",
""
],
[
"Müller",
"Jörg P.",
""
]
] |
2107.02298 | Jo\v{z}e Ro\v{z}anec | Jo\v{z}e M. Ro\v{z}anec, Inna Novalija, Patrik Zajec, Klemen Kenda,
Dunja Mladeni\'c | Knowledge Modelling and Active Learning in Manufacturing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing digitalization of the manufacturing domain requires adequate
knowledge modeling to capture relevant information. Ontologies and Knowledge
Graphs provide means to model and relate a wide range of concepts, problems,
and configurations. Both can be used to generate new knowledge through
deductive inference and identify missing knowledge. While digitalization
increases the amount of data available, much data is not labeled and cannot be
directly used to train supervised machine learning models. Active learning can
be used to identify the most informative data instances for which to obtain
users' feedback, reduce friction, and maximize knowledge acquisition. By
combining semantic technologies and active learning, multiple use cases in the
manufacturing domain can be addressed taking advantage of the available
knowledge and data.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2021 22:07:21 GMT"
}
] | 1,625,616,000,000 | [
[
"Rožanec",
"Jože M.",
""
],
[
"Novalija",
"Inna",
""
],
[
"Zajec",
"d Patrik",
""
],
[
"Kenda",
"Klemen",
""
],
[
"Mladenić",
"Dunja",
""
]
] |
2107.02385 | Mark J. Nelson | Mark J. Nelson | Estimates for the Branching Factors of Atari Games | Accepted at IEEE Conference on Games (CoG) 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The branching factor of a game is the average number of new states reachable
from a given state. It is a widely used metric in AI research on board games,
but less often computed or discussed for videogames. This paper provides
estimates for the branching factors of 103 Atari 2600 games, as implemented in
the Arcade Learning Environment (ALE). Depending on the game, ALE exposes
between 3 and 18 available actions per frame of gameplay, which is an upper
bound on branching factor. This paper shows, based on an enumeration of the
first 1 million distinct states reachable in each game, that the average
branching factor is usually much lower, in many games barely above 1. In
addition to reporting the branching factors, this paper aims to clarify what
constitutes a distinct state in ALE.
| [
{
"version": "v1",
"created": "Tue, 6 Jul 2021 04:45:24 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 05:59:10 GMT"
}
] | 1,625,788,800,000 | [
[
"Nelson",
"Mark J.",
""
]
] |
2107.02457 | Jean-Baptiste Herv\'e | Jean-Baptiste Herv\'e, Christoph Salge | Comparing PCG metrics with Human Evaluation in Minecraft Settlement
Generation | Accepted to the FDG'21 workshop on PCG | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are a range of metrics that can be applied to the artifacts produced by
procedural content generation, and several of them come with qualitative
claims. In this paper, we adapt a range of existing PCG metrics to generated
Minecraft settlements, develop a few new metrics inspired by PCG literature,
and compare the resulting measurements to existing human evaluations. The aim
is to analyze how those metrics capture human evaluation scores in different
categories, how the metrics generalize to another game domain, and how metrics
deal with more complex artifacts. We provide an exploratory look at a variety
of metrics and provide an information gain and several correlation analyses. We
found some relationships between human scores and metrics counting specific
elements, measuring the diversity of blocks and measuring the presence of
crafting materials for the present complex blocks.
| [
{
"version": "v1",
"created": "Tue, 6 Jul 2021 08:07:24 GMT"
}
] | 1,625,616,000,000 | [
[
"Hervé",
"Jean-Baptiste",
""
],
[
"Salge",
"Christoph",
""
]
] |
2107.02609 | Golsa Heidari | Golsa Heidari, Kamran Zamanifar, Naser Nematbakhsh, Farhad Mardookhi | How to Discover a Semantic Web Service by Knowing Its Functionality
Parameters | 5 pages, 1 figure, 2 tables, ICSTE 2010 | null | 10.1109/icste.2010.5608824 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we show how to discover a semantic web service among a
repository of web services. We present a new approach for web service
discovery based on calculating the similarity of the services' functions. We
define the web service functions with the Web Ontology Language (OWL). We wrote
rules for comparing two web services' parameters. Our algorithm compares the
parameters of two web services' inputs/outputs by constructing a bipartite
graph. We compute the similarity rate using the Ford-Fulkerson algorithm. The
higher the similarity, the smaller the differences between their functions.
Finally, our algorithm chooses the service with the highest similarity. As a
consequence, our method is
useful when we need to find a web service suitable to replace an existing one
that has failed. Especially in autonomic systems, this situation is very common
and important since we need to ensure the availability of the application which
is based on the failed web service. We use Universal Description, Discovery and
Integration (UDDI) compliant web service registry.
| [
{
"version": "v1",
"created": "Tue, 6 Jul 2021 13:29:59 GMT"
}
] | 1,625,616,000,000 | [
[
"Heidari",
"Golsa",
""
],
[
"Zamanifar",
"Kamran",
""
],
[
"Nematbakhsh",
"Naser",
""
],
[
"Mardookhi",
"Farhad",
""
]
] |
2107.03265 | AnneMarie Borg | AnneMarie Borg and Floris Bex | Contrastive Explanations for Argumentation-Based Conclusions | Forthcoming as an extended abstract in the Proceedings of the 21st
International Conference on Autonomous Agents and Multiagent Systems (AAMAS
2022) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper we discuss contrastive explanations for formal argumentation -
the question why a certain argument (the fact) can be accepted, whilst another
argument (the foil) cannot be accepted under various extension-based semantics.
The recent work on explanations for argumentation-based conclusions has mostly
focused on providing minimal explanations for the (non-)acceptance of
arguments. What is still lacking, however, is a proper argumentation-based
interpretation of contrastive explanations. We show under which conditions
contrastive explanations in abstract and structured argumentation are
meaningful, and how argumentation allows us to make implicit foils explicit.
| [
{
"version": "v1",
"created": "Wed, 7 Jul 2021 15:00:47 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 15:36:26 GMT"
}
] | 1,643,155,200,000 | [
[
"Borg",
"AnneMarie",
""
],
[
"Bex",
"Floris",
""
]
] |
2107.03305 | Jeppe Theiss Kristensen | Jeppe Theiss Kristensen, Arturo Valdivia, Paolo Burelli | Statistical Modelling of Level Difficulty in Puzzle Games | Conference on Games 2021 conference paper | Proceedings of the 2021 IEEE Conference on Games (CoG) | 10.1109/CoG52621.2021.9619050 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Successful and accurate modelling of level difficulty is a fundamental
component of the operationalisation of player experience as difficulty is one
of the most important and commonly used signals for content design and
adaptation. In games that feature intermediate milestones, such as completable
areas or levels, difficulty is often defined by the probability of completion
or completion rate; however, this operationalisation is limited in that it does
not describe the behaviour of the player within the area.
In this research work, we formalise a model of level difficulty for puzzle
games that goes beyond the classical probability of success. We accomplish this
by describing the distribution of actions performed within a game level using a
parametric statistical model thus creating a richer descriptor of difficulty.
The model is fitted and evaluated on a dataset collected from the game Lily's
Garden by Tactile Games, and the results of the evaluation show that it is
able to describe and explain difficulty in a vast majority of the levels.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2021 13:47:28 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 08:21:25 GMT"
}
] | 1,687,824,000,000 | [
[
"Kristensen",
"Jeppe Theiss",
""
],
[
"Valdivia",
"Arturo",
""
],
[
"Burelli",
"Paolo",
""
]
] |
2107.03961 | Yuexiang Zhai | Yuexiang Zhai, Christina Baek, Zhengyuan Zhou, Jiantao Jiao, Yi Ma | Computational Benefits of Intermediate Rewards for Goal-Reaching Policy
Learning | null | Journal of Artificial Intelligence Research, 2022, Vol 73 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many goal-reaching reinforcement learning (RL) tasks have empirically
verified that rewarding the agent on subgoals improves convergence speed and
practical performance. We attempt to provide a theoretical framework to
quantify the computational benefits of rewarding the completion of subgoals, in
terms of the number of synchronous value iterations. In particular, we consider
subgoals as one-way {\em intermediate states}, which can only be visited once
per episode and propose two settings that consider these one-way intermediate
states: the one-way single-path (OWSP) and the one-way multi-path (OWMP)
settings. In both OWSP and OWMP settings, we demonstrate that adding {\em
intermediate rewards} to subgoals is more computationally efficient than only
rewarding the agent once it completes the goal of reaching a terminal state. We
also reveal a trade-off between computational complexity and the pursuit of the
shortest path in the OWMP setting: adding intermediate rewards significantly
reduces the computational complexity of reaching the goal but the agent may not
find the shortest path, whereas with sparse terminal rewards, the agent finds
the shortest path at a significantly higher computational cost. We also
corroborate our theoretical results with extensive experiments on the MiniGrid
environments using Q-learning and some popular deep RL algorithms.
| [
{
"version": "v1",
"created": "Thu, 8 Jul 2021 16:39:13 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Sep 2021 17:34:46 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Feb 2022 03:43:37 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Mar 2022 06:37:42 GMT"
},
{
"version": "v5",
"created": "Sun, 13 Mar 2022 06:06:53 GMT"
}
] | 1,647,302,400,000 | [
[
"Zhai",
"Yuexiang",
""
],
[
"Baek",
"Christina",
""
],
[
"Zhou",
"Zhengyuan",
""
],
[
"Jiao",
"Jiantao",
""
],
[
"Ma",
"Yi",
""
]
] |
2107.04125 | Fariba Irany | Fariba Afrin Irany, Arnav Iyer, Rubenia Borge Flores, Armin R. Mikler | The Multi-phase spatial meta-heuristic algorithm for public health
emergency transportation | 17 pages, 3 figures, 3 tables, Journals | International Journal of Scientific Research & Engineering Trends
Volume 7, Issue 4, July-Aug-2020, ISSN (Online): 2395-566X | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The delivery of Medical Countermeasures (MCMs) for mass prophylaxis in the
case of a bio-terrorist attack is an active research topic that has interested
the research community over the past decades. The objective of this study is to
design an efficient algorithm for the Receive Reload and Store Problem (RSS) in
which we aim to find feasible routes to deliver MCMs to a target population
considering time, physical, and human resources, and capacity limitations. To
do this, we adapt the p-median problem to the POD-based emergency response
planning procedures and propose an efficient algorithmic solution to perform the
p-median in reasonable computational time. We present RE-PLAN, the Response
PLan Analyzer system that contains some RSS solutions developed at The Center
for Computational Epidemiology and Response Analysis (CeCERA) at the University
of North Texas. Finally, we analyze a case study where we show how the
computational performance of the algorithm can impact the process of decision
making and emergency planning in the short and long terms.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2021 22:34:42 GMT"
}
] | 1,626,739,200,000 | [
[
"Irany",
"Fariba Afrin",
""
],
[
"Iyer",
"Arnav",
""
],
[
"Flores",
"Rubenia Borge",
""
],
[
"Mikler",
"Armin R.",
""
]
] |
2107.04169 | Roni Stern | Brendan Juba, Hai S. Le, Roni Stern | Safe Learning of Lifted Action Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating a domain model, even for classical, domain-independent planning, is
a notoriously hard knowledge-engineering task. A natural approach to solve this
problem is to learn a domain model from observations. However, model learning
approaches frequently do not provide safety guarantees: the learned model may
assume actions are applicable when they are not, and may incorrectly capture
actions' effects. This may result in generating plans that will fail when
executed. In some domains such failures are not acceptable, due to the cost of
failure or inability to replan online after failure. In such settings, all
learning must be done offline, based on some observations collected, e.g., by
some other agents or a human. Through this learning, the task is to generate a
plan that is guaranteed to be successful. This is called the model-free
planning problem. Prior work proposed an algorithm for solving the model-free
planning problem in classical planning. However, they were limited to learning
grounded domains, and thus they could not scale. We generalize this prior work
and propose the first safe model-free planning algorithm for lifted domains. We
prove the correctness of our approach, and provide a statistical analysis
showing that the number of trajectories needed to solve future problems with
high probability is linear in the potential size of the domain model. We also
present experiments on twelve IPC domains showing that our approach is able to
learn the real action model in all cases with at most two trajectories.
| [
{
"version": "v1",
"created": "Fri, 9 Jul 2021 01:24:01 GMT"
}
] | 1,626,048,000,000 | [
[
"Juba",
"Brendan",
""
],
[
"Le",
"Hai S.",
""
],
[
"Stern",
"Roni",
""
]
] |
2107.04303 | Utkarsh Soni | Sriram Gopalakrishnan, Utkarsh Soni, Tung Thai, Panagiotis
Lymperopoulos, Matthias Scheutz, Subbarao Kambhampati | Integrating Planning, Execution and Monitoring in the presence of Open
World Novelties: Case Study of an Open World Monopoly Solver | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The game of monopoly is an adversarial multi-agent domain where there is no
fixed goal other than to be the last player solvent. There are useful subgoals
like monopolizing sets of properties, and developing them. There is also a lot
of randomness from dice rolls, card-draws, and adversaries' strategies. This
unpredictability is made worse when unknown novelties are added during
gameplay. Given these challenges, Monopoly was one of the test beds chosen for
the DARPA-SAILON program which aims to create agents that can detect and
accommodate novelties. To handle the game complexities, we developed an agent
that eschews complete plans and adapts its policy online as the game evolves.
In the most recent independent evaluation in the SAILON program, our agent was
the best performing agent on most measures. We herein present our approach and
results.
| [
{
"version": "v1",
"created": "Fri, 9 Jul 2021 08:26:28 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Aug 2021 21:22:15 GMT"
}
] | 1,628,640,000,000 | [
[
"Gopalakrishnan",
"Sriram",
""
],
[
"Soni",
"Utkarsh",
""
],
[
"Thai",
"Tung",
""
],
[
"Lymperopoulos",
"Panagiotis",
""
],
[
"Scheutz",
"Matthias",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2107.04378 | Stefan Bischof | Stefan Bischof, Gottfried Schenner | Rail Topology Ontology: A Rail Infrastructure Base Ontology | accepted at the International Semantic Web Conference'21 (ISWC 2021) | LNCS 12922 (2021) 597-612 | 10.1007/978-3-030-88361-4_35 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Engineering projects for railway infrastructure typically involve many
subsystems which need consistent views of the planned and built infrastructure
and its underlying topology. Consistency is typically ensured by exchanging and
verifying data between tools using XML-based data formats and UML-based
object-oriented models. A tighter alignment of these data representations via a
common topology model could decrease the development effort of railway
infrastructure engineering tools. A common semantic model is also a
prerequisite for the successful adoption of railway knowledge graphs. Based on
the RailTopoModel standard, we developed the Rail Topology Ontology as a model
to represent core features of railway infrastructures in a standard-compliant
manner. This paper describes the ontology and its development method, and
discusses its suitability for integrating data of railway engineering systems
and other sources in a knowledge graph.
With the Rail Topology Ontology, software engineers and knowledge scientists
have a standard-based ontology for representing railway topologies to integrate
disconnected data sources. We use the Rail Topology Ontology for our rail
knowledge graph and plan to extend it by rail infrastructure ontologies derived
from existing data exchange standards, since many such standards use the same
base model as the presented ontology, viz., RailTopoModel.
| [
{
"version": "v1",
"created": "Fri, 9 Jul 2021 12:03:50 GMT"
}
] | 1,633,392,000,000 | [
[
"Bischof",
"Stefan",
""
],
[
"Schenner",
"Gottfried",
""
]
] |
2107.04635 | Wiktor Piotrowski | Wiktor Piotrowski, Roni Stern, Matthew Klenk, Alexandre Perez, Shiwali
Mohan, Johan de Kleer, Jacob Le | Playing Angry Birds with a Domain-Independent PDDL+ Planner | 2 pages, submitted to ICAPS 2021 Demonstration Track | Proceedings of the International Conference on Automated Planning
and Scheduling (2021) Demonstration Track | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This demo paper presents the first system for playing the popular Angry Birds
game using a domain-independent planner. Our system models Angry Birds levels
using PDDL+, a planning language for mixed discrete/continuous domains. It uses
a domain-independent PDDL+ planner to generate plans and executes them. In this
demo paper, we present the system's PDDL+ model for this domain, identify key
design decisions that reduce the problem complexity, and compare the
performance of our system to model-specific methods for this domain. The
results show that our system's performance is on par with other domain-specific
systems for Angry Birds, suggesting the applicability of domain-independent
planning to this benchmark AI challenge.
| [
{
"version": "v1",
"created": "Fri, 9 Jul 2021 19:12:49 GMT"
}
] | 1,710,115,200,000 | [
[
"Piotrowski",
"Wiktor",
""
],
[
"Stern",
"Roni",
""
],
[
"Klenk",
"Matthew",
""
],
[
"Perez",
"Alexandre",
""
],
[
"Mohan",
"Shiwali",
""
],
[
"de Kleer",
"Johan",
""
],
[
"Le",
"Jacob",
""
]
] |
2107.04771 | Balaji Ganesan | Jaspreet Singh Dhani, Ruchika Bhatt, Balaji Ganesan, Parikshet Sirohi,
Vasudha Bhatnagar | Similar Cases Recommendation using Legal Knowledge Graphs | 10 pages. 6 figures. 3rd Symposium on Artificial Intelligence and
Law. SAIL 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A legal knowledge graph constructed from court cases, judgments, laws and
other legal documents can enable a number of applications like question
answering, document similarity, and search. While the use of knowledge graphs
for distant supervision in NLP tasks is well researched, using knowledge graphs
for applications like case similarity presents challenges. In this work, we
describe our solution for predicting similar cases in Indian court judgements.
We present our results and also discuss the impact of large language models on
this task.
| [
{
"version": "v1",
"created": "Sat, 10 Jul 2021 06:37:36 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Mar 2024 08:46:51 GMT"
}
] | 1,709,596,800,000 | [
[
"Dhani",
"Jaspreet Singh",
""
],
[
"Bhatt",
"Ruchika",
""
],
[
"Ganesan",
"Balaji",
""
],
[
"Sirohi",
"Parikshet",
""
],
[
"Bhatnagar",
"Vasudha",
""
]
] |
2107.04870 | Laura Giordano | Laura Giordano, Valentina Gliozzi, Daniele Theseider Dupr\'e | From Common Sense Reasoning to Neural Network Models through Multiple
Preferences: an overview | 17 pages. arXiv admin note: text overlap with arXiv:2008.13278,
arXiv:2012.13421, arXiv:2103.06854 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we discuss the relationships between conditional and
preferential logics and neural network models, based on a multi-preferential
semantics. We propose a concept-wise multipreference semantics, recently
introduced for defeasible description logics to take into account preferences
with respect to different concepts, as a tool for providing a semantic
interpretation to neural network models. This approach has been explored both
for unsupervised neural network models (Self-Organising Maps) and for
supervised ones (Multilayer Perceptrons), and we expect that the same approach
might be extended to other neural network models. It allows for logical
properties of the network to be checked (by model checking) over an
interpretation capturing the input-output behavior of the network. For
Multilayer Perceptrons, the deep network itself can be regarded as a
conditional knowledge base, in which synaptic connections correspond to
weighted conditionals. The paper describes the general approach, through the
cases of Self-Organising Maps and Multilayer Perceptrons, and discusses some
open issues and perspectives.
| [
{
"version": "v1",
"created": "Sat, 10 Jul 2021 16:25:19 GMT"
}
] | 1,626,134,400,000 | [
[
"Giordano",
"Laura",
""
],
[
"Gliozzi",
"Valentina",
""
],
[
"Dupré",
"Daniele Theseider",
""
]
] |
2107.05151 | Reza Karimi Dr | H.J. Meijer, J. Truong, R. Karimi | Document Embedding for Scientific Articles: Efficacy of Word Embeddings
vs TFIDF | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Over the last few years, neural network derived word embeddings became
popular in the natural language processing literature. Studies conducted have
mostly focused on the quality and application of word embeddings trained on
publicly available corpora such as Wikipedia or other news and social media
sources. However, these studies are limited to generic text and thus lack
technical and scientific nuances such as domain specific vocabulary,
abbreviations, or scientific formulas which are commonly used in academic
context. This research focuses on the performance of word embeddings applied to
a large scale academic corpus. More specifically, we compare quality and
efficiency of trained word embeddings to TFIDF representations in modeling
content of scientific articles. We use a word2vec skip-gram model trained on
titles and abstracts of about 70 million scientific articles. Furthermore, we
have developed a benchmark to evaluate content models in a scientific context.
The benchmark is based on a categorization task that matches articles to
journals for about 1.3 million articles published in 2017. Our results show
that content models based on word embeddings are better for titles (short text)
while TFIDF works better for abstracts (longer text). However, the slight
improvement of TFIDF for larger text comes at the expense of 3.7 times more
memory requirement as well as up to 184 times higher computation times which
may make it inefficient for online applications. In addition, we have created a
2-dimensional visualization of the journals modeled via embeddings to
qualitatively inspect the embedding model. This graph shows useful insights and can
be used to find competitive journals or gaps to propose new journals.
| [
{
"version": "v1",
"created": "Sun, 11 Jul 2021 23:58:39 GMT"
}
] | 1,626,134,400,000 | [
[
"Meijer",
"H. J.",
""
],
[
"Truong",
"J.",
""
],
[
"Karimi",
"R.",
""
]
] |
2107.05278 | Erwin de Gelder | Erwin de Gelder, Eric Cator, Jan-Pieter Paardekooper, Olaf Op den
Camp, Bart De Schutter | Constrained Sampling from a Kernel Density Estimator to Generate
Scenarios for the Assessment of Automated Vehicles | 6 pages, 3 figures, to be published in the proceedings of the IEEE
Intelligent Vehicle Symposium Workshops (IV workshop) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The safety assessment of automated vehicles (AVs) is an important aspect of
the development cycle of AVs. A scenario-based assessment approach is accepted
by many players in the field as part of the complete safety assessment. A
scenario is a representation of a situation on the road to which the AV needs
to respond appropriately. One way to generate the required scenario-based test
descriptions is to parameterize the scenarios and to draw these parameters from
a probability density function (pdf). Because the shape of the pdf is unknown
beforehand, assuming a functional form of the pdf and fitting the parameters to
the data may lead to inaccurate fits. As an alternative, Kernel Density
Estimation (KDE) is a promising candidate for estimating the underlying pdf,
because it is flexible with the underlying distribution of the parameters.
Drawing random samples from a pdf estimated with KDE is possible without the
need of evaluating the actual pdf, which makes it suitable for drawing random
samples for, e.g., Monte Carlo methods. Sampling from a KDE while the samples
satisfy a linear equality constraint, however, has not been described in the
literature, as far as the authors know.
In this paper, we propose a method to sample from a pdf estimated using KDE,
such that the samples satisfy a linear equality constraint. We also present an
algorithm of our method in pseudo-code. The method can be used to generate
scenarios that have, e.g., a predetermined starting speed or to generate
different types of scenarios. This paper also shows that the method for
sampling scenarios can be used in case a Singular Value Decomposition (SVD) is
used to reduce the dimension of the parameter vectors.
| [
{
"version": "v1",
"created": "Mon, 12 Jul 2021 09:28:25 GMT"
}
] | 1,626,134,400,000 | [
[
"de Gelder",
"Erwin",
""
],
[
"Cator",
"Eric",
""
],
[
"Paardekooper",
"Jan-Pieter",
""
],
[
"Camp",
"Olaf Op den",
""
],
[
"De Schutter",
"Bart",
""
]
] |
2107.05346 | Muhammad Salman Shaukat | Muhammad Salman Shaukat, Bjarne Christian Hiller, Sebastian Bader,
Thomas Kirste | SimDem A Multi-agent Simulation Environment to Model Persons with
Dementia and their Assistance | 5 pages, accepted in ARIAL@IJCAI 2021: 4th Workshop on AI for Aging,
Rehabilitation, and Intelligent Assisted Living | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Developing artificial intelligence based assistive systems to aid Persons
with Dementia (PwD) requires large amounts of training data. However, data
collection poses ethical, legal, economic, and logistic issues. Synthetic data
generation tools, in this regard, provide a potential solution. However, we
believe that already available such tools do not adequately reflect cognitive
deficiencies in behavior simulation. To counter these issues we propose a
simulation model (SimDem ) that primarily focuses on cognitive impairments
suffered by PwD and can be easily configured and adapted by the users to model
and evaluate assistive solutions.
| [
{
"version": "v1",
"created": "Mon, 12 Jul 2021 12:13:47 GMT"
}
] | 1,626,134,400,000 | [
[
"Shaukat",
"Muhammad Salman",
""
],
[
"Hiller",
"Bjarne Christian",
""
],
[
"Bader",
"Sebastian",
""
],
[
"Kirste",
"Thomas",
""
]
] |
2107.05348 | Zhuo Chen | Zhuo Chen, Jiaoyan Chen, Yuxia Geng, Jeff Z. Pan, Zonggang Yuan and
Huajun Chen | Zero-shot Visual Question Answering using Knowledge Graph | accepted at the International Semantic Web Conference '21 (ISWC 2021) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incorporating external knowledge to Visual Question Answering (VQA) has
become a vital practical need. Existing methods mostly adopt pipeline
approaches with different components for knowledge matching and extraction,
feature learning, etc. However, such pipeline approaches suffer when some
component does not perform well, which leads to error propagation and poor
overall performance. Furthermore, the majority of existing approaches ignore
the answer bias issue -- many answers may have never appeared during training
(i.e., unseen answers) in real-world applications. To bridge these gaps, in this
paper, we propose a Zero-shot VQA algorithm using knowledge graphs and a
mask-based learning mechanism for better incorporating external knowledge, and
present new answer-based Zero-shot VQA splits for the F-VQA dataset.
Experiments show that our method can achieve state-of-the-art performance in
Zero-shot VQA with unseen answers, while dramatically augmenting existing
end-to-end models on the normal F-VQA task.
| [
{
"version": "v1",
"created": "Mon, 12 Jul 2021 12:17:18 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jul 2021 02:50:38 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jul 2021 11:37:13 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Oct 2021 02:01:02 GMT"
}
] | 1,634,601,600,000 | [
[
"Chen",
"Zhuo",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Geng",
"Yuxia",
""
],
[
"Pan",
"Jeff Z.",
""
],
[
"Yuan",
"Zonggang",
""
],
[
"Chen",
"Huajun",
""
]
] |
2107.05363 | Domonkos Czifra | Domonkos Czifra, Endre Cs\'oka, Zsolt Zombori, G\'eza Makay | Towards solving the 7-in-a-row game | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our paper explores the game theoretic value of the 7-in-a-row game. We reduce
the problem to solving a finite board game, which we target using Proof Number
Search. We present a number of heuristic improvements to Proof Number Search
and examine their effect within the context of this particular game. Although
our paper does not solve the 7-in-a-row game, our experiments indicate that we
have made significant progress towards it.
| [
{
"version": "v1",
"created": "Mon, 5 Jul 2021 08:17:12 GMT"
}
] | 1,626,134,400,000 | [
[
"Czifra",
"Domonkos",
""
],
[
"Csóka",
"Endre",
""
],
[
"Zombori",
"Zsolt",
""
],
[
"Makay",
"Géza",
""
]
] |
2107.05850 | Angeline Aguinaldo | Angeline Aguinaldo, William Regli | Encoding Compositionality in Classical Planning Solutions | IJCAI Generalization in Planning Workshop 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Classical AI planners provide solutions to planning problems in the form of
long and opaque text outputs. To aid in the understanding and transferability of
planning solutions, it is necessary to have a rich and comprehensible
representation for both humans and computers beyond the current line-by-line
text notation. In particular, it is desirable to encode the trace of literals
throughout the plan to capture the dependencies between actions selected. The
approach of this paper is to view the actions as maps between literals and the
selected plan as a composition of those maps. The mathematical theory, called
category theory, provides the relevant structures for capturing maps, their
compositions, and maps between compositions. We employ this theory to propose
an algorithm agnostic, model-based representation for domains, problems, and
plans expressed in the commonly used planning description language, PDDL. This
category theoretic representation is accompanied by a graphical syntax in
addition to a linear notation, similar to algebraic expressions, that can be
used to infer literals used at every step of the plan. This provides the
appropriate constructive abstraction and facilitates comprehension for human
operators. In this paper, we demonstrate this on a plan within the Blocksworld
domain.
| [
{
"version": "v1",
"created": "Tue, 13 Jul 2021 05:05:11 GMT"
}
] | 1,626,220,800,000 | [
[
"Aguinaldo",
"Angeline",
""
],
[
"Regli",
"William",
""
]
] |