Column schema (⌀ marks columns that may be null). Each record below lists these fields in this order, separated by ` | `:

| column | dtype | observed values |
| --- | --- | --- |
| id | string | lengths 9–10 |
| submitter ⌀ | string | lengths 5–47 |
| authors | string | lengths 5–1.72k |
| title | string | lengths 11–234 |
| comments ⌀ | string | lengths 1–491 |
| journal-ref ⌀ | string | lengths 4–396 |
| doi ⌀ | string | lengths 13–97 |
| report-no ⌀ | string | lengths 4–138 |
| categories | string | 1 class (cs.AI) |
| license | string | 9 classes |
| abstract | string | lengths 29–3.66k |
| versions | list | lengths 1–21 |
| update_date | int64 | 1,180B–1,718B (Unix epoch milliseconds) |
| authors_parsed | sequence | lengths 1–98 |
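The records below are easiest to work with once parsed into Python dicts keyed by the columns above. The sketch that follows is a minimal, illustrative example only: the dict literal is abridged from the first record below, the loading step is omitted (it depends on how this preview was exported, e.g. as JSON lines), and reading the third element of each `authors_parsed` entry as a name suffix is an assumption based on it being empty throughout this preview.

```python
from datetime import datetime, timezone

# Abridged example record, copied from the first row of this preview.
record = {
    "id": "2107.05877",
    "submitter": "Frederic Lardeux",
    "title": "GA and ILS for optimizing the size of NFA models",
    "categories": "cs.AI",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "versions": [{"version": "v1", "created": "Tue, 13 Jul 2021 06:52:41 GMT"}],
    "update_date": 1_626_220_800_000,  # int64, Unix epoch milliseconds
    "authors_parsed": [
        ["Lardeux", "Frédéric", "", "LERIA"],
        ["Monfroy", "Eric", "", "LERIA"],
    ],
}

# update_date is stored in milliseconds, so divide by 1000 before converting.
updated = datetime.fromtimestamp(record["update_date"] / 1000, tz=timezone.utc)

# Each authors_parsed entry looks like [last, first, suffix, affiliation...];
# rebuild "First Last" strings, skipping empty components.
authors = [
    " ".join(part for part in (first, suffix, last) if part)
    for last, first, suffix, *affiliation in record["authors_parsed"]
]

print(record["id"], "-", record["title"])
print("updated:", updated.date())          # 2021-07-14
print("authors:", "; ".join(authors))      # Frédéric Lardeux; Eric Monfroy
print("latest version:", record["versions"][-1]["version"])  # v1
```

The same handling applies to every record below; only `versions` and `authors_parsed` need list handling, everything else is a flat string or integer.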
2107.05877 | Frederic Lardeux | Fr\'ed\'eric Lardeux (LERIA), Eric Monfroy (LERIA) | GA and ILS for optimizing the size of NFA models | null | The 8th International Conference on Metaheuristics and Nature
Inspired Computing (META), Oct 2021, Marrakech, Morocco | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grammatical inference consists in learning a formal grammar (as a set of
rewrite rules or a finite state machine). We are concerned with learning
Nondeterministic Finite Automata (NFA) of a given size from samples of positive
and negative words. NFA can naturally be modeled in SAT. The standard model [1]
being enormous, we also try a model based on prefixes [2] which generates
smaller instances. We also propose a new model based on suffixes and a hybrid
model based on prefixes and suffixes. We then focus on optimizing the size of
generated SAT instances issued from the hybrid models. We present two
techniques to optimize this combination, one based on Iterated Local Search
(ILS), the second one based on Genetic Algorithm (GA). Optimizing the
combination significantly reduces the SAT instances and their solving time, but
at the cost of longer generation time. We, therefore, study the balance between
generation time and solving time thanks to some experimental comparisons, and
we analyze our various model improvements.
| [
{
"version": "v1",
"created": "Tue, 13 Jul 2021 06:52:41 GMT"
}
] | 1,626,220,800,000 | [
[
"Lardeux",
"Frédéric",
"",
"LERIA"
],
[
"Monfroy",
"Eric",
"",
"LERIA"
]
] |
2107.05949 | Hamed Rahimi | Hamed Rahimi, Iago Felipe Trentin, Fano Ramparany, Olivier Boissier | Q-SMASH: Q-Learning-based Self-Adaptation of Human-Centered Internet of
Things | Submitted to wi-iat2021. arXiv admin note: text overlap with
arXiv:2105.14915 | null | 10.1145/3486622.3493974 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the number of Human-Centered Internet of Things (HCIoT) applications
increases, the self-adaptation of its services and devices is becoming a
fundamental requirement for addressing the uncertainties of the environment in
decision-making processes. Self-adaptation of HCIoT aims to manage run-time
changes in a dynamic environment and to adjust the functionality of IoT objects
in order to achieve desired goals during execution. SMASH is a semantic-enabled
multi-agent system for self-adaptation of HCIoT that autonomously adapts IoT
objects to uncertainties of their environment. SMASH addresses the
self-adaptation of IoT applications only according to the human values of
users, while the behavior of users is not addressed. This article presents
Q-SMASH: a multi-agent reinforcement learning-based approach for
self-adaptation of IoT objects in human-centered environments. Q-SMASH aims to
learn the behaviors of users while respecting human values. The learning
ability of Q-SMASH allows it to adapt itself to the behavioral change of users
and make more accurate decisions in different states and situations.
| [
{
"version": "v1",
"created": "Tue, 13 Jul 2021 09:41:05 GMT"
}
] | 1,675,641,600,000 | [
[
"Rahimi",
"Hamed",
""
],
[
"Trentin",
"Iago Felipe",
""
],
[
"Ramparany",
"Fano",
""
],
[
"Boissier",
"Olivier",
""
]
] |
2107.06031 | Alberto Barbado Gonzalez | Alberto Barbado, \'Oscar Corcho | Vehicle Fuel Optimization Under Real-World Driving Conditions: An
Explainable Artificial Intelligence Approach | 30 pages, 15 Figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fuel optimization of diesel and petrol vehicles within industrial fleets is
critical for mitigating costs and reducing emissions. This objective is
achievable by acting on fuel-related factors, such as the driving behaviour
style.
In this study, we developed an Explainable Boosting Machine (EBM) model to
predict fuel consumption of different types of industrial vehicles, using
real-world data collected from 2020 to 2021. This Machine Learning model also
explains the relationship between the input factors and fuel consumption,
quantifying the individual contribution of each one of them. The explanations
provided by the model are compared with domain knowledge in order to see if
they are aligned. The results show that 70% of the categories associated with
the fuel factors are similar to the previous literature.
With the EBM algorithm, we estimate that optimizing driving behaviour
decreases fuel consumption between 12% and 15% in a large fleet (more than 1000
vehicles).
| [
{
"version": "v1",
"created": "Tue, 13 Jul 2021 12:39:59 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jul 2021 09:53:09 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jul 2021 12:09:21 GMT"
}
] | 1,626,998,400,000 | [
[
"Barbado",
"Alberto",
""
],
[
"Corcho",
"Óscar",
""
]
] |
2107.06071 | Dorien Herremans | Dorien Herremans | aiSTROM -- A roadmap for developing a successful AI strategy | null | IEEE Access, 2021 | 10.1109/ACCESS.2021.3127548 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A total of 34% of AI research and development projects fail or are
abandoned, according to a recent survey by Rackspace Technology of 1,870
companies. We propose a new strategic framework, aiSTROM, that empowers
managers to create a successful AI strategy based on a thorough literature
review. This provides a unique and integrated approach that guides managers and
lead developers through the various challenges in the implementation process.
In the aiSTROM framework, we start by identifying the top n potential projects
(typically 3-5). For each of those, seven areas of focus are thoroughly
analysed. These areas include creating a data strategy that takes into account
unique cross-departmental machine learning data requirements, security, and
legal requirements. aiSTROM then guides managers to think about how to put
together an interdisciplinary artificial intelligence (AI) implementation team
given the scarcity of AI talent. Once an AI team strategy has been established,
it needs to be positioned within the organization, either cross-departmental or
as a separate division. Other considerations include AI as a service (AIaas),
or outsourcing development. Looking at new technologies, we have to consider
challenges such as bias, the legality of black-box models, and keeping humans in
the loop. Next, like any project, we need value-based key performance
indicators (KPIs) to track and validate the progress. Depending on the
company's risk-strategy, a SWOT analysis (strengths, weaknesses, opportunities,
and threats) can help further classify the shortlisted projects. Finally, we
should make sure that our strategy includes continuous education of employees
to enable a culture of adoption. This unique and comprehensive framework offers
a valuable, literature supported, tool for managers and lead developers.
| [
{
"version": "v1",
"created": "Fri, 25 Jun 2021 08:40:15 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Nov 2021 06:15:07 GMT"
}
] | 1,637,020,800,000 | [
[
"Herremans",
"Dorien",
""
]
] |
2107.06146 | Carl Corea | Carl Corea, Michael Fellmann, Patrick Delfmann | Ontology-Based Process Modelling -- Will we live to see it? | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In theory, ontology-based process modelling (OBPM) bears great potential to
extend business process management. Many works have studied OBPM and are clear
on the potential amenities, such as eliminating ambiguities or enabling
advanced reasoning over company processes. However, despite this approval in
academia, a widespread industry adoption is still nowhere to be seen. This can
be mainly attributed to the fact that it still requires large amounts of manual
labour to initially create ontologies and annotations to process models. As
long as these problems are not addressed, implementing OBPM seems unfeasible in
practice. In this work, we therefore identify requirements needed for a
successful implementation of OBPM and assess the current state of research
w.r.t. these requirements. Our results indicate that the research progress for
means to facilitate OBPM is still alarmingly low and there needs to be urgent
work on extending existing approaches.
| [
{
"version": "v1",
"created": "Mon, 12 Jul 2021 09:44:17 GMT"
}
] | 1,626,220,800,000 | [
[
"Corea",
"Carl",
""
],
[
"Fellmann",
"Michael",
""
],
[
"Delfmann",
"Patrick",
""
]
] |
2107.06413 | Guilherme Paulino-Passos | Guilherme Paulino-Passos, Francesca Toni | Monotonicity and Noise-Tolerance in Case-Based Reasoning with Abstract
Argumentation (with Appendix) | Accepted for KR2021. Includes Appendix. arXiv admin note: substantial
text overlap with arXiv:2007.05284 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Recently, abstract argumentation-based models of case-based reasoning
($AA{\text -} CBR$ in short) have been proposed, originally inspired by the
legal domain, but also applicable as classifiers in different scenarios.
However, the formal properties of $AA{\text -} CBR$ as a reasoning system
remain largely unexplored. In this paper, we focus on analysing the
non-monotonicity properties of a regular version of $AA{\text -} CBR$ (that we
call $AA{\text -} CBR_{\succeq}$). Specifically, we prove that $AA{\text -}
CBR_{\succeq}$ is not cautiously monotonic, a property frequently considered
desirable in the literature. We then define a variation of $AA{\text -}
CBR_{\succeq}$ which is cautiously monotonic. Further, we prove that such
variation is equivalent to using $AA{\text -} CBR_{\succeq}$ with a restricted
casebase consisting of all "surprising" and "sufficient" cases in the original
casebase. As a by-product, we prove that this variation of $AA{\text -}
CBR_{\succeq}$ is cumulative, rationally monotonic, and empowers a principled
treatment of noise in "incoherent" casebases. Finally, we illustrate $AA{\text
-} CBR$ and cautious monotonicity questions on a case study on the U.S. Trade
Secrets domain, a legal casebase.
| [
{
"version": "v1",
"created": "Tue, 13 Jul 2021 22:10:24 GMT"
}
] | 1,626,307,200,000 | [
[
"Paulino-Passos",
"Guilherme",
""
],
[
"Toni",
"Francesca",
""
]
] |
2107.06434 | Qizhen Zhang | Qizhen Zhang, Chris Lu, Animesh Garg, Jakob Foerster | Centralized Model and Exploration Policy for Multi-Agent RL | Accepted to AAMAS 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) in partially observable, fully cooperative
multi-agent settings (Dec-POMDPs) can in principle be used to address many
real-world challenges such as controlling a swarm of rescue robots or a team of
quadcopters. However, Dec-POMDPs are significantly harder to solve than
single-agent problems, with the former being NEXP-complete and the latter,
MDPs, being just P-complete. Hence, current RL algorithms for Dec-POMDPs suffer
from poor sample complexity, which greatly reduces their applicability to
practical problems where environment interaction is costly. Our key insight is
that using just a polynomial number of samples, one can learn a centralized
model that generalizes across different policies. We can then optimize the
policy within the learned model instead of the true system, without requiring
additional environment interactions. We also learn a centralized exploration
policy within our model that learns to collect additional data in state-action
regions with high model uncertainty. We empirically evaluate the proposed
model-based algorithm, MARCO, in three cooperative communication tasks, where
it improves sample efficiency by up to 20x. Finally, to investigate the
theoretical sample complexity, we adapt an existing model-based method for
tabular MDPs to Dec-POMDPs, and prove that it achieves polynomial sample
complexity.
| [
{
"version": "v1",
"created": "Wed, 14 Jul 2021 00:34:08 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Feb 2022 02:12:12 GMT"
}
] | 1,644,278,400,000 | [
[
"Zhang",
"Qizhen",
""
],
[
"Lu",
"Chris",
""
],
[
"Garg",
"Animesh",
""
],
[
"Foerster",
"Jakob",
""
]
] |
2107.06547 | Sirko Schindler | Barbara Magagna and Ilaria Rosati and Maria Stoica and Sirko Schindler
and Gwenaelle Moncoiffe and Anusuriya Devaraju and Johannes Peterseil and
Robert Huber | The I-ADOPT Interoperability Framework for FAIRer data descriptions of
biodiversity | submitted to S4BioDiv 2021: 3rd International Workshop on Semantics
for Biodiversity, September 15, 2021, Bozen, Italy | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Biodiversity, the variation within and between species and ecosystems, is
essential for human well-being and the equilibrium of the planet. It is
critical for the sustainable development of human society and is an important
global challenge. Biodiversity research has become increasingly data-intensive
and it deals with heterogeneous and distributed data made available by global
and regional initiatives, such as GBIF, ILTER, LifeWatch, BODC, PANGAEA, and
TERN, that apply different data management practices. In particular, a variety
of metadata and semantic resources have been produced by these initiatives to
describe biodiversity observations, introducing interoperability issues across
data management systems. To address these challenges, the InteroperAble
Descriptions of Observable Property Terminology WG (I-ADOPT WG) was formed by a
group of international terminology providers and data center managers in 2019
with the aim to build a common approach to describe what is observed, measured,
calculated, or derived. Based on an extensive analysis of existing semantic
representations of variables, the WG has recently published the I-ADOPT
framework ontology to facilitate interoperability between existing semantic
resources and support the provision of machine-readable variable descriptions
whose components are mapped to FAIR vocabulary terms. The I-ADOPT framework
ontology defines a set of high level semantic components that can be used to
describe a variety of patterns commonly found in scientific observations. This
contribution will focus on how the I-ADOPT framework can be applied to
represent variables commonly used in the biodiversity domain.
| [
{
"version": "v1",
"created": "Wed, 14 Jul 2021 08:30:10 GMT"
}
] | 1,626,307,200,000 | [
[
"Magagna",
"Barbara",
""
],
[
"Rosati",
"Ilaria",
""
],
[
"Stoica",
"Maria",
""
],
[
"Schindler",
"Sirko",
""
],
[
"Moncoiffe",
"Gwenaelle",
""
],
[
"Devaraju",
"Anusuriya",
""
],
[
"Peterseil",
"Johannes",
""
],
[
"Huber",
"Robert",
""
]
] |
2107.06638 | Anurag Sarkar | Anurag Sarkar, Seth Cooper | Procedural Content Generation using Behavior Trees (PCGBT) | Accepted to EXAG 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Behavior trees (BTs) are a popular method for modeling NPC and enemy AI
behavior and have been widely used in commercial games. In this work, rather
than use BTs to model game playing agents, we use them for modeling game design
agents, defining behaviors as content generation tasks rather than in-game
actions. Similar to how traditional BTs enable modeling behaviors in a modular
and dynamic manner, BTs for PCG enable simple subtrees for generating parts of
levels to be combined modularly to form complex trees for generating whole
levels as well as generators that can dynamically vary the generated content.
We refer to this approach as Procedural Content Generation using Behavior
Trees, or PCGBT, and demonstrate it by using BTs to model generators for Super
Mario Bros., Mega Man and Metroid levels as well as dungeon layouts and discuss
several ways in which this paradigm could be applied and extended in the
future.
| [
{
"version": "v1",
"created": "Thu, 24 Jun 2021 17:54:00 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Oct 2021 03:24:41 GMT"
}
] | 1,633,910,400,000 | [
[
"Sarkar",
"Anurag",
""
],
[
"Cooper",
"Seth",
""
]
] |
2107.06641 | Haochen Liu | Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain,
Yunhao Liu, Anil K. Jain, Jiliang Tang | Trustworthy AI: A Computational Perspective | 55 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past few decades, artificial intelligence (AI) technology has
experienced swift developments, changing everyone's daily life and profoundly
altering the course of human society. The intention of developing AI is to
benefit humans, by reducing human labor, bringing everyday convenience to human
lives, and promoting social good. However, recent research and AI applications
show that AI can cause unintentional harm to humans, such as making unreliable
decisions in safety-critical scenarios or undermining fairness by inadvertently
discriminating against one group. Thus, trustworthy AI has attracted immense
attention recently, which requires careful consideration to avoid the adverse
effects that AI may bring to humans, so that humans can fully trust and live in
harmony with AI technologies.
Recent years have witnessed a tremendous amount of research on trustworthy
AI. In this survey, we present a comprehensive survey of trustworthy AI from a
computational perspective, to help readers understand the latest technologies
for achieving trustworthy AI. Trustworthy AI is a large and complex area,
involving various dimensions. In this work, we focus on six of the most crucial
dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii)
Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v)
Accountability & Auditability, and (vi) Environmental Well-Being. For each
dimension, we review the recent related technologies according to a taxonomy
and summarize their applications in real-world systems. We also discuss the
accordant and conflicting interactions among different dimensions and discuss
potential aspects for trustworthy AI to investigate in the future.
| [
{
"version": "v1",
"created": "Mon, 12 Jul 2021 14:21:46 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Aug 2021 18:00:23 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Aug 2021 03:32:04 GMT"
}
] | 1,629,417,600,000 | [
[
"Liu",
"Haochen",
""
],
[
"Wang",
"Yiqi",
""
],
[
"Fan",
"Wenqi",
""
],
[
"Liu",
"Xiaorui",
""
],
[
"Li",
"Yaxin",
""
],
[
"Jain",
"Shaili",
""
],
[
"Liu",
"Yunhao",
""
],
[
"Jain",
"Anil K.",
""
],
[
"Tang",
"Jiliang",
""
]
] |
2107.06750 | Zarathustra Amadeus Goertzel | Zarathustra Goertzel, Karel Chvalovsk\'y, Jan Jakub\r{u}v, Miroslav
Ol\v{s}\'ak, Josef Urban | Fast and Slow Enigmas and Parental Guidance | 23 pages, 11 tables, 1 figure, submitted to FroCoS 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe several additions to the ENIGMA system that guides clause
selection in the E automated theorem prover. First, we significantly speed up
its neural guidance by adding server-based GPU evaluation. The second addition
is motivated by fast weight-based rejection filters that are currently used in
systems like E and Prover9. Such systems can be made more intelligent by
instead training fast versions of ENIGMA that implement more intelligent
pre-filtering. This results in combinations of trainable fast and slow thinking
that improve over both the fast-only and slow-only methods. The third addition
is based on "judging the children by their parents", i.e., possibly rejecting
an inference before it produces a clause. This is motivated by standard
evolutionary mechanisms, where there is always a cost to producing all possible
offspring in the current population. This saves time by not evaluating all
clauses by more expensive methods and provides a complementary view of the
generated clauses. The methods are evaluated on a large benchmark coming from
the Mizar Mathematical Library, showing good improvements over the state of the
art.
| [
{
"version": "v1",
"created": "Wed, 14 Jul 2021 14:53:08 GMT"
}
] | 1,626,393,600,000 | [
[
"Goertzel",
"Zarathustra",
""
],
[
"Chvalovský",
"Karel",
""
],
[
"Jakubův",
"Jan",
""
],
[
"Olšák",
"Miroslav",
""
],
[
"Urban",
"Josef",
""
]
] |
2107.06840 | Akansel Cosgun | Dylan Klein, Akansel Cosgun | Mixing Human Demonstrations with Self-Exploration in Experience Replay
for Deep Reinforcement Learning | 2 pages. Submitted to ICDL 2021 Workshop on Human aligned
Reinforcement Learning for Autonomous Agents and Robots | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the effect of using human demonstration data in the replay
buffer for Deep Reinforcement Learning. We use a policy gradient method with a
modified experience replay buffer where a human demonstration experience is
sampled with a given probability. We analyze different ratios of using
demonstration data in a task where an agent attempts to reach a goal while
avoiding obstacles. Our results suggest that while the agents trained by pure
self-exploration and pure demonstration had similar success rates, the pure
demonstration model converged faster to solutions with fewer steps.
| [
{
"version": "v1",
"created": "Wed, 14 Jul 2021 16:55:30 GMT"
}
] | 1,626,307,200,000 | [
[
"Klein",
"Dylan",
""
],
[
"Cosgun",
"Akansel",
""
]
] |
2107.07031 | Francesco Massari | Francesco Massari, Martin Biehl, Lisa Meeden, Ryota Kanai | Experimental Evidence that Empowerment May Drive Exploration in
Sparse-Reward Environments | 6 pages, 3 figures, to be published in proceedings of the
International Conference on Development and Learning 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning (RL) is known to be often unsuccessful in environments
with sparse extrinsic rewards. A possible countermeasure is to endow RL agents
with an intrinsic reward function, or 'intrinsic motivation', which rewards the
agent based on certain features of the current sensor state. An intrinsic
reward function based on the principle of empowerment assigns rewards
proportional to the amount of control the agent has over its own sensors. We
implemented a variation on a recently proposed intrinsically motivated agent,
which we refer to as the 'curious' agent, and an empowerment-inspired agent.
The former leverages sensor state encoding with a variational autoencoder,
while the latter predicts the next sensor state via a variational information
bottleneck. We compared the performance of both agents to that of an advantage
actor-critic baseline in four sparse reward grid worlds. Both the empowerment
agent and its curious competitor seem to benefit to similar extents from their
intrinsic rewards. This provides some experimental support to the conjecture
that empowerment can be used to drive exploration.
| [
{
"version": "v1",
"created": "Wed, 14 Jul 2021 22:52:38 GMT"
}
] | 1,626,393,600,000 | [
[
"Massari",
"Francesco",
""
],
[
"Biehl",
"Martin",
""
],
[
"Meeden",
"Lisa",
""
],
[
"Kanai",
"Ryota",
""
]
] |
2107.07066 | Ai Guanqun | Guanqun Ai, Xingquan Zuo, Gang chen, and Binglin Wu | Deep Reinforcement Learning based Dynamic Optimization of Bus Timetable | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bus timetable optimization is a key issue to reduce operational cost of bus
companies and improve the service quality. Existing methods use exact or
heuristic algorithms to optimize the timetable in an offline manner. In
practice, the passenger flow may change significantly over time. Timetables
determined offline cannot adjust the departure interval to satisfy the
changed passenger flow. Aiming at improving the online performance of bus
timetable, we propose a Deep Reinforcement Learning based bus Timetable dynamic
Optimization method (DRL-TO). In this method, the timetable optimization is
considered as a sequential decision problem. A Deep Q-Network (DQN) is employed
as the decision model to determine whether to dispatch a bus service during
each minute of the service period. Therefore, the departure intervals of bus
services are determined in real time in accordance with passenger demand. We
identify several new and useful state features for the DQN, including the load
factor, carrying capacity utilization rate, and the number of stranding
passengers. Taking into account both the interests of the bus company and
passengers, a reward function is designed, which includes the indicators of
full load rate, empty load rate, passengers' waiting time, and the number of
stranding passengers. Building on an existing method for calculating the
carrying capacity, we develop a new technique to enhance the matching degree at
each bus station. Experiments demonstrate that compared with the timetable
generated by the state-of-the-art bus timetable optimization approach based on
a memetic algorithm (BTOA-MA), Genetic Algorithm (GA) and the manual method,
DRL-TO can dynamically determine the departure intervals based on the real-time
passenger flow, saving 8$\%$ of vehicles and reducing 17$\%$ of passengers'
waiting time on average.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2021 01:22:49 GMT"
}
] | 1,626,393,600,000 | [
[
"Ai",
"Guanqun",
""
],
[
"Zuo",
"Xingquan",
""
],
[
"chen",
"Gang",
""
],
[
"Wu",
"Binglin",
""
]
] |
2107.07114 | Yibo Hu | Yibo Hu, Latifur Khan | Uncertainty-Aware Reliable Text Classification | KDD 2021 | null | 10.1145/3447548.3467382 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep neural networks have significantly contributed to the success in
predictive accuracy for classification tasks. However, they tend to make
over-confident predictions in real-world settings, where domain shifting and
out-of-distribution (OOD) examples exist. Most research on uncertainty
estimation focuses on computer vision because it provides visual validation on
uncertainty quality. However, few have been presented in the natural language
processing domain. Unlike Bayesian methods that indirectly infer uncertainty
through weight uncertainties, current evidential uncertainty-based methods
explicitly model the uncertainty of class probabilities through subjective
opinions. They further consider inherent uncertainty in data with different
root causes, vacuity (i.e., uncertainty due to a lack of evidence) and
dissonance (i.e., uncertainty due to conflicting evidence). In our paper, we
first apply evidential uncertainty in OOD detection for text classification
tasks. We propose an inexpensive framework that adopts both auxiliary outliers
and pseudo off-manifold samples to train the model with prior knowledge of a
certain class, which has high vacuity for OOD samples. Extensive empirical
experiments demonstrate that our model based on evidential uncertainty
outperforms other counterparts for detecting OOD examples. Our approach can be
easily deployed to traditional recurrent neural networks and fine-tuned
pre-trained transformers.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2021 04:39:55 GMT"
}
] | 1,626,393,600,000 | [
[
"Hu",
"Yibo",
""
],
[
"Khan",
"Latifur",
""
]
] |
2107.07124 | Zitao Liu | Jiahao Chen, Hang Li, Wenbiao Ding, Zitao Liu | An Educational System for Personalized Teacher Recommendation in K-12
Online Classrooms | AIED'21: The 22nd International Conference on Artificial Intelligence
in Education, 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a simple yet effective solution to build practical
teacher recommender systems for online one-on-one classes. Our system consists
of (1) a pseudo matching score module that provides reliable training labels;
(2) a ranking model that scores every candidate teacher; (3) a novelty boosting
module that gives additional opportunities to new teachers; and (4) a diversity
metric that guardrails the recommended results to reduce the chance of
collision. Offline experimental results show that our approach outperforms a
wide range of baselines. Furthermore, we show that our approach is able to
reduce the number of student-teacher matching attempts from 7.22 to 3.09 in a
five-month observation on a third-party online education platform.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2021 05:04:28 GMT"
}
] | 1,626,393,600,000 | [
[
"Chen",
"Jiahao",
""
],
[
"Li",
"Hang",
""
],
[
"Ding",
"Wenbiao",
""
],
[
"Liu",
"Zitao",
""
]
] |
2107.07136 | Mohit Kumar | Mohit Kumar, Samuel Kolb, Luc De Raedt and Stefano Teso | Learning Mixed-Integer Linear Programs from Contextual Examples | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Mixed-integer linear programs (MILPs) are widely used in artificial
intelligence and operations research to model complex decision problems like
scheduling and routing. Designing such programs however requires both domain
and modelling expertise. In this paper, we study the problem of acquiring MILPs
from contextual examples, a novel and realistic setting in which examples
capture solutions and non-solutions within a specific context. The resulting
learning problem involves acquiring continuous parameters -- namely, a cost
vector and a feasibility polytope -- but has a distinctly combinatorial flavor.
To solve this complex problem, we also contribute MISSLE, an algorithm for
learning MILPs from contextual examples. MISSLE uses a variant of stochastic
local search that is guided by the gradient of a continuous surrogate loss
function. Our empirical evaluation on synthetic data shows that MISSLE acquires
better MILPs faster than alternatives based on stochastic local search and
gradient descent.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2021 05:45:52 GMT"
}
] | 1,626,393,600,000 | [
[
"Kumar",
"Mohit",
""
],
[
"Kolb",
"Samuel",
""
],
[
"De Raedt",
"Luc",
""
],
[
"Teso",
"Stefano",
""
]
] |
2107.07229 | Ishan Tarunesh | Ishan Tarunesh, Somak Aditya and Monojit Choudhury | Trusting RoBERTa over BERT: Insights from CheckListing the Natural
Language Inference Task | 15 pages, 5 figures and 9 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recent state-of-the-art natural language understanding (NLU) systems
often behave unpredictably, failing on simpler reasoning examples. Despite
this, there has been limited focus on quantifying progress towards systems with
more predictable behavior. We think that a capability-wise behavioral summary of
reasoning is a step towards bridging this gap. We create a CheckList test-suite
(184K examples) for the Natural Language Inference (NLI) task, a representative
NLU task. We benchmark state-of-the-art NLI systems on this test-suite, which
reveals fine-grained insights into the reasoning abilities of BERT and RoBERTa.
Our analysis further reveals inconsistencies of the models on examples derived
from the same template or distinct templates but pertaining to same reasoning
capability, indicating that generalizing the models' behavior through
observations made on a CheckList is non-trivial. Through a user study, we find
that users were able to utilize behavioral information to generalize much
better for examples predicted from RoBERTa, compared to that of BERT.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2021 10:08:18 GMT"
}
] | 1,626,393,600,000 | [
[
"Tarunesh",
"Ishan",
""
],
[
"Aditya",
"Somak",
""
],
[
"Choudhury",
"Monojit",
""
]
] |
2107.07233 | Sagnik Sarkar | Shaashwat Agrawal, Sagnik Sarkar, Mamoun Alazab, Praveen Kumar Reddy
Maddikunta, Thippa Reddy Gadekallu and Quoc-Viet Pham | Genetic CFL: Optimization of Hyper-Parameters in Clustered Federated
Learning | 7 pages, 4 figures, 4 tables | Computational Intelligence and Neuroscience, vol. 2021, Article ID
7156420, 10 pages, 2021 | 10.1155/2021/7156420 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) is a distributed model for deep learning that
integrates client-server architecture, edge computing, and real-time
intelligence. FL has the capability of revolutionizing machine learning (ML)
but lacks practicality of implementation due to technological
limitations, communication overhead, non-IID (independent and identically
distributed) data, and privacy concerns. Training a ML model over heterogeneous
non-IID data highly degrades the convergence rate and performance. The existing
traditional and clustered FL algorithms exhibit two main limitations, including
inefficient client training and static hyper-parameter utilization. To overcome
these limitations, we propose a novel hybrid algorithm, namely genetic
clustered FL (Genetic CFL), that clusters edge devices based on the training
hyper-parameters and genetically modifies the parameters cluster-wise. Then, we
introduce an algorithm that drastically increases the individual cluster
accuracy by integrating the density-based clustering and genetic
hyper-parameter optimization. The results are benchmarked using the MNIST
handwritten digit dataset and the CIFAR-10 dataset. The proposed genetic CFL
shows significant improvements and works well with realistic cases of non-IID
and ambiguous data.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2021 10:16:05 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jul 2021 13:15:20 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Nov 2021 11:53:16 GMT"
}
] | 1,637,539,200,000 | [
[
"Agrawal",
"Shaashwat",
""
],
[
"Sarkar",
"Sagnik",
""
],
[
"Alazab",
"Mamoun",
""
],
[
"Maddikunta",
"Praveen Kumar Reddy",
""
],
[
"Gadekallu",
"Thippa Reddy",
""
],
[
"Pham",
"Quoc-Viet",
""
]
] |
2107.07693 | Wen-Ji Zhou | Yongqing Gao, Guangda Huzhang, Weijie Shen, Yawen Liu, Wen-Ji Zhou,
Qing Da, Yang Yu | Imitate TheWorld: A Search Engine Simulation Platform | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent E-commerce applications benefit from the growth of deep learning
techniques. However, we notice that many works attempt to maximize business
objectives by closely matching offline labels which follow the supervised
learning paradigm. This results in models that obtain high offline performance in
terms of Area Under Curve (AUC) and Normalized Discounted Cumulative Gain
(NDCG), but cannot consistently increase revenue metrics such as the purchase
amount of users. To address these issues, we build a simulated search engine AESim
that can properly give feedback by a well-trained discriminator for generated
pages, as a dynamic dataset. Different from previous simulation platforms which
lose connection with the real world, ours depends on the real data in
AliExpress Search: we use adversarial learning to generate virtual users and
use Generative Adversarial Imitation Learning (GAIL) to capture behavior
patterns of users. Our experiments also show AESim can better reflect the
online performance of ranking models than classic ranking metrics, implying
AESim can play a surrogate of AliExpress Search and evaluate models without
going online.
| [
{
"version": "v1",
"created": "Fri, 16 Jul 2021 03:55:33 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Aug 2021 03:52:32 GMT"
}
] | 1,628,640,000,000 | [
[
"Gao",
"Yongqing",
""
],
[
"Huzhang",
"Guangda",
""
],
[
"Shen",
"Weijie",
""
],
[
"Liu",
"Yawen",
""
],
[
"Zhou",
"Wen-Ji",
""
],
[
"Da",
"Qing",
""
],
[
"Yu",
"Yang",
""
]
] |
2107.08252 | Yuliya Lierler | Yuliya Lierler | Constraint Answer Set Programming: Integrational and Translational (or
SMT-based) Approaches | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Constraint answer set programming or CASP, for short, is a hybrid approach in
automated reasoning putting together the advances of distinct research areas
such as answer set programming, constraint processing, and satisfiability
modulo theories. Constraint answer set programming demonstrates promising
results, including the development of a multitude of solvers: acsolver,
clingcon, ezcsp, idp, inca, dingo, mingo, aspmt, clingo[l,dl], and ezsmt. It
opens new horizons for declarative programming applications such as solving
complex train scheduling problems. Systems designed to find solutions to
constraint answer set programs can be grouped according to their construction
into, what we call, integrational or translational approaches. The focus of
this paper is an overview of the key ingredients of the design of constraint
answer set solvers drawing distinctions and parallels between integrational and
translational approaches. The paper also provides a glimpse at the kind of
programs its users develop by utilizing a CASP encoding of Travelling Salesman
problem for illustration. In addition, we place the CASP technology on the map
among its automated reasoning peers as well as discuss future possibilities for
the development of CASP.
| [
{
"version": "v1",
"created": "Sat, 17 Jul 2021 14:58:57 GMT"
}
] | 1,626,739,200,000 | [
[
"Lierler",
"Yuliya",
""
]
] |
2107.08403 | Laura Giordano | Laura Giordano, Alberto Martelli, and Daniele Theseider Dupr\'e | Reasoning about actions with EL ontologies with temporal answer sets | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach based on Answer Set Programming for reasoning about
actions with domain descriptions including ontological knowledge, expressed in
the lightweight description logic EL^\bot. We consider a temporal action
theory, which allows for non-deterministic actions and causal rules to deal
with ramifications, and whose extensions are defined by temporal answer sets.
We provide conditions under which action consistency can be guaranteed with
respect to an ontology, by a polynomial encoding of an action theory extended
with an EL^\bot knowledge base (in normal form) into a temporal action theory.
| [
{
"version": "v1",
"created": "Sun, 18 Jul 2021 09:43:53 GMT"
}
] | 1,626,739,200,000 | [
[
"Giordano",
"Laura",
""
],
[
"Martelli",
"Alberto",
""
],
[
"Dupré",
"Daniele Theseider",
""
]
] |
2107.09129 | Luis Olsina PhD | Luis Olsina | Thing Foundational Ontology: ThingFO v1.3's Terms, Properties,
Relationships and Axioms | 10 pgs | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This preprint specifies and defines all terms, properties, relationships and
axioms of ThingFO (Thing Foundational Ontology) v1.3, which is a slightly
updated version of its predecessor, ThingFO v1.2. It is an ontology for
particular and universal Things placed at the foundational level in the context
of a five-tier ontological architecture named FCD-OntoArch (Foundational, Core,
Domain and instance Ontological Architecture for sciences). Figure 2 depicts
its five tiers, which entail Foundational, Core, Top-Domain, Low-Domain and
Instance levels. Two guidelines and three rules that guide the placement and
constraints of ontologies in this ontological architecture are documented in a
separate section. Each level is populated with ontological components or, in
other words, ontologies. Ontologies at the same level can be related to each
other, except at the foundational level, where only the ThingFO ontology is
found. In addition, ontologies' terms and relationships at lower levels can be
semantically enriched by ontologies' terms and relationships from the higher
levels. ThingFO and ontologies at the core level such as ProcessCO,
SituationCO, among others, are domain independent or neutral. ThingFO is made
up of three main concepts, namely: Thing, Thing Category, and Assertion that
represents human expressions about different aspects of particular and
universal Things. Figure 1 shows the conceptualization of ThingFO specified in
the UML language. Note that annotations of updates from the previous version
(v1.2) to the current one (v1.3) can be found in Appendix A.
| [
{
"version": "v1",
"created": "Mon, 19 Jul 2021 20:04:05 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Feb 2022 13:41:22 GMT"
}
] | 1,646,092,800,000 | [
[
"Olsina",
"Luis",
""
]
] |
2107.09288 | Xueping Peng | Xueping Peng and Guodong Long and Sen Wang and Jing Jiang and Allison
Clarke and Clement Schlegel and Chengqi Zhang | MIPO: Mutual Integration of Patient Journey and Medical Ontology for
Healthcare Representation Learning | 9 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Healthcare representation learning on the Electronic Health Records is
crucial for downstream medical prediction tasks in health informatics. Many NLP
techniques, such as RNN and self-attention, have been adapted to learn medical
representations from hierarchical and time-stamped EHRs data, but fail when
they lack either general or task-specific data. Hence, some recent works train
healthcare representations by incorporating medical ontology, by
self-supervised tasks like diagnosis prediction, but (1) the small-scale,
monotonous ontology is insufficient for robust learning, and (2) critical
contexts or dependencies underlying patient journeys are barely exploited to
enhance ontology learning. To address the challenges, we propose a
Transformer-based representation learning approach: Mutual Integration of
Patient journey and medical Ontology (MIPO), which is a robust end-to-end
framework. Specifically, the proposed method focuses on task-specific
representation learning by a sequential diagnoses predictive task, which is
also beneficial to the ontology-based disease typing task. To integrate
information in the patient's visiting records, we further introduce a
graph-embedding module, which can mitigate the challenge of data insufficiency
in healthcare. In this way, MIPO creates a mutual integration to benefit both
healthcare representation learning and medical ontology embedding. Such an
effective integration is guaranteed by joint training over fused embeddings of
the two modules, targeting both task-specific prediction and ontology-based
disease typing tasks simultaneously. Extensive experiments conducted on two
real-world benchmark datasets have shown MIPO consistently achieves better
performance than state-of-the-art methods no matter whether the training data
is sufficient or not. Also, MIPO derives more interpretable diagnosis embedding
results compared to its counterparts.
| [
{
"version": "v1",
"created": "Tue, 20 Jul 2021 07:04:52 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jul 2021 01:00:00 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Jul 2021 03:01:26 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Feb 2022 03:52:22 GMT"
}
] | 1,644,883,200,000 | [
[
"Peng",
"Xueping",
""
],
[
"Long",
"Guodong",
""
],
[
"Wang",
"Sen",
""
],
[
"Jiang",
"Jing",
""
],
[
"Clarke",
"Allison",
""
],
[
"Schlegel",
"Clement",
""
],
[
"Zhang",
"Chengqi",
""
]
] |
2107.09801 | Borko Bo\v{s}kovi\'c | Borko Bo\v{s}kovi\'c, Janez Brest | Two-phase Optimization of Binary Sequences with Low Peak Sidelobe Level
Value | 8 pages, 4 figures, 5 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The search for binary sequences with low peak sidelobe level value represents
a formidable computational problem. To locate better sequences for this
problem, we designed a stochastic algorithm that uses two fitness functions. In
these fitness functions, the value of the autocorrelation function has a
different impact on the final fitness value. It is defined with the value of
the exponent over the autocorrelation function values. Each function is used in
the corresponding optimization phase, and the optimization process switches
between these two phases until the stopping condition is satisfied. The
proposed algorithm was implemented using the compute unified device
architecture and therefore allowed us to exploit the computational power of
graphics processing units. This algorithm was tested on sequences with lengths
$L = 2^m - 1$, for $14 \le m \le 20$. From the obtained results it is evident
that the usage of two fitness functions improved the efficiency of the
algorithm significantly, new best-known solutions were achieved, and the
achieved PSL values were significantly less than $\sqrt{L}$.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2021 13:59:55 GMT"
}
] | 1,626,912,000,000 | [
[
"Bošković",
"Borko",
""
],
[
"Brest",
"Janez",
""
]
] |
2107.10083 | Luis Olsina PhD | Luis Olsina, Guido Tebes, Pablo Becker | SituationCO v1.2's Terms, Properties, Relationships and Axioms -- A Core
Ontology for Particular and Generic Situations | 9 pgs | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The current preprint is an update to SituationCO v1.1 (Situation Core
Ontology), which represents its new version 1.2. It specifies and defines all
the terms, properties, relationships and axioms of SituationCO v1.2, being an
ontology for particular and generic Situations placed at the core level in the
context of a four-layered ontological architecture called FCD-OntoArch
(Foundational, Core, and Domain Ontological Architecture for Sciences). This is
a four-layered ontological architecture, which considers Foundational, Core,
Domain and Instance levels. In turn, the domain level is split into two
sub-levels, namely: Top-domain and Low-domain ontological levels. So in fact,
we can consider it to be a five-tier architecture. Ontologies at the same level
can be related to each other, except for the foundational level where only
ThingFO (Thing Foundational Ontology) is found. In addition, ontologies' terms
and relationships at lower levels can be semantically enriched by ontologies'
terms and relationships from the higher levels. Note that both ThingFO and
ontologies at the core level such as SituationCO, ProcessCO, among others, are
domain independent. SituationCO's terms and relationships are specialized
primarily from ThingFO. It also completely reuses terms primarily from
ProcessCO, ProjectCO and GoalCO ontologies. Stereotypes are the used mechanism
for enriching SituationCO terms. Note that at the end of this document, we
address the SituationCO vs. ThingFO non-taxonomic relationship verification
matrix.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2021 13:54:40 GMT"
}
] | 1,626,912,000,000 | [
[
"Olsina",
"Luis",
""
],
[
"Tebes",
"Guido",
""
],
[
"Becker",
"Pablo",
""
]
] |
2107.10390 | Xuan Zhao | Xuan Zhao and Marcos Campos | Reinforcement Learning Agent Training with Goals for Real World Tasks | Accepted to Reinforcement Learning for Real Life (RL4RealLife)
Workshop in the 38th International Conference on Machine Learning | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reinforcement Learning (RL) is a promising approach for solving various
control, optimization, and sequential decision making tasks. However, designing
reward functions for complex tasks (e.g., with multiple objectives and safety
constraints) can be challenging for most users and usually requires multiple
expensive trials (reward function hacking). In this paper we propose a
specification language (Inkling Goal Specification) for complex control and
optimization tasks, which is very close to natural language and allows a
practitioner to focus on problem specification instead of reward function
hacking. The core elements of our framework are: (i) mapping the high level
language to a predicate temporal logic tailored to control and optimization
tasks, (ii) a novel automaton-guided dense reward generation that can be used
to drive RL algorithms, and (iii) a set of performance metrics to assess the
behavior of the system. We include a set of experiments showing that the
proposed method provides great ease of use to specify a wide range of real
world tasks; and that the reward generated is able to drive the policy training
to achieve the specified goal.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2021 23:21:16 GMT"
}
] | 1,626,998,400,000 | [
[
"Zhao",
"Xuan",
""
],
[
"Campos",
"Marcos",
""
]
] |
2107.10715 | Michael Timothy Bennett | Michael Timothy Bennett, Yoshihiro Maruyama | Philosophical Specification of Empathetic Ethical Artificial
Intelligence | To appear in IEEE Transactions in Cognitive and Developmental Systems | IEEE Transactions on Cognitive and Developmental Systems, vol. 14,
no. 2, pp. 292-300, June 2022 | 10.1109/TCDS.2021.3099945 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In order to construct an ethical artificial intelligence (AI) two complex
problems must be overcome. Firstly, humans do not consistently agree on what is
or is not ethical. Second, contemporary AI and machine learning methods tend to
be blunt instruments which either search for solutions within the bounds of
predefined rules, or mimic behaviour. An ethical AI must be capable of
inferring unspoken rules, interpreting nuance and context, possess and be able
to infer intent, and explain not just its actions but its intent. Using
enactivism, semiotics, perceptual symbol systems and symbol emergence, we
specify an agent that learns not just arbitrary relations between signs but
their meaning in terms of the perceptual states of its sensorimotor system.
Subsequently it can learn what is meant by a sentence and infer the intent of
others in terms of its own experiences. It has malleable intent because the
meaning of symbols changes as it learns, and its intent is represented
symbolically as a goal. As such it may learn a concept of what is most likely
to be considered ethical by the majority within a population of humans, which
may then be used as a goal. The meaning of abstract symbols is expressed using
perceptual symbols of raw sensorimotor stimuli as the weakest (consistent with
Ockham's Razor) necessary and sufficient concept, an intensional definition
learned from an ostensive definition, from which the extensional definition or
category of all ethical decisions may be obtained. Because these abstract
symbols are the same for both situation and response, the same symbol is used
when either performing or observing an action. This is akin to mirror neurons
in the human brain. Mirror symbols may allow the agent to empathise, because
its own experiences are associated with the symbol, which is also associated
with the observation of another agent experiencing something that symbol
represents.
| [
{
"version": "v1",
"created": "Thu, 22 Jul 2021 14:37:46 GMT"
}
] | 1,714,435,200,000 | [
[
"Bennett",
"Michael Timothy",
""
],
[
"Maruyama",
"Yoshihiro",
""
]
] |
2107.11150 | Bernd Ludwig | Isabella Kreller and Bernd Ludwig | User Preferences and the Shortest Path | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Indoor navigation systems leverage shortest path algorithms to calculate
routes. In order to define the "shortest path", a cost function has to be
specified based on theories and heuristics in the application domain. For the
domain of indoor routing, we survey theories and criteria identified in the
literature as essential for human path planning. We derive quantitative
definitions and integrate them into a cost function that weights each of the
criteria separately. We then apply an exhaustive grid search to find weights
that lead to an ideal cost function. "Ideal" here is defined as guiding the
algorithm to plan routes that are most similar to those chosen by humans. To
explore which criteria should be taken into account in an improved pathfinding
algorithm, eleven different factors whose favorable impact on route selection
has been established in past research were considered. Each factor was included
separately in the Dijkstra algorithm and the similarity of thus calculated
routes to the actual routes chosen by students at the University of Regensburg
was determined. This allows for a quantitative assessment of the factors'
impact and further constitutes a way to directly compare them. A reduction of
the number of turns, streets, revolving doors, entryways, elevators as well as
the combination of the aforementioned factors was found to have a positive
effect and generate paths that were favored over the shortest path. Turns and
the combination of criteria turned out to be most impactful.
| [
{
"version": "v1",
"created": "Fri, 23 Jul 2021 11:54:15 GMT"
}
] | 1,627,257,600,000 | [
[
"Kreller",
"Isabella",
""
],
[
"Ludwig",
"Bernd",
""
]
] |
2107.11444 | Iou-Jen Liu | Iou-Jen Liu, Unnat Jain, Raymond A. Yeh, Alexander G. Schwing | Cooperative Exploration for Multi-Agent Deep Reinforcement Learning | ICML 2021; Project Page: https://ioujenliu.github.io/CMAE/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exploration is critical for good results in deep reinforcement learning and
has attracted much attention. However, existing multi-agent deep reinforcement
learning algorithms still use mostly noise-based techniques. Very recently,
exploration methods that consider cooperation among multiple agents have been
developed. However, existing methods suffer from a common challenge: agents
struggle to identify states that are worth exploring, and hardly coordinate
exploration efforts toward those states. To address this shortcoming, in this
paper, we propose cooperative multi-agent exploration (CMAE): agents share a
common goal while exploring. The goal is selected from multiple projected state
spaces via a normalized entropy-based technique. Then, agents are trained to
reach this goal in a coordinated manner. We demonstrate that CMAE consistently
outperforms baselines on various tasks, including a sparse-reward version of
the multiple-particle environment (MPE) and the Starcraft multi-agent challenge
(SMAC).
| [
{
"version": "v1",
"created": "Fri, 23 Jul 2021 20:06:32 GMT"
}
] | 1,627,344,000,000 | [
[
"Liu",
"Iou-Jen",
""
],
[
"Jain",
"Unnat",
""
],
[
"Yeh",
"Raymond A.",
""
],
[
"Schwing",
"Alexander G.",
""
]
] |
2107.11838 | Ali Farjami | Ali Farjami | New Algebraic Normative Theories for Ethical and Legal Reasoning in the
LogiKEy Framework | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In order to design and engineer ethical and legal reasoners and responsible
systems, Benzm\"{u}ller, Parent and van der Torre introduced the LogiKEy
methodology, based on the semantical embedding of deontic logics into classic
higher-order logic. This article considerably extends the LogiKEy deontic
logics and dataset using an algebraic approach, and develops a theory of
input/output operations for normative reasoning on top of Boolean algebras.
| [
{
"version": "v1",
"created": "Sun, 25 Jul 2021 16:33:07 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Sep 2021 13:38:44 GMT"
}
] | 1,631,577,600,000 | [
[
"Farjami",
"Ali",
""
]
] |
2107.11927 | Stelios Triantafyllou | Stelios Triantafyllou, Adish Singla, Goran Radanovic | On Blame Attribution for Accountable Multi-Agent Sequential Decision
Making | NeurIPS 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blame attribution is one of the key aspects of accountable decision making,
as it provides means to quantify the responsibility of an agent for a decision
making outcome. In this paper, we study blame attribution in the context of
cooperative multi-agent sequential decision making. As a particular setting of
interest, we focus on cooperative decision making formalized by Multi-Agent
Markov Decision Processes (MMDPs), and we analyze different blame attribution
methods derived from or inspired by existing concepts in cooperative game
theory. We formalize desirable properties of blame attribution in the setting
of interest, and we analyze the relationship between these properties and the
studied blame attribution methods. Interestingly, we show that some of the well
known blame attribution methods, such as Shapley value, are not
performance-incentivizing, while others, such as Banzhaf index, may over-blame
agents. To mitigate these value misalignment and fairness issues, we introduce
a novel blame attribution method, unique in the set of properties it satisfies,
which trades off explanatory power (by under-blaming agents) for the
aforementioned properties. We further show how to account for uncertainty about
agents' decision making policies, and we experimentally: a) validate the
qualitative properties of the studied blame attribution methods, and b) analyze
their robustness to uncertainty.
| [
{
"version": "v1",
"created": "Mon, 26 Jul 2021 02:22:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 12:41:29 GMT"
}
] | 1,643,155,200,000 | [
[
"Triantafyllou",
"Stelios",
""
],
[
"Singla",
"Adish",
""
],
[
"Radanovic",
"Goran",
""
]
] |
2107.11934 | Lingwei Wei | Lingwei Wei, Dou Hu, Wei Zhou, Zhaojuan Yue, Songlin Hu | Towards Propagation Uncertainty: Edge-enhanced Bayesian Graph
Convolutional Networks for Rumor Detection | Accepted by ACL 2021 main conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting rumors on social media is a very critical task with significant
implications for the economy, public health, etc. Previous works generally
capture effective features from texts and the propagation structure. However,
the uncertainty caused by unreliable relations in the propagation structure is
common and inevitable due to wily rumor producers and the limited collection of
spread data. Most approaches neglect this uncertainty and may seriously limit the
learning of features. To address this issue, this paper makes the first attempt to explore
propagation uncertainty for rumor detection. Specifically, we propose a novel
Edge-enhanced Bayesian Graph Convolutional Network (EBGCN) to capture robust
structural features. The model adaptively rethinks the reliability of latent
relations by adopting a Bayesian approach. Besides, we design a new edge-wise
consistency training framework to optimize the model by enforcing consistency
on relations. Experiments on three public benchmark datasets demonstrate that
the proposed model achieves better performance than baseline methods on both
rumor detection and early rumor detection tasks.
| [
{
"version": "v1",
"created": "Mon, 26 Jul 2021 03:07:07 GMT"
}
] | 1,627,344,000,000 | [
[
"Wei",
"Lingwei",
""
],
[
"Hu",
"Dou",
""
],
[
"Zhou",
"Wei",
""
],
[
"Yue",
"Zhaojuan",
""
],
[
"Hu",
"Songlin",
""
]
] |
2107.11965 | Elif Surer | Sinan Ariyurek, Elif Surer, Aysu Betin-Can | Playtesting: What is Beyond Personas | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Playtesting is an essential step in the game design process. Game designers
use the feedback from playtests to refine their designs. Game designers may
employ procedural personas to automate the playtesting process. In this paper,
we present two approaches to improve automated playtesting. First, we propose
the developing persona, which allows a persona to progress to different goals. In
contrast, the procedural persona is fixed to a single goal. Second, a human
playtester knows which paths she has tested before, and during subsequent
tests, she may test different paths. However, Reinforcement Learning (RL)
agents disregard these previous paths. We propose a novel methodology that we
refer to as Alternative Path Finder (APF). We train APF with previous paths and
employ APF during the training of an RL agent. APF modulates the reward
structure of the environment while preserving the agent's goal. When evaluated,
the agent generates a different trajectory that achieves the same goal. We use
the General Video Game Artificial Intelligence (GVG-AI) and VizDoom frameworks
to test our proposed methodologies. We use a Proximal Policy Optimization (PPO)
RL agent during experiments. First, we compare the playtest data generated by
developing and procedural personas. Our experiments show that the developing persona
provides better insight into the game and into how different players would play.
Second, we present the alternative paths found using APF and argue why
traditional RL agents cannot learn those paths.
| [
{
"version": "v1",
"created": "Mon, 26 Jul 2021 05:23:45 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 16:51:08 GMT"
}
] | 1,649,289,600,000 | [
[
"Ariyurek",
"Sinan",
""
],
[
"Surer",
"Elif",
""
],
[
"Betin-Can",
"Aysu",
""
]
] |
2107.12130 | Alessandro Antonucci | Alessandro Antonucci and Alessandro Facchini and Lilith Mattei | Structural Learning of Probabilistic Sentential Decision Diagrams under
Partial Closed-World Assumption | null | 4th Workshop on Tractable Probabilistic Modeling (TPM 2021) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic sentential decision diagrams are a class of
structured-decomposable probabilistic circuits especially designed to embed
logical constraints. To adapt the classical LearnSPN scheme to learn the
structure of these models, we propose a new scheme based on a partial
closed-world assumption: data implicitly provide the logical base of the
circuit. Sum nodes are thus learned by recursively clustering batches in the
initial database, while the partitioning of the variables obeys a given input
vtree. Preliminary experiments show that the proposed approach might properly
fit training data, and generalize well to test data, provided that these remain
consistent with the underlying logical base, which is a relaxation of the
training database.
| [
{
"version": "v1",
"created": "Mon, 26 Jul 2021 12:01:56 GMT"
}
] | 1,627,344,000,000 | [
[
"Antonucci",
"Alessandro",
""
],
[
"Facchini",
"Alessandro",
""
],
[
"Mattei",
"Lilith",
""
]
] |
2107.12178 | Nidhika Yadav | Nidhika Yadav | Novel Span Measure, Spanning Sets and Applications | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Rough Set based Spanning Sets were recently proposed to deal with
uncertainties arising in problems in the domain of natural language processing.
This paper presents a novel span measure using upper approximations. The key
contribution of this paper is to propose another uncertainty measure for span
and spanning sets. Firstly, this paper proposes a new definition for computing
span which uses the upper approximation instead of the boundary region. This is
useful in situations where computing upper approximations is much more
convenient than computing boundary regions. Secondly, properties of the novel
span and its relation to the earlier span measure are discussed. Thirdly, the
paper presents application areas where the proposed span measure can be utilized.
| [
{
"version": "v1",
"created": "Thu, 22 Jul 2021 20:20:19 GMT"
}
] | 1,627,344,000,000 | [
[
"Yadav",
"Nidhika",
""
]
] |
2107.12477 | Nidhika Yadav | Nidhika Yadav | Decision Making Using Rough Set based Spanning Sets for a Decision
System | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Rough Set based concepts of Span and Spanning Sets were recently proposed to
deal with uncertainties in data. Here, this paper presents novel concepts for a
generic decision-making process using the Rough Set based span for a decision
table. The majority of problems in Artificial Intelligence deal with decision
making. This paper provides real-life applications of the proposed Rough Set based
span for decision tables. A novel concept of span for a decision table is
proposed and illustrated with the real-life example of flood relief and rescue team
assignment. Its uses, applications and properties are explored. The key
contribution of the paper is primarily to study decision making using the Rough Set
based span for decision tables, as against an information system in prior
works. The main contribution is that decision classes are automatically
learned by the Rough Set based span technique for a particular problem,
hence automating the decision-making process. These decision-making tools based
on span can guide an expert in taking decisions in tough and time-bound
situations.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2021 20:58:52 GMT"
}
] | 1,627,430,400,000 | [
[
"Yadav",
"Nidhika",
""
]
] |
2107.12544 | Pedro Tsividis | Pedro A. Tsividis, Joao Loula, Jake Burga, Nathan Foss, Andres
Campero, Thomas Pouncy, Samuel J. Gershman, Joshua B. Tenenbaum | Human-Level Reinforcement Learning through Theory-Based Modeling,
Exploration, and Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) studies how an agent comes to achieve reward in
an environment through interactions over time. Recent advances in machine RL
have surpassed human expertise at the world's oldest board games and many
classic video games, but they require vast quantities of experience to learn
successfully -- none of today's algorithms account for the human ability to
learn so many different tasks, so quickly. Here we propose a new approach to
this challenge based on a particularly strong form of model-based RL which we
call Theory-Based Reinforcement Learning, because it uses human-like intuitive
theories -- rich, abstract, causal models of physical objects, intentional
agents, and their interactions -- to explore and model an environment, and plan
effectively to achieve task goals. We instantiate the approach in a video game
playing agent called EMPA (the Exploring, Modeling, and Planning Agent), which
performs Bayesian inference to learn probabilistic generative models expressed
as programs for a game-engine simulator, and runs internal simulations over
these models to support efficient object-based, relational exploration and
heuristic planning. EMPA closely matches human learning efficiency on a suite
of 90 challenging Atari-style video games, learning new games in just minutes
of game play and generalizing robustly to new game situations and new levels.
The model also captures fine-grained structure in people's exploration
trajectories and learning dynamics. Its design and behavior suggest a way
forward for building more general human-like AI systems.
| [
{
"version": "v1",
"created": "Tue, 27 Jul 2021 01:38:13 GMT"
}
] | 1,627,430,400,000 | [
[
"Tsividis",
"Pedro A.",
""
],
[
"Loula",
"Joao",
""
],
[
"Burga",
"Jake",
""
],
[
"Foss",
"Nathan",
""
],
[
"Campero",
"Andres",
""
],
[
"Pouncy",
"Thomas",
""
],
[
"Gershman",
"Samuel J.",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] |
2107.12595 | Daping Zhang | Daping Zhang, Xin Chen, Yujia Zhang, Shihan Qin | Template-based Chatbot for Agriculture Related FAQs | we need to make some revisions about the project to improve a bit | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agriculture is the fundamental industry of the society, which is the basis of
food supply and an important source of employment and GDP growth. However,
the number of available experts is insufficient to meet the demand of farmers. To address this
problem, we design a chatbot to answer frequently asked questions in the
agriculture field. Template-based questions will be answered by AIML, while LSA
is used for other service-based questions. This chatbot will assist farmers by
handling industry problems conveniently and efficiently.
| [
{
"version": "v1",
"created": "Tue, 27 Jul 2021 04:46:29 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jul 2021 03:37:28 GMT"
}
] | 1,627,862,400,000 | [
[
"Zhang",
"Daping",
""
],
[
"Chen",
"Xin",
""
],
[
"Zhang",
"Yujia",
""
],
[
"Qin",
"Shihan",
""
]
] |
2107.12851 | Shiqi Zhang | Hao Yang and Tavan Eftekhar and Chad Esselink and Yan Ding and Shiqi
Zhang | Task and Situation Structures for Service Agent Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Everyday tasks are characterized by their varieties and variations, and
frequently are not clearly specified to service agents. This paper presents a
comprehensive approach to enable a service agent to deal with everyday tasks in
open, uncontrolled environments. We introduce a generic structure for
representing tasks, and another structure for representing situations. Based on
the two newly introduced structures, we present a methodology of situation
handling that avoids hard-coding domain rules while improving the scalability
of real-world task planning systems.
| [
{
"version": "v1",
"created": "Tue, 27 Jul 2021 14:33:49 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Aug 2021 00:20:29 GMT"
}
] | 1,627,948,800,000 | [
[
"Yang",
"Hao",
""
],
[
"Eftekhar",
"Tavan",
""
],
[
"Esselink",
"Chad",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhang",
"Shiqi",
""
]
] |
2107.12877 | Anni-Yasmin Turhan | Franz Baader, Patrick Koopmann, Friedrich Michel, Anni-Yasmin Turhan,
Benjamin Zarrie{\ss} | Efficient TBox Reasoning with Value Restrictions using the
$\mathcal{FL}_{o}$wer reasoner | This paper is under consideration in Theory and Practice of Logic
Programming (TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inexpressive Description Logic (DL) $\mathcal{FL}_0$, which has
conjunction and value restriction as its only concept constructors, had fallen
into disrepute when it turned out that reasoning in $\mathcal{FL}_0$ w.r.t.
general TBoxes is ExpTime-complete, i.e., as hard as in the considerably more
expressive logic $\mathcal{ALC}$. In this paper, we rehabilitate
$\mathcal{FL}_0$ by presenting a dedicated subsumption algorithm for
$\mathcal{FL}_0$, which is much simpler than the tableau-based algorithms
employed by highly optimized DL reasoners. Our experiments show that the
performance of our novel algorithm, as prototypically implemented in our
$\mathcal{FL}_o$wer reasoner, compares very well with that of the highly
optimized reasoners. $\mathcal{FL}_o$wer can also deal with ontologies written
in the extension $\mathcal{FL}_{\bot}$ of $\mathcal{FL}_0$ with the top and the
bottom concept by employing a polynomial-time reduction, shown in this paper,
which eliminates top and bottom. We also investigate the complexity of
reasoning in DLs related to the Horn-fragments of $\mathcal{FL}_0$ and
$\mathcal{FL}_{\bot}$.
| [
{
"version": "v1",
"created": "Tue, 27 Jul 2021 15:20:53 GMT"
}
] | 1,627,430,400,000 | [
[
"Baader",
"Franz",
""
],
[
"Koopmann",
"Patrick",
""
],
[
"Michel",
"Friedrich",
""
],
[
"Turhan",
"Anni-Yasmin",
""
],
[
"Zarrieß",
"Benjamin",
""
]
] |
2107.13085 | Romain Wallon | Romain Wallon | On Improving the Backjump Level in PB Solvers | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current PB solvers implement many techniques inspired by the CDCL
architecture of modern SAT solvers, so as to benefit from its practical
efficiency. However, they also need to deal with the fact that many of the
properties leveraged by this architecture are no longer true when considering
PB constraints. In this paper, we focus on one of these properties, namely the
optimality of the so-called first unique implication point (1-UIP). While it is
well known that learning the first assertive clause produced during conflict
analysis guarantees the highest possible backjump in a SAT solver, we
show that there is no such guarantee in the presence of PB constraints. We also
introduce and evaluate different approaches designed to improve the backjump
level identified during conflict analysis by allowing the analysis to continue
after reaching the 1-UIP. Our experiments show that sub-optimal backjumps are
fairly common in PB solvers, even though their impact on the solver is not
clear.
| [
{
"version": "v1",
"created": "Tue, 27 Jul 2021 21:23:03 GMT"
}
] | 1,627,516,800,000 | [
[
"Wallon",
"Romain",
""
]
] |
2107.13179 | Bing Huang | Bing Huang, Hai Dong, Athman Bouguettaya | Conflict Detection in IoT-based Smart Homes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel framework that detects conflicts in IoT-based smart homes.
Conflicts may arise during interactions between the resident and IoT services
in smart homes. We propose a generic knowledge graph to represent the relations
between IoT services and environment entities. We also profile a generic
knowledge graph to a specific smart home setting based on the context
information. We propose a conflict taxonomy to capture different types of
conflicts in a single resident smart home setting. A conflict detection
algorithm is proposed to identify potential conflicts using the profiled
knowledge graph. We conduct a set of experiments on real datasets and
synthesized datasets to validate the effectiveness and efficiency of our
proposed approach.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 06:09:02 GMT"
}
] | 1,627,516,800,000 | [
[
"Huang",
"Bing",
""
],
[
"Dong",
"Hai",
""
],
[
"Bouguettaya",
"Athman",
""
]
] |
2107.13181 | Xuan Mai | Xuan Mai, Quanzhi Fu, Yi Chen | Packet Routing with Graph Attention Multi-agent Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Packet routing is a fundamental problem in communication networks that
decides how the packets are directed from their source nodes to their
destination nodes through some intermediate nodes. With the increasing
complexity of network topology and highly dynamic traffic demand, conventional
model-based and rule-based routing schemes show significant limitations, due to
the simplified and unrealistic model assumptions, and lack of flexibility and
adaption. Adding intelligence to the network control is becoming a trend and
the key to achieving high-efficiency network operation. In this paper, we
develop a model-free and data-driven routing strategy by leveraging
reinforcement learning (RL), where routers interact with the network and learn
from the experience to make some good routing configurations for the future.
Considering the graph nature of the network topology, we design a multi-agent
RL framework in combination with Graph Neural Network (GNN), tailored to the
routing problem. Three deployment paradigms, centralized, federated, and
cooperated learning, are explored respectively. Simulation results demonstrate
that our algorithm outperforms some existing benchmark algorithms in terms of
packet transmission delay and affordable load.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 06:20:34 GMT"
}
] | 1,627,516,800,000 | [
[
"Mai",
"Xuan",
""
],
[
"Fu",
"Quanzhi",
""
],
[
"Chen",
"Yi",
""
]
] |
2107.13306 | Hongyu He | Benno Kruit, Hongyu He, Jacopo Urbani | Tab2Know: Building a Knowledge Base from Tables in Scientific Papers | 17 pages, 4 figures, conference: The Semantic Web -- ISWC 2020 | International Semantic Web Conference 2020 Nov 2 (pp. 349-365) | 10.1007/978-3-030-62419-4_20 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tables in scientific papers contain a wealth of valuable knowledge for the
scientific enterprise. To help the many of us who frequently consult this type
of knowledge, we present Tab2Know, a new end-to-end system to build a Knowledge
Base (KB) from tables in scientific papers. Tab2Know addresses the challenge of
automatically interpreting the tables in papers and of disambiguating the
entities that they contain. To solve these problems, we propose a pipeline that
employs both statistical-based classifiers and logic-based reasoning. First,
our pipeline applies weakly supervised classifiers to recognize the type of
tables and columns, with the help of a data labeling system and an ontology
specifically designed for our purpose. Then, logic-based reasoning is used to
link equivalent entities (via sameAs links) in different tables. An empirical
evaluation of our approach using a corpus of papers in the Computer Science
domain has returned satisfactory performance. This suggests that ours is a
promising step to create a large-scale KB of scientific knowledge.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 11:56:53 GMT"
}
] | 1,627,516,800,000 | [
[
"Kruit",
"Benno",
""
],
[
"He",
"Hongyu",
""
],
[
"Urbani",
"Jacopo",
""
]
] |
2107.13435 | Zhenwen Liang | Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao,
Xiangliang Zhang | MWP-BERT: Numeracy-Augmented Pre-training for Math Word Problem Solving | Accepted by the Findings of NAACL 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Math word problem (MWP) solving faces a dilemma in number representation
learning. In order to avoid the number representation issue and reduce the
search space of feasible solutions, existing works striving for MWP solving
usually replace real numbers with symbolic placeholders to focus on logic
reasoning. However, different from common symbolic reasoning tasks like program
synthesis and knowledge graph reasoning, MWP solving has extra requirements in
numerical reasoning. In other words, instead of the number value itself, it is
the reusable numerical property that matters more in numerical reasoning.
Therefore, we argue that injecting numerical properties into symbolic
placeholders with contextualized representation learning schema can provide a
way out of the dilemma in the number representation issue here. In this work,
we introduce this idea to the popular pre-training language model (PLM)
techniques and build MWP-BERT, an effective contextual number representation
PLM. We demonstrate the effectiveness of our MWP-BERT on MWP solving and
several MWP-specific understanding tasks on both English and Chinese
benchmarks.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 15:28:41 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 16:19:25 GMT"
}
] | 1,652,313,600,000 | [
[
"Liang",
"Zhenwen",
""
],
[
"Zhang",
"Jipeng",
""
],
[
"Wang",
"Lei",
""
],
[
"Qin",
"Wei",
""
],
[
"Lan",
"Yunshi",
""
],
[
"Shao",
"Jie",
""
],
[
"Zhang",
"Xiangliang",
""
]
] |
2107.13454 | Vince Istvan Madai | Vince I. Madai and David C. Higgins | Artificial Intelligence in Healthcare: Lost In Translation? | 10 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Artificial intelligence (AI) in healthcare is a potentially revolutionary
tool to achieve improved healthcare outcomes while reducing overall health
costs. While many exploratory results have hit the headlines in recent years, there
are only a few certified and even fewer clinically validated products available
in the clinical setting. This is a clear indication of failing translation due
to shortcomings of the current approach to AI in healthcare. In this work, we
highlight the major areas where we observe current challenges for translation
in AI in healthcare, namely precision medicine, reproducible science, data
issues and algorithms, causality, and product development. For each field, we
outline possible solutions for these challenges. Our work will lead to improved
translation of AI in healthcare products into the clinical setting.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 16:10:40 GMT"
}
] | 1,627,516,800,000 | [
[
"Madai",
"Vince I.",
""
],
[
"Higgins",
"David C.",
""
]
] |
2107.13668 | Pulkit Verma | Pulkit Verma, Shashank Rao Marpally, Siddharth Srivastava | Discovering User-Interpretable Capabilities of Black-Box Planning Agents | KR 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several approaches have been developed for answering users' specific
questions about AI behavior and for assessing their core functionality in terms
of primitive executable actions. However, the problem of summarizing an AI
agent's broad capabilities for a user is comparatively new. This paper presents
an algorithm for discovering from scratch the suite of high-level
"capabilities" that an AI system with arbitrary internal planning
algorithms/policies can perform. It computes conditions describing the
applicability and effects of these capabilities in user-interpretable terms.
Starting from a set of user-interpretable state properties, an AI agent, and a
simulator that the agent can interact with, our algorithm returns a set of
high-level capabilities with their parameterized descriptions. Empirical
evaluation on several game-based scenarios shows that this approach efficiently
learns descriptions of various types of AI agents in deterministic, fully
observable settings. User studies show that such descriptions are easier to
understand and reason with than the agent's primitive actions.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 23:33:31 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jan 2022 09:16:22 GMT"
},
{
"version": "v3",
"created": "Mon, 30 May 2022 09:37:03 GMT"
}
] | 1,653,955,200,000 | [
[
"Verma",
"Pulkit",
""
],
[
"Marpally",
"Shashank Rao",
""
],
[
"Srivastava",
"Siddharth",
""
]
] |
2107.13669 | Soujanya Poria | Wei Han, Hui Chen, Alexander Gelbukh, Amir Zadeh, Louis-philippe
Morency, and Soujanya Poria | Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal
Sentiment Analysis | Accepted at ICMI 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Multimodal sentiment analysis aims to extract and integrate semantic
information collected from multiple modalities to recognize the expressed
emotions and sentiment in multimodal data. This research area's major concern
lies in developing an extraordinary fusion scheme that can extract and
integrate key information from various modalities. However, one issue that may
restrict previous work from achieving a higher level is the lack of proper modeling
for the dynamics of the competition between the independence and relevance
among modalities, which could deteriorate fusion outcomes by causing the
collapse of modality-specific feature space or introducing extra noise. To
mitigate this, we propose the Bi-Bimodal Fusion Network (BBFN), a novel
end-to-end network that performs fusion (relevance increment) and separation
(difference increment) on pairwise modality representations. The two parts are
trained simultaneously such that the combat between them is simulated. The
model takes two bimodal pairs as input due to the known information imbalance
among modalities. In addition, we leverage a gated control mechanism in the
Transformer architecture to further improve the final output. Experimental
results on three datasets (CMU-MOSI, CMU-MOSEI, and UR-FUNNY) verify that our
model significantly outperforms the SOTA. The implementation of this work is
available at https://github.com/declare-lab/multimodal-deep-learning.
| [
{
"version": "v1",
"created": "Wed, 28 Jul 2021 23:33:42 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Aug 2021 04:43:57 GMT"
}
] | 1,630,368,000,000 | [
[
"Han",
"Wei",
""
],
[
"Chen",
"Hui",
""
],
[
"Gelbukh",
"Alexander",
""
],
[
"Zadeh",
"Amir",
""
],
[
"Morency",
"Louis-philippe",
""
],
[
"Poria",
"Soujanya",
""
]
] |
2107.13684 | Shuangyong Song | Shuangyong Song | An Online Question Answering System based on Sub-graph Searching | 4 pages, 3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs (KGs) have been widely used for question answering (QA)
applications, especially entity-based QA. However, searching for answers over
an entire large-scale knowledge graph is very time-consuming, and it is hard to
meet the speed requirements of real online QA systems. In this paper, we design a
sub-graph searching mechanism to solve this problem by creating a sub-graph
index, and each answer generation step is restricted to the sub-graph level. We
apply this mechanism in a real online QA chat system, where it brings an obvious
improvement in question coverage by answering entity-based questions well, and
it runs at very high speed, which ensures the user experience of online
QA.
| [
{
"version": "v1",
"created": "Thu, 29 Jul 2021 00:44:58 GMT"
}
] | 1,627,603,200,000 | [
[
"Song",
"Shuangyong",
""
]
] |
2107.13977 | Martin Zaefferer | J\"org Stork, Philip Wenzel, Severin Landwein, Maria-Elena Algorri,
Martin Zaefferer, Wolfgang Kusch, Martin Staubach, Thomas Bartz-Beielstein,
Hartmut K\"ohn, Hermann Dejager, Christian Wolf | Underwater Acoustic Networks for Security Risk Assessment in Public
Drinking Water Reservoirs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have built a novel system for the surveillance of drinking water
reservoirs using underwater sensor networks. We implement an innovative
AI-based approach to detect, classify and localize underwater events. In this
paper, we describe the technology and cognitive AI architecture of the system
based on one of the sensor networks, the hydrophone network. We discuss the
challenges of installing and using the hydrophone network in a water reservoir
where traffic, visitors, and variable water conditions create a complex,
varying environment. Our AI solution uses an autoencoder for unsupervised
learning of latent encodings for classification and anomaly detection, and time
delay estimates for sound localization. Finally, we present the results of
experiments carried out in a laboratory pool and the water reservoir and
discuss the system's potential.
| [
{
"version": "v1",
"created": "Thu, 29 Jul 2021 14:02:51 GMT"
}
] | 1,627,603,200,000 | [
[
"Stork",
"Jörg",
""
],
[
"Wenzel",
"Philip",
""
],
[
"Landwein",
"Severin",
""
],
[
"Algorri",
"Maria-Elena",
""
],
[
"Zaefferer",
"Martin",
""
],
[
"Kusch",
"Wolfgang",
""
],
[
"Staubach",
"Martin",
""
],
[
"Bartz-Beielstein",
"Thomas",
""
],
[
"Köhn",
"Hartmut",
""
],
[
"Dejager",
"Hermann",
""
],
[
"Wolf",
"Christian",
""
]
] |
2107.14000 | Luyu Qiu | Luyu Qiu, Yi Yang, Caleb Chen Cao, Jing Liu, Yueyuan Zheng, Hilary Hei
Ting Ngai, Janet Hsiao, Lei Chen | Resisting Out-of-Distribution Data Problem in Perturbation of XAI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of eXplainable Artificial Intelligence (XAI),
perturbation-based XAI algorithms have become quite popular due to their
effectiveness and ease of implementation. The vast majority of
perturbation-based XAI techniques face the challenge of Out-of-Distribution
(OoD) data -- an artifact of randomly perturbed data becoming inconsistent with
the original dataset. OoD data leads to the over-confidence problem in model
predictions, making the existing XAI approaches unreliable. To the best of our
knowledge, the OoD data problem in perturbation-based XAI algorithms has not
been adequately addressed in the literature. In this work, we address this OoD
data problem by designing an additional module quantifying the affinity between
the perturbed data and the original dataset distribution, which is integrated
into the process of aggregation. Our solution is shown to be compatible with
the most popular perturbation-based XAI algorithms, such as RISE, OCCLUSION,
and LIME. Experiments have confirmed that our methods demonstrate a significant
improvement in general cases using both computational and cognitive metrics.
Especially in the case of degradation, our proposed approach demonstrates
outstanding performance compared to baselines. Besides, our solution also
resolves a fundamental problem with the faithfulness indicator, a commonly used
evaluation metric of XAI algorithms that appears to be sensitive to the OoD
issue.
| [
{
"version": "v1",
"created": "Tue, 27 Jul 2021 08:29:46 GMT"
}
] | 1,627,603,200,000 | [
[
"Qiu",
"Luyu",
""
],
[
"Yang",
"Yi",
""
],
[
"Cao",
"Caleb Chen",
""
],
[
"Liu",
"Jing",
""
],
[
"Zheng",
"Yueyuan",
""
],
[
"Ngai",
"Hilary Hei Ting",
""
],
[
"Hsiao",
"Janet",
""
],
[
"Chen",
"Lei",
""
]
] |
2107.14199 | Hritam Basak | Hritam Basak, Mayukhmali Das, Susmita Modak | RSO: A Novel Reinforced Swarm Optimization Algorithm for Feature
Selection | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Swarm optimization algorithms are widely used for feature selection before
data mining and machine learning applications. The metaheuristic
nature-inspired feature selection approaches are used for single-objective
optimization tasks, though the major problem is their frequent premature
convergence, leading to weak contribution to data mining. In this paper, we
propose a novel feature selection algorithm named Reinforced Swarm Optimization
(RSO) that addresses some of the existing problems in feature selection. This
algorithm embeds the widely used Bee Swarm Optimization (BSO) algorithm along
with Reinforcement Learning (RL) to maximize the reward of a superior search
agent and punish the inferior ones. This hybrid optimization algorithm is more
adaptive and robust with a good balance between exploitation and exploration of
the search space. The proposed method is evaluated on 25 widely known UCI
datasets containing a perfect blend of balanced and imbalanced data. The
obtained results are compared with several other popular and recent feature
selection algorithms with similar classifier configurations. The experimental
outcome shows that our proposed model outperforms BSO in 22 out of 25 instances
(88%). Moreover, experimental results also show that RSO performs the best
among all the methods compared in this paper in 19 out of 25 cases (76%),
establishing the superiority of our proposed method.
| [
{
"version": "v1",
"created": "Thu, 29 Jul 2021 17:38:04 GMT"
}
] | 1,627,603,200,000 | [
[
"Basak",
"Hritam",
""
],
[
"Das",
"Mayukhmali",
""
],
[
"Modak",
"Susmita",
""
]
] |
2107.14374 | Chinnaiyan Ramasubramanian | Swarnamugi.M and Chinnaiyan.R | Modelling and Reasoning Techniques for Context Aware Computing in
Intelligent Transportation System | 18 pages,3 figures , 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The emergence of Internet of Things technology and recent advancement in
sensor networks have elevated transportation systems to a new dimension called
the Intelligent Transportation System. Due to the increased usage of vehicles and
communication among entities in road traffic scenarios, the amount of raw data
generated in an Intelligent Transportation System is huge. This raw data must
be processed to infer contextual information and provide new services related
to different modes of road transport, such as traffic signal management,
accident prediction, object detection, etc. To understand the importance of
context, this article studies context awareness in the Intelligent
Transportation System. We present a review of prominent applications developed
in the literature concerning context awareness in the Intelligent
Transportation System. The objective of this research paper is to highlight
context and its features in ITS and to address the applicability of modelling
techniques and reasoning approaches in the Intelligent Transportation System. We
also shed light on the impact of the Internet of Things and machine learning on
Intelligent Transportation System development.
| [
{
"version": "v1",
"created": "Thu, 29 Jul 2021 23:47:52 GMT"
}
] | 1,627,862,400,000 | [
[
"M",
"Swarnamugi.",
""
],
[
"R",
"Chinnaiyan.",
""
]
] |
2107.14654 | Dewei Yi | Hasan Bayarov Ahmedov, Dewei Yi, Jie Sui | Brain-Inspired Deep Imitation Learning for Autonomous Driving Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving has attracted great attention from both academics and
industries. To realise autonomous driving, Deep Imitation Learning (DIL) is
treated as one of the most promising solutions, because it improves autonomous
driving systems by automatically learning a complex mapping from human driving
data, compared to manually designing the driving policy. However, existing DIL
methods cannot generalise well across domains, that is, a network trained on
the data of the source domain gives rise to poor generalisation on the data of
the target domain. In the present study, we propose a novel brain-inspired deep
imitation method that builds on the evidence from human brain functions, to
improve the generalisation ability of deep neural networks so that autonomous
driving systems can perform well in various scenarios. Specifically, humans
have a strong generalisation ability which benefits from the structural
and functional asymmetry of the two sides of the brain. Here, we design dual
Neural Circuit Policy (NCP) architectures in deep neural networks based on the
asymmetry of human neural networks. Experimental results demonstrate that our
brain-inspired method outperforms existing methods regarding generalisation
when dealing with unseen data. Our source codes and pretrained models are
available at
https://github.com/Intenzo21/Brain-Inspired-Deep-Imitation-Learning-for-Autonomous-Driving-Systems}{https://github.com/Intenzo21/Brain-Inspired-Deep-Imitation-Learning-for-Autonomous-Driving-Systems.
| [
{
"version": "v1",
"created": "Fri, 30 Jul 2021 14:21:46 GMT"
}
] | 1,627,862,400,000 | [
[
"Ahmedov",
"Hasan Bayarov",
""
],
[
"Yi",
"Dewei",
""
],
[
"Sui",
"Jie",
""
]
] |
2108.00633 | Buser Say | Buser Say, Scott Sanner, Jo Devriendt, Jakob Nordstr\"om, Peter J.
Stuckey | Planning with Learned Binarized Neural Networks Benchmarks for MaxSAT
Evaluation 2021 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This document provides a brief introduction to learned automated planning
problem where the state transition function is in the form of a binarized
neural network (BNN), presents a general MaxSAT encoding for this problem, and
describes the four domains, namely: Navigation, Inventory Control, System
Administrator and Cellda, that are submitted as benchmarks for MaxSAT
Evaluation 2021.
| [
{
"version": "v1",
"created": "Mon, 2 Aug 2021 04:49:38 GMT"
}
] | 1,627,948,800,000 | [
[
"Say",
"Buser",
""
],
[
"Sanner",
"Scott",
""
],
[
"Devriendt",
"Jo",
""
],
[
"Nordström",
"Jakob",
""
],
[
"Stuckey",
"Peter J.",
""
]
] |
2108.02816 | Luis Olsina PhD | Pablo Becker and Luis Olsina | ProcessCO v1.3's Terms, Properties, Relationships and Axioms - A Core
Ontology for Processes | 12 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The present preprint specifies and defines all Terms, Properties,
Relationships and Axioms of ProcessCO (Process Core Ontology). ProcessCO is an
ontology devoted mainly to Work Entities and related terms, which is placed at
the core level in the context of a multilayer ontological architecture called
FCD-OntoArch (Foundational, Core, and Domain Ontological Architecture for
Sciences). This is a five-layered ontological architecture, which considers
Foundational, Core, Domain and Instance levels, where the domain level is split
into two sub-levels, namely: Top-domain and Low-domain. Ontologies at the
same level can be related to each other, except for the foundational level
where only ThingFO (Thing Foundational Ontology) is found. In addition,
ontologies' terms and relationships at lower levels can be semantically
enriched by ontologies' terms and relationships from the higher levels. Note
that both ThingFO and ontologies at the core level such as ProcessCO,
SituationCO, among others, are domain independent with respect to their terms.
Stereotypes are the mechanism used for enriching ProcessCO terms mainly from
the ThingFO ontology. Note that at the end of this document, we address the
ProcessCO vs. ThingFO non-taxonomic relationship verification matrix.
Additionally, note that annotations of updates from the previous version
(ProcessCO v1.2) to the current one (v1.3) can be found in Appendix A. For
instance, 6 axioms were added.
| [
{
"version": "v1",
"created": "Thu, 5 Aug 2021 19:03:59 GMT"
}
] | 1,628,467,200,000 | [
[
"Becker",
"Pablo",
""
],
[
"Olsina",
"Luis",
""
]
] |
2108.03033 | Riccardo Zese | Elena Bellodi, Marco Gavanelli, Riccardo Zese, Evelina Lamma, Fabrizio
Riguzzi | Nonground Abductive Logic Programming with Probabilistic Integrity
Constraints | Paper presented at the 37th International Conference on Logic
Programming (ICLP 2021), 16 pages | Theory and Practice of Logic Programming, 21(5), 557-574, 2021 | 10.1017/S1471068421000417 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Uncertain information is being taken into account in an increasing number of
application fields. In the meantime, abduction has been proved a powerful tool
for handling hypothetical reasoning and incomplete knowledge. Probabilistic
logical models are a suitable framework to handle uncertain information, and in
the last decade many probabilistic logical languages have been proposed, as
well as inference and learning systems for them. In the realm of Abductive
Logic Programming (ALP), a variety of proof procedures have been defined as
well. In this paper, we consider a richer logic language, coping with
probabilistic abduction with variables. In particular, we consider an ALP
program enriched with integrity constraints `a la IFF, possibly annotated with
a probability value. We first present the overall abductive language, and its
semantics according to the Distribution Semantics. We then introduce a proof
procedure, obtained by extending one previously presented, and prove its
soundness and completeness.
| [
{
"version": "v1",
"created": "Fri, 6 Aug 2021 10:22:12 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 14:22:05 GMT"
}
] | 1,643,932,800,000 | [
[
"Bellodi",
"Elena",
""
],
[
"Gavanelli",
"Marco",
""
],
[
"Zese",
"Riccardo",
""
],
[
"Lamma",
"Evelina",
""
],
[
"Riguzzi",
"Fabrizio",
""
]
] |
2108.03294 | Fitzroy Nembhard | Fitzroy D. Nembhard, Marco M. Carvalho | A Smart and Defensive Human-Machine Approach to Code Analysis | Presented at 1st International Workshop on Adaptive Cyber Defense,
2021 (arXiv:2108.08476) | null | null | IJCAI-ACD/2021/122 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Static analysis remains one of the most popular approaches for detecting and
correcting poor or vulnerable program code. It involves the examination of code
listings, test results, or other documentation to identify errors, violations
of development standards, or other problems, with the ultimate goal of fixing
these errors so that systems and software are as secure as possible. There
exists a plethora of static analysis tools, which makes it challenging for
businesses and programmers to select a tool to analyze their program code. It
is imperative to find ways to improve code analysis so that it can be employed
by cyber defenders to mitigate security risks. In this research, we propose a
method that employs the use of virtual assistants to work with programmers to
ensure that software is as safe as possible in order to protect
safety-critical systems from data breaches and other attacks. The proposed
method employs a recommender system that uses various metrics to help
programmers select the most appropriate code analysis tool for their project
and guides them through the analysis process. The system further tracks the
user's behavior regarding the adoption of the recommended practices.
| [
{
"version": "v1",
"created": "Fri, 6 Aug 2021 20:42:07 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Aug 2021 12:16:05 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Aug 2021 15:15:19 GMT"
}
] | 1,630,022,400,000 | [
[
"Nembhard",
"Fitzroy D.",
""
],
[
"Carvalho",
"Marco M.",
""
]
] |
2108.03319 | Iou-Jen Liu | Iou-Jen Liu, Zhongzheng Ren, Raymond A. Yeh, Alexander G. Schwing | Semantic Tracklets: An Object-Centric Representation for Visual
Multi-Agent Reinforcement Learning | IROS 2021; Project page:
https://ioujenliu.github.io/SemanticTracklets/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving complex real-world tasks, e.g., autonomous fleet control, often
involves a coordinated team of multiple agents which learn strategies from
visual inputs via reinforcement learning. Many existing multi-agent
reinforcement learning (MARL) algorithms however don't scale to environments
where agents operate on visual inputs. To address this issue, algorithmically,
recent works have focused on non-stationarity and exploration. In contrast, we
study whether scalability can also be achieved via a disentangled
representation. For this, we explicitly construct an object-centric
intermediate representation to characterize the states of an environment, which
we refer to as `semantic tracklets.' We evaluate `semantic tracklets' on the
visual multi-agent particle environment (VMPE) and on the challenging visual
multi-agent GFootball environment. `Semantic tracklets' consistently outperform
baselines on VMPE, and achieve a +2.4 higher score difference than baselines on
GFootball. Notably, this method is the first to successfully learn a strategy
for five players in the GFootball environment using only visual data.
| [
{
"version": "v1",
"created": "Fri, 6 Aug 2021 22:19:09 GMT"
}
] | 1,628,553,600,000 | [
[
"Liu",
"Iou-Jen",
""
],
[
"Ren",
"Zhongzheng",
""
],
[
"Yeh",
"Raymond A.",
""
],
[
"Schwing",
"Alexander G.",
""
]
] |
2108.03360 | Mingyi Liu | Mingyi Liu and Zhiying Tu and Xiaofei Xu and Zhongjie Wang | DySR: A Dynamic Representation Learning and Aligning based Model for
Service Bundle Recommendation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing number and diversity of services are available, which result in
significant challenges for effective service reuse during requirement
satisfaction. Many service bundle recommendation studies have been conducted and
have achieved remarkable results. However, there is still plenty of room for
improvement in the performance of these methods. The fundamental problem with
these studies is that they ignore the evolution of services over time and the
representation gap between services and requirements. In this paper, we propose
a dynamic representation learning and aligning based model called DySR to
tackle these issues. DySR eliminates the representation gap between services
and requirements by learning a transformation function and obtains service
representations in an evolving social environment through dynamic graph
representation learning. Extensive experiments conducted on a real-world
dataset from ProgrammableWeb show that DySR outperforms existing
state-of-the-art methods in commonly used evaluation metrics, improving $F1@5$
from $36.1\%$ to $69.3\%$.
| [
{
"version": "v1",
"created": "Sat, 7 Aug 2021 03:49:08 GMT"
}
] | 1,628,553,600,000 | [
[
"Liu",
"Mingyi",
""
],
[
"Tu",
"Zhiying",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Wang",
"Zhongjie",
""
]
] |
2108.03414 | Leonardo Tanzi | Leonardo Tanzi and Andrea Audisio and Giansalvo Cirrincione and
Alessandro Aprato and Enrico Vezzetti | Vision Transformer for femur fracture classification | Under consideration at Artificial Intelligence in Medicine | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, the scientific community has focused on the development of
CAD tools that could improve bone fractures' classification, mostly based on
Convolutional Neural Network (CNN). However, the discerning accuracy of
fractures' subtypes was far from optimal. This paper proposes a modified
version of a very recent and powerful deep learning technique, the Vision
Transformer (ViT), outperforming CNN-based approaches and consequently
increasing specialists' diagnosis accuracy. 4207 manually annotated images were
used and distributed, following the AO/OTA classification, into different
fracture types, constituting the largest labeled dataset of proximal femur fractures used in
the literature. The ViT architecture was used and compared with a classic CNN and a
multistage architecture composed of successive CNNs in cascade. To demonstrate
the reliability of this approach, 1) the attention maps were used to visualize
the most relevant areas of the images, 2) the performance of a generic CNN and
ViT was compared through unsupervised learning techniques, and 3) 11
specialists were asked to evaluate and classify 150 proximal femur fractures'
images with and without the help of the ViT, then results were compared for
potential improvement. The ViT was able to correctly predict 83% of the test
images. Precision, recall and F1-score were 0.77 (CI 0.64-0.90), 0.76 (CI
0.62-0.91) and 0.77 (CI 0.64-0.89), respectively. The average specialists'
diagnostic improvement was 29% when supported by ViT's predictions,
outperforming the algorithm alone. This paper showed the potential of Vision
Transformers in bone fracture classification. For the first time, good results
were obtained in sub-fractures classification, with the largest and richest
dataset ever. Accordingly, the assisted diagnosis yielded the best results,
proving once again the effectiveness of a coordinated work between neural
networks and specialists.
| [
{
"version": "v1",
"created": "Sat, 7 Aug 2021 10:12:42 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Oct 2021 11:28:28 GMT"
}
] | 1,635,292,800,000 | [
[
"Tanzi",
"Leonardo",
""
],
[
"Audisio",
"Andrea",
""
],
[
"Cirrincione",
"Giansalvo",
""
],
[
"Aprato",
"Alessandro",
""
],
[
"Vezzetti",
"Enrico",
""
]
] |
2108.03452 | Ruo-Ze Liu | Ruo-Ze Liu | Rethinking of AlphaStar | 23 pages, 18 figures, 16 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a different view for AlphaStar (AS), the program achieving
Grand-Master level in the game StarCraft II. It is considered a major advance for
AI research. However, in this paper, we present problems with AS, some of
which are defects of the system itself, and some of which are important details that were
neglected in its article. These problems raise two questions. One is what
we can learn from the building of AS. The other is whether the battles between it
and humans were fair. After the discussion, we present future research
directions for these problems. Our study is based on a reproduction of the
AS code, which is available online.
| [
{
"version": "v1",
"created": "Sat, 7 Aug 2021 13:55:46 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Aug 2021 14:36:28 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Sep 2021 02:41:28 GMT"
}
] | 1,630,886,400,000 | [
[
"Liu",
"Ruo-Ze",
""
]
] |
2108.03599 | Maxim Mozgovoy | Kaori Yuda, Shota Kamei, Riku Tanji, Ryoya Ito, Ippo Wakana and Maxim
Mozgovoy | Identification of Play Styles in Universal Fighting Engine | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI-controlled characters in fighting games are expected to possess reasonably
high skills and behave in a believable, human-like manner, exhibiting a
diversity of play styles and strategies. Thus, the development of fighting game
AI requires the ability to evaluate these properties. For instance, it should
be possible to ensure that the characters created are believable and diverse.
In this paper, we show how an automated procedure can be used to compare play
styles of individual AI- and human-controlled characters, and to assess
human-likeness and diversity of game participants.
| [
{
"version": "v1",
"created": "Sun, 8 Aug 2021 10:06:16 GMT"
}
] | 1,628,553,600,000 | [
[
"Yuda",
"Kaori",
""
],
[
"Kamei",
"Shota",
""
],
[
"Tanji",
"Riku",
""
],
[
"Ito",
"Ryoya",
""
],
[
"Wakana",
"Ippo",
""
],
[
"Mozgovoy",
"Maxim",
""
]
] |
2108.03760 | Pooja Pandit Nayak | Anand M. Shukla, Pooja D. Pandit, Vasudev M. Purandare and Anuradha
Srinivasaraghavan | Symptom based Hierarchical Classification of Diabetes and Thyroid
disorders using Fuzzy Cognitive Maps | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fuzzy Cognitive Maps (FCMs) are a soft computing technique that follows an
approach similar to human reasoning and the human decision-making process, making
them a valuable modeling and simulation methodology. Medical Decision Systems
are complex systems consisting of many factors that may be complementary,
contradictory, and competitive; these factors influence each other and
determine the overall diagnosis to different degrees. Thus, FCMs are
suitable to model Medical Decision Support Systems. The proposed work therefore
uses FCMs arranged in a hierarchical structure to classify between Diabetes,
Thyroid disorders and their subtypes. Subtypes include type 1 and type 2 for
diabetes and hyperthyroidism and hypothyroidism for thyroid.
| [
{
"version": "v1",
"created": "Sun, 8 Aug 2021 23:44:01 GMT"
}
] | 1,628,553,600,000 | [
[
"Shukla",
"Anand M.",
""
],
[
"Pandit",
"Pooja D.",
""
],
[
"Purandare",
"Vasudev M.",
""
],
[
"Srinivasaraghavan",
"Anuradha",
""
]
] |
2108.03793 | Deokgun Park | Deokgun Park | Toward Human-Level Artificial Intelligence | arXiv admin note: substantial text overlap with arXiv:2011.09410 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present our research on programming human-level artificial
intelligence (HLAI), including 1) a definition of HLAI, 2) an environment to
develop and test HLAI, and 3) a cognitive architecture for HLAI. The term AI is
used in a broad sense, and HLAI is not clearly defined. I claim that the
essence of Human-Level Intelligence is the capability to learn from others'
experiences via language. The key is that the event described by language has
the same effect as if the agent experiences it firsthand for the update of the
behavior policy. To develop and test models with such a capability, we are
developing a simulated environment called SEDRo. There is a 3D Home, and a
mother character takes care of the baby (the learning agent) and teaches
languages. The environment provides comparable experiences to that of a human
baby from birth to one year. Finally, I propose a cognitive architecture of
HLAI called Modulated Heterarchical Prediction Memory (mHPM). In mHPM, there
are three components: a universal module that learns to predict the next vector
given the sequence of vector signals, a heterarchical network of those modules,
and a reward-based modulation of learning. mHPM models the workings of the
neocortex, but the innate auxiliary units such as the hippocampus, reward system,
instincts, and amygdala play critical roles, too.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 03:39:39 GMT"
}
] | 1,671,148,800,000 | [
[
"Park",
"Deokgun",
""
]
] |
2108.03890 | Charalambos Chrysostomou | Charalambos Chrysostomou, Loizos Koutsantonis, Christos Lemesios,
Costas N. Papanicolas | SPECT Angle Interpolation Based on Deep Learning Methodologies | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A novel method for SPECT angle interpolation based on deep learning
methodologies is presented. Projection data from software phantoms were used to
train the proposed model. For evaluation of the efficacy of the method,
phantoms based on Shepp-Logan, with various noise levels added, were used, and
the resulting interpolated sinograms are reconstructed using Ordered Subset
Expectation Maximization (OSEM) and compared to the reconstructions of the
original sinograms. The proposed method can quadruple the projections, and
denoise the original sinogram, in the same process. As the results show, the
proposed model significantly improves the reconstruction accuracy. Finally, to
demonstrate the efficacy and capability of the proposed method results from
real-world DAT-SPECT sinograms are presented.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 09:19:51 GMT"
}
] | 1,628,553,600,000 | [
[
"Chrysostomou",
"Charalambos",
""
],
[
"Koutsantonis",
"Loizos",
""
],
[
"Lemesios",
"Christos",
""
],
[
"Papanicolas",
"Costas N.",
""
]
] |
2108.03897 | Charalambos Chrysostomou | Charalambos Chrysostomou, Loizos Koutsantonis, Christos Lemesios and
Costas N. Papanicolas | Deep Convolutional Neural Network for Low Projection SPECT Imaging
Reconstruction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a novel method for tomographic image reconstruction
in SPECT imaging with a low number of projections. Deep convolutional neural
networks (CNN) are employed in the new reconstruction method. Projection data
from software phantoms were used to train the CNN network. For evaluation of
the efficacy of the proposed method, software phantoms and hardware phantoms
based on the FOV SPECT system were used. The resulting tomographic images are
compared to those produced by the "Maximum Likelihood Expectation Maximisation"
(MLEM).
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 09:30:45 GMT"
}
] | 1,628,553,600,000 | [
[
"Chrysostomou",
"Charalambos",
""
],
[
"Koutsantonis",
"Loizos",
""
],
[
"Lemesios",
"Christos",
""
],
[
"Papanicolas",
"Costas N.",
""
]
] |
2108.03899 | Filippo Bistaffa | Filippo Bistaffa | Faster Exact MPE and Constrained Optimization with Deterministic Finite
State Automata | Published in the Proceedings of the 2023 International Joint
Conference on Artificial Intelligence (IJCAI) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a concise function representation based on deterministic finite
state automata for exact most probable explanation and constrained optimization
tasks in graphical models. We then exploit our concise representation within
Bucket Elimination (BE). We denote our version of BE as FABE. FABE
significantly improves the performance of BE in terms of runtime and memory
requirements by minimizing redundancy. Results on most probable explanation and
weighted constraint satisfaction benchmarks show that FABE often outperforms
the state of the art, leading to significant runtime improvements (up to 5
orders of magnitude in our tests).
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 09:31:46 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Apr 2022 10:00:29 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2023 21:44:32 GMT"
}
] | 1,683,763,200,000 | [
[
"Bistaffa",
"Filippo",
""
]
] |
2108.03900 | Jiexia Ye | Jiexia Ye, Juanjuan Zhao, Furong Zheng, Chengzhong Xu | Completion and Augmentation based Spatiotemporal Deep Learning Approach
for Short-Term Metro Origin-Destination Matrix Prediction under Limited
Observable Data | 16 pages, 13 figures, 6 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Short-term OD flow (i.e. the number of passengers traveling between stations)
prediction is crucial to traffic management in metro systems. Due to the delay
in collecting the latest complete OD flows and the complex, high-dimensional
spatiotemporal correlations of OD flows, it is more challenging than other
time-series traffic prediction tasks. Existing methods need to be improved
due to not fully utilizing the real-time passenger mobility data and not
sufficiently modeling the implicit correlation of the mobility patterns between
stations. In this paper, we propose a Completion based Adaptive Heterogeneous
Graph Convolution Spatiotemporal Predictor. The novelty is mainly reflected in
two aspects. The first is to model real-time mobility evolution by establishing
the implicit correlation between observed OD flows and the prediction target OD
flows in high dimension based on a key data-driven insight: the destination
distributions of the passengers departing from a station are correlated with
other stations sharing similar attributes (e.g. geographical location, region
function). The second is to complete the latest incomplete OD flows by
estimating the destination distribution of unfinished trips through considering
the real-time mobility evolution and the time cost between stations, which is
the basis of time-series prediction and can improve the model's dynamic
adaptability. Extensive experiments on two real world metro datasets
demonstrate the superiority of our model over other competitors with the
biggest model performance improvement being nearly 4\%. In addition, the data
complete framework we propose can be integrated into other models to improve
their performance up to 2.1\%.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 09:32:42 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Aug 2021 03:15:34 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Oct 2021 01:51:43 GMT"
},
{
"version": "v4",
"created": "Fri, 12 Nov 2021 08:32:25 GMT"
},
{
"version": "v5",
"created": "Tue, 15 Feb 2022 09:46:13 GMT"
},
{
"version": "v6",
"created": "Fri, 18 Feb 2022 02:34:36 GMT"
},
{
"version": "v7",
"created": "Mon, 28 Mar 2022 08:02:26 GMT"
},
{
"version": "v8",
"created": "Tue, 18 Oct 2022 06:22:05 GMT"
}
] | 1,666,137,600,000 | [
[
"Ye",
"Jiexia",
""
],
[
"Zhao",
"Juanjuan",
""
],
[
"Zheng",
"Furong",
""
],
[
"Xu",
"Chengzhong",
""
]
] |
2108.03903 | Charalambos Chrysostomou | Charalambos Chrysostomou | Sinogram Denoise Based on Generative Adversarial Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A novel method for sinogram denoise based on Generative Adversarial Networks
(GANs) in the field of SPECT imaging is presented. Projection data from
software phantoms were used to train the proposed model. For evaluation of the
efficacy of the method, a Shepp-Logan-based phantom with various noise levels
added was used. The resulting denoised sinograms are reconstructed using
Ordered Subset Expectation Maximization (OSEM) and compared to the
reconstructions of the original noised sinograms. As the results show, the
proposed method significantly denoises the sinograms and improves
the reconstructions. Finally, to demonstrate the efficacy and capability of the
proposed method results from real-world DAT-SPECT sinograms are presented.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 09:37:51 GMT"
}
] | 1,628,553,600,000 | [
[
"Chrysostomou",
"Charalambos",
""
]
] |
2108.03988 | Zhuoran Xu | Zhuoran Xu and Hao Liu | Knowledge accumulating: The general pattern of learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence has been developed for decades with the achievement
of great progress. Recently, deep learning shows its ability to solve many real
world problems, e.g. image classification and detection, natural language
processing, playing GO. Theoretically speaking, an artificial neural network
can fit any function and reinforcement learning can learn from any delayed
reward. But in solving real world tasks, we still need to spend a lot of effort
to adjust algorithms to fit task-specific features. This paper proposes that the
reason for this phenomenon is the sparse feedback feature of nature, and that a
single algorithm, no matter how we improve it, can only solve dense feedback
tasks or specific sparse feedback tasks. This paper first analyses how sparse
feedback affects algorithm performance, and then proposes a pattern that
explains how to accumulate knowledge to solve sparse feedback problems.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 12:41:28 GMT"
}
] | 1,628,553,600,000 | [
[
"Xu",
"Zhuoran",
""
],
[
"Liu",
"Hao",
""
]
] |
2108.03989 | Fei Xiong | Yu Li, Fei Xiong, Ziyi Wang, Zulong Chen, Chuanfei Xu, Yuyu Yin, Li
Zhou | Spatial-Temporal Deep Intention Destination Networks for Online Travel
Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, artificial neural networks are widely used for users' online travel
planning. Personalized travel planning has many real applications and is
affected by various factors, such as transportation type, intention destination
estimation, budget limit and crowdness prediction. Among those factors, users'
intention destination prediction is an essential task in online travel
platforms. The reason is that, the user may be interested in the travel plan
only when the plan matches his real intention destination. Therefore, in this
paper, we focus on predicting users' intention destinations in online travel
platforms. In detail, we act as online travel platforms (such as Fliggy and
Airbnb) to recommend travel plans for users, and the plan consists of various
vacation items including hotel package, scenic packages and so on. Predicting
the actual intention destination in travel planning is challenging. Firstly,
users' intention destination is highly related to their travel status (e.g.,
planning for a trip or finishing a trip). Secondly, users' actions (e.g.
clicking, searching) over different product types (e.g. train tickets, visa
application) have different indications in destination prediction. Thirdly,
users may mostly visit the travel platforms just before public holidays, and
thus user behaviors in online travel platforms are more sparse, low-frequency
and long-period. Therefore, we propose a Deep Multi-Sequences fused neural
Networks (DMSN) to predict intention destinations from fused multi-behavior
sequences. Real datasets are used to evaluate the performance of our proposed
DMSN models. Experimental results indicate that the proposed DMSN models can
achieve high intention destination prediction accuracy.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 12:41:57 GMT"
}
] | 1,628,553,600,000 | [
[
"Li",
"Yu",
""
],
[
"Xiong",
"Fei",
""
],
[
"Wang",
"Ziyi",
""
],
[
"Chen",
"Zulong",
""
],
[
"Xu",
"Chuanfei",
""
],
[
"Yin",
"Yuyu",
""
],
[
"Zhou",
"Li",
""
]
] |
2108.04001 | G C Nandi | Shekhar Gupta, Gaurav Kumar Yadav, G. C. Nandi | Development of Human Motion Prediction Strategy using Inception Residual
Block | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human Motion Prediction is a crucial task in computer vision and robotics. It
has versatile application potentials such as in the area of human-robot
interactions, human action tracking for airport security systems, autonomous
car navigation, computer gaming to name a few. However, predicting human motion
based on past actions is an extremely challenging task due to the difficulties
in detecting spatial and temporal features correctly. To detect temporal
features in human poses, we propose an Inception Residual Block (IRB), due to
its inherent capability of processing multiple kernels to capture salient
features. Here, we propose to use multiple 1-D Convolution Neural Network (CNN)
with different kernel sizes and input sequence lengths and concatenate them to
get a proper embedding. As kernels stride over different receptive fields, they
detect smaller and bigger salient features at multiple temporal scales. Our
main contribution is to propose a residual connection between input and the
output of the inception block to have a continuity between the previously
observed pose and the next predicted pose. With this proposed architecture, it
learns prior knowledge much better about human poses and we achieve much higher
prediction accuracy as detailed in the paper. Subsequently, we further propose
to feed the output of the inception residual block as an input to the Graph
Convolution Neural Network (GCN) due to its better spatial feature learning
capability. We perform a parametric analysis for better designing of our model
and subsequently, we evaluate our approach on the Human 3.6M dataset and
compare our short-term as well as long-term predictions with the state of the
art papers, where our model outperforms most of the pose results, the detailed
reasons of which have been elaborated in the paper.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 12:49:48 GMT"
}
] | 1,628,553,600,000 | [
[
"Gupta",
"Shekhar",
""
],
[
"Yadav",
"Gaurav Kumar",
""
],
[
"Nandi",
"G. C.",
""
]
] |
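A minimal PyTorch sketch of the kind of inception residual block described in the preceding record (arXiv 2108.04001): parallel 1-D convolutions with different kernel sizes are concatenated, projected, and added back to the input through a residual connection. Channel counts, kernel sizes, and the toy input are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class InceptionResidualBlock1D(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes plus a residual path."""
    def __init__(self, channels=64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        branch_out = channels // len(kernel_sizes)
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, branch_out, k, padding=k // 2)   # odd k preserves length
            for k in kernel_sizes
        ])
        # 1x1 convolution maps the concatenated branches back to `channels`
        self.project = nn.Conv1d(branch_out * len(kernel_sizes), channels, 1)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, time)
        y = torch.cat([b(x) for b in self.branches], dim=1)
        y = self.project(y)
        return self.act(x + y)                 # residual connection

# toy usage: a batch of 8 pose sequences with 64 features over 25 time steps
block = InceptionResidualBlock1D()
out = block(torch.randn(8, 64, 25))
print(out.shape)  # torch.Size([8, 64, 25])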
2108.04020 | Kylian Van Dessel | Kylian Van Dessel, Jo Devriendt, and Joost Vennekens | FOLASP: FO(.) as Input Language for Answer Set Solvers | Paper presented at the 37th International Conference on Logic
Programming (ICLP 2021), 15 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Over the past decades, Answer Set Programming (ASP) has emerged as an
important paradigm for declarative problem solving. Technological progress in
this area has been stimulated by the use of common standards, such as the
ASP-Core-2 language. While ASP has its roots in non-monotonic reasoning,
efforts have also been made to reconcile ASP with classical first-order logic
(FO). This has resulted in the development of FO(.), an expressive extension of
FO, which allows ASP-like problem solving in a purely classical setting. This
language may be more accessible to domain experts already familiar with FO, and
may be easier to combine with other formalisms that are based on classical
logic. It is supported by the IDP inference system, which has successfully
competed in a number of ASP competitions. Here, however, technological progress
has been hampered by the limited number of systems that are available for
FO(.). In this paper, we aim to address this gap by means of a translation tool
that transforms an FO(.) specification into ASP-Core-2, thereby allowing
ASP-Core-2 solvers to be used as solvers for FO(.) as well. We present
experimental results to show that the resulting combination of our translation
with an off-the-shelf ASP solver is competitive with the IDP system as a way of
solving problems formulated in FO(.).
Under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 13:20:26 GMT"
}
] | 1,628,553,600,000 | [
[
"Van Dessel",
"Kylian",
""
],
[
"Devriendt",
"Jo",
""
],
[
"Vennekens",
"Joost",
""
]
] |
2108.04115 | Shervin Halat | Shervin Halat, Mohammad Mehdi Ebadzadeh | Modified Double DQN: addressing stability | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Inspired by double q learning algorithm, the double DQN algorithm was
originally proposed in order to address the overestimation issue in the
original DQN algorithm. The double DQN has successfully shown both
theoretically and empirically the importance of decoupling in terms of action
evaluation and selection in computation of targets values; although, all the
benefits were acquired with only a simple adaptation to the DQN algorithm, the minimal
possible change, as mentioned by the authors. Nevertheless, there seems to be a
roll-back in the proposed Double-DQN algorithm, since the parameters of the
policy network re-emerge in the target value function, parameters that were
initially withdrawn by DQN with the hope of tackling the serious issue of
moving targets and the instability they cause in
the process of learning. Therefore, in this paper three modifications to the
Double-DQN algorithm are proposed with the hope of maintaining the performance
in the terms of both stability and overestimation. These modifications are
focused on the logic of decoupling the best action selection and evaluation in
the target value function and the logic of tackling the moving targets issue.
Each of these modifications has its own pros and cons compared to the
others. The mentioned pros and cons mainly refer to the execution time required
for the corresponding algorithm and the stability provided by the corresponding
algorithm. Also, in terms of overestimation, none of the modifications seem to
underperform compared to the original Double-DQN if not outperform it. With the
intention of evaluating the efficacy of the proposed modifications, multiple
empirical experiments along with theoretical experiments were conducted. The
results obtained are represented and discussed in this article.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 15:27:22 GMT"
}
] | 1,628,553,600,000 | [
[
"Halat",
"Shervin",
""
],
[
"Ebadzadeh",
"Mohammad Mehdi",
""
]
] |
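To make the decoupling discussed in the preceding record (arXiv 2108.04115) concrete, here is a minimal PyTorch sketch of the standard DQN and Double-DQN target computations. The discount factor and the toy networks are assumptions for illustration; the paper's three proposed modifications are not reproduced here.

import torch
import torch.nn as nn

gamma = 0.99  # assumed discount factor

def dqn_target(reward, next_state, done, target_net):
    # max over actions, selected and evaluated by the (frozen) target network only
    q_next = target_net(next_state).max(dim=1).values
    return reward + gamma * (1.0 - done) * q_next

def double_dqn_target(reward, next_state, done, policy_net, target_net):
    # action selection by the policy network, evaluation by the target network
    best_actions = policy_net(next_state).argmax(dim=1, keepdim=True)
    q_next = target_net(next_state).gather(1, best_actions).squeeze(1)
    return reward + gamma * (1.0 - done) * q_next

# toy usage with throwaway linear "networks" (4-dim states, 2 actions);
# in training these targets would be computed under torch.no_grad()
policy_net = nn.Linear(4, 2)
target_net = nn.Linear(4, 2)
batch = 5
reward = torch.zeros(batch)
done = torch.zeros(batch)
next_state = torch.randn(batch, 4)
print(double_dqn_target(reward, next_state, done, policy_net, target_net))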
2108.04194 | George Baryannis | Mario Alviano, Sotiris Batsakis, George Baryannis | Modal Logic S5 Satisfiability in Answer Set Programming | Paper presented at the 37th International Conference on Logic
Programming (ICLP 2021), September 2021, 16 pages, 3 figures | Theory and Practice of Logic Programming 21 (2021) 527-542 | 10.1017/S1471068421000247 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modal logic S5 has attracted significant attention and has led to several
practical applications, owing to its simplified approach to dealing with
nesting modal operators. Efficient implementations for evaluating
satisfiability of S5 formulas commonly rely on Skolemisation to convert them
into propositional logic formulas, essentially by introducing copies of
propositional atoms for each set of interpretations (possible worlds). This
approach is simple, but often results in large formulas that are too
difficult to process, and therefore more parsimonious constructions are
required. In this work, we propose to use Answer Set Programming for
implementing such constructions, and in particular for identifying the
propositional atoms that are relevant in every world by means of a reachability
relation. The proposed encodings are designed to take advantage of other
properties such as entailment relations of subformulas rooted by modal
operators. An empirical assessment of the proposed encodings shows that the
reachability relation is very effective and leads to comparable performance to
a state-of-the-art S5 solver based on SAT, while entailment relations are
possibly too expensive to reason about and may result in overhead. This paper
is under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 17:35:31 GMT"
}
] | 1,687,392,000,000 | [
[
"Alviano",
"Mario",
""
],
[
"Batsakis",
"Sotiris",
""
],
[
"Baryannis",
"George",
""
]
] |
2108.04371 | Vinod Muthusamy | Sohini Upadhyay, Vatche Isahagian, Vinod Muthusamy, Yara Rizk | Extending LIME for Business Process Automation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI business process applications automate high-stakes business decisions
where there is an increasing demand to justify or explain the rationale behind
algorithmic decisions. Business process applications have ordering or
constraints on tasks and feature values that cause lightweight, model-agnostic,
existing explanation methods like LIME to fail. In response, we propose a local
explanation framework extending LIME for explaining AI business process
applications. Empirical evaluation of our extension underscores the advantage
of our approach in the business process setting.
| [
{
"version": "v1",
"created": "Mon, 9 Aug 2021 21:30:46 GMT"
}
] | 1,628,640,000,000 | [
[
"Upadhyay",
"Sohini",
""
],
[
"Isahagian",
"Vatche",
""
],
[
"Muthusamy",
"Vinod",
""
],
[
"Rizk",
"Yara",
""
]
] |
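A minimal scikit-learn sketch of the LIME-style local surrogate idea that the preceding record (arXiv 2108.04371) extends to business processes. The black-box model, the Gaussian perturbation scheme, and the kernel width are assumptions, and the constraint-aware handling of task orderings that the paper adds is not shown.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # stand-in for the AI business process model: probability of "approve"
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2])))

def local_explanation(instance, n_samples=500, kernel_width=0.75):
    """Fit a distance-weighted linear surrogate around one instance (LIME-style)."""
    X = instance + rng.normal(scale=0.3, size=(n_samples, instance.size))
    y = black_box(X)
    distances = np.linalg.norm(X - instance, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X, y, sample_weight=weights)
    return surrogate.coef_            # local feature attributions

instance = np.array([0.2, 0.8, 0.5])
print(local_explanation(instance))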
2108.04541 | Xiaoshu Xiang | Shangshang Yang, Ye Tian, Xiaoshu Xiang, Shichen Peng, and Xingyi
Zhang | Accelerating Evolutionary Neural Architecture Search via Multi-Fidelity
Evaluation | 15 pages, 11 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Evolutionary neural architecture search (ENAS) has recently received
increasing attention by effectively finding high-quality neural architectures,
which, however, incurs a high computational cost by training the architecture
encoded by each individual for complete epochs in individual evaluation.
Numerous ENAS approaches have been developed to reduce the evaluation cost, but
it is often difficult for most of these approaches to achieve high evaluation
accuracy. To address this issue, in this paper we propose an accelerated ENAS
via multi-fidelity evaluation, termed MFENAS, where the individual evaluation
cost is significantly reduced by training the architecture encoded by each
individual for only a small number of epochs. The balance between evaluation
cost and evaluation accuracy is well maintained by suggesting a multi-fidelity
evaluation, which identifies the potentially good individuals that cannot
survive from previous generations by integrating multiple evaluations under
different numbers of training epochs. For high diversity of neural
architectures, a population initialization strategy is devised to produce
different neural architectures varying from ResNet-like architectures to
Inception-like ones. Experimental results on CIFAR-10 show that the
architecture obtained by the proposed MFENAS achieves a 2.39% test error rate
at the cost of only 0.6 GPU days on one NVIDIA 2080TI GPU, demonstrating the
superiority of the proposed MFENAS over state-of-the-art NAS approaches in
terms of both computational cost and architecture quality. The architecture
obtained by the proposed MFENAS is then transferred to CIFAR-100 and ImageNet,
which also exhibits competitive performance to the architectures obtained by
existing NAS approaches. The source code of the proposed MFENAS is available at
https://github.com/DevilYangS/MFENAS/.
| [
{
"version": "v1",
"created": "Tue, 10 Aug 2021 09:32:26 GMT"
}
] | 1,628,640,000,000 | [
[
"Yang",
"Shangshang",
""
],
[
"Tian",
"Ye",
""
],
[
"Xiang",
"Xiaoshu",
""
],
[
"Peng",
"Shichen",
""
],
[
"Zhang",
"Xingyi",
""
]
] |
2108.04555 | Changhyun Park | Changhyun Park and Heung-Il Suk | Deep Joint Learning of Pathological Region Localization and Alzheimer's
Disease Diagnosis | 31 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of Alzheimer's disease (AD) and its early stages using
structural magnetic resonance imaging (MRI) has been attracting the attention
of researchers. Various data-driven approaches have been introduced to capture
subtle and local morphological changes of the brain accompanied by the disease
progression. One of the typical approaches for capturing subtle changes is
patch-level feature representation. However, the predetermined regions to
extract patches can limit classification performance by interrupting the
exploration of potential biomarkers. In addition, the existing patch-level
analyses have difficulty explaining their decision-making. To address these
problems, we propose the BrainBagNet with a position-based gate
(PG-BrainBagNet), a framework for jointly learning pathological region
localization and AD diagnosis in an end-to-end manner. In advance, as all scans
are aligned to a template in image processing, the position of brain images can
be represented through the 3D Cartesian space shared by the overall MRI scans.
The proposed method represents the patch-level response from whole-brain MRI
scans and discriminative brain-region from position information. Based on the
outcomes, the patch-level class evidence is calculated, and then the
image-level prediction is inferred by a transparent aggregation. The proposed
models were evaluated on the ADNI datasets. In five-fold cross-validation, the
classification performance of the proposed method outperformed that of the
state-of-the-art methods in both AD diagnosis (AD vs. normal control) and mild
cognitive impairment (MCI) conversion prediction (progressive MCI vs. stable
MCI) tasks. In addition, changes in the identified discriminant regions and
patch-level class evidence according to the patch size used for model training
are presented and analyzed.
| [
{
"version": "v1",
"created": "Tue, 10 Aug 2021 10:06:54 GMT"
}
] | 1,628,640,000,000 | [
[
"Park",
"Changhyun",
""
],
[
"Suk",
"Heung-Il",
""
]
] |
2108.04751 | Jean-Claude Belfiore | Jean-Claude Belfiore, Daniel Bennequin and Xavier Giraud | Logical Information Cells I | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this study we explore the spontaneous apparition of visible intelligible
reasoning in simple artificial networks, and we connect this experimental
observation with a notion of semantic information. We start with the
reproduction of a DNN model of natural neurons in monkeys, studied by
Neromyliotis and Moschovakis in 2017 and 2018, to explain how "motor equivalent
neurons", coding only for the action of pointing, are supplemented by other
neurons for specifying the actor of the action, the eye E, the hand H, or the
eye and the hand together EH. Inner neurons appear that perform logical
work, making intermediary propositions, for instance E V EH. Then, we remarked
that, by adding a second hidden layer and choosing a symmetric metric for learning,
the activities of the neurons become almost quantized and more informative.
Using the work of Carnap and Bar-Hillel 1952, we define a measure of the
logical value for collections of such cells. The logical score grows with the
depth of the layer, i.e. the information on the output decision increases,
which confirms a kind of bottleneck principle. Then we study a bit more complex
tasks, a priori involving predicate logic. We compare the logic and the
measured weights. This shows, for groups of neurons, a neat correlation between
the logical score and the size of the weights. It exhibits a form of sparsity
between the layers. The most spectacular result concerns the triples which can
conclude for all conditions: when applying their weight matrices to their
logical matrix, we recover the classification. This shows that weights
precisely perform the proofs.
| [
{
"version": "v1",
"created": "Tue, 10 Aug 2021 15:31:26 GMT"
}
] | 1,628,640,000,000 | [
[
"Belfiore",
"Jean-Claude",
""
],
[
"Bennequin",
"Daniel",
""
],
[
"Giraud",
"Xavier",
""
]
] |
2108.04760 | Dmitry Maximov | Dmitry Maximov | Multi-Valued Cognitive Maps: Calculations with Linguistic Variables
without Using Numbers | The article have been submitted to Fuzzy Sets & Systems on 11 march
2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A concept of multi-valued cognitive maps is introduced in this paper. The
concept expands the fuzzy one. However, all variables and weights are not
linearly ordered in the concept, but are only partially-ordered. Such an
approach allows us to operate in cognitive maps with partially-ordered
linguistic variables directly, without vague fuzzification/defuzzification methods.
Hence, we may consider more subtle differences in degrees of experts'
uncertainty than in the fuzzy case. We prove the convergence of such cognitive maps
and give a simple computational example which demonstrates using such a
partially-ordered uncertainty degree scale.
| [
{
"version": "v1",
"created": "Tue, 10 Aug 2021 15:55:17 GMT"
}
] | 1,628,640,000,000 | [
[
"Maximov",
"Dmitry",
""
]
] |
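For orientation, a minimal numpy sketch of the classical fuzzy cognitive map update that the preceding record (arXiv 2108.04760) generalizes to partially-ordered values. The weight matrix, initial activations, and sigmoid squashing are illustrative assumptions and do not reflect the paper's lattice-based formulation, in which values need not be numbers at all.

import numpy as np

def fcm_step(state, weights, squash=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """One synchronous update of a (classical) fuzzy cognitive map."""
    return squash(weights.T @ state)

# assumed 3-concept map with hand-picked weights and initial activations
W = np.array([[0.0, 0.6, -0.4],
              [0.3, 0.0,  0.5],
              [0.0, 0.2,  0.0]])
state = np.array([0.8, 0.1, 0.3])

for step in range(50):
    new_state = fcm_step(state, W)
    if np.max(np.abs(new_state - state)) < 1e-6:   # simple convergence check
        break
    state = new_state
print(step, state)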
2108.04769 | Roland Kaminski | Roland Kaminski and Torsten Schaub | On the Foundations of Grounding in Answer Set Programming | unpublished draft | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We provide a comprehensive elaboration of the theoretical foundations of
variable instantiation, or grounding, in Answer Set Programming (ASP). Building
on the semantics of ASP's modeling language, we introduce a formal
characterization of grounding algorithms in terms of (fixed point) operators. A
major role is played by dedicated well-founded operators whose associated
models provide semantic guidance for delineating the result of grounding along
with on-the-fly simplifications. We address an expressive class of logic
programs that incorporates recursive aggregates and thus amounts to the scope
of existing ASP modeling languages. This is accompanied by a plain
algorithmic framework detailing the grounding of recursive aggregates. The
given algorithms correspond essentially to the ones used in the ASP grounder
gringo.
| [
{
"version": "v1",
"created": "Tue, 10 Aug 2021 16:23:49 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jan 2022 09:22:11 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Jul 2022 11:29:01 GMT"
}
] | 1,658,793,600,000 | [
[
"Kaminski",
"Roland",
""
],
[
"Schaub",
"Torsten",
""
]
] |
2108.05020 | Zhi-Wei Wang | Wen-ming Zhang, Zhi-wei Wang, Dan-dian Feng, Zhao Liu | Frequency-based tension assessment of an inclined cable with complex
boundary conditions using the PSO algorithm | to be published in Structural Engineering and Mechanics | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The frequency-based method is the most commonly used method for measuring
cable tension. However, the calculation formulas for the conventional
frequency-based method are generally based on the ideally hinged or fixed
boundary conditions without a comprehensive consideration of the inclination
angle, sag-extensibility, and flexural stiffness of cables, leading to a
significant error in cable tension identification. This study aimed to propose
a frequency-based method of cable tension identification considering the
complex boundary conditions at the two ends of cables using the particle swarm
optimization (PSO) algorithm. First, the refined stay cable model was
established considering the inclination angle, flexural stiffness, and
sag-extensibility, as well as the rotational constraint stiffness and lateral
support stiffness for the unknown boundaries of cables. The vibration mode
equation of the stay cable model was discretized and solved using the finite
difference method. Then, a multiparameter identification method based on the
PSO algorithm was proposed. This method was able to identify the tension,
flexural stiffness, axial stiffness, boundary rotational constraint stiffness,
and boundary lateral support stiffness according to the measured multiorder
frequencies in a synchronous manner. The feasibility and accuracy of this
method were validated through numerical cases. Finally, the proposed approach
was applied to the tension identification of the anchor span strands of a
suspension bridge (Jindong Bridge) in China. The results of cable tension
identification using the proposed method and the existing methods discussed in
previous studies were compared with the on-site pressure ring measurement
results. The comparison showed that the proposed approach had a high accuracy
in cable tension identification.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 04:07:27 GMT"
}
] | 1,628,726,400,000 | [
[
"Zhang",
"Wen-ming",
""
],
[
"Wang",
"Zhi-wei",
""
],
[
"Feng",
"Dan-dian",
""
],
[
"Liu",
"Zhao",
""
]
] |
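A minimal numpy sketch of the particle swarm optimization loop that the preceding record (arXiv 2108.05020) uses for multiparameter identification. The toy objective (matching synthetic "measured" frequencies), the parameter bounds, and the PSO hyperparameters are assumptions standing in for the paper's finite-difference cable model.

import numpy as np

rng = np.random.default_rng(0)

def model_frequencies(params):
    # stand-in for the cable vibration model: a smooth synthetic frequency map
    tension, stiffness = params
    return np.sqrt(tension) * np.array([1.0, 2.0, 3.0]) + 0.01 * stiffness

measured = model_frequencies(np.array([4.0e3, 50.0]))     # synthetic "measurements"

def cost(params):
    return np.sum((model_frequencies(params) - measured) ** 2)

# PSO over two parameters within assumed bounds
lo, hi = np.array([1.0e3, 0.0]), np.array([1.0e4, 200.0])
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("identified parameters:", gbest)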
2108.05123 | Zijian Zhang | Zijian Zhang, Chang Shu, Youxin Chen, Jing Xiao, Qian Zhang and Lu
Zheng | ICAF: Iterative Contrastive Alignment Framework for Multimodal
Abstractive Summarization | Accepted by WCCI-IJCNN 2022 as an oral paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrating multimodal knowledge for the abstractive summarization task is a
work-in-progress research area, with present techniques inheriting the
fusion-then-generation paradigm. Due to semantic gaps between computer vision
and natural language processing, current methods often treat multiple data
points as separate objects and rely on attention mechanisms to search for
connections in order to fuse them together. In addition, the missing awareness of
cross-modal matching from many frameworks leads to performance reduction. To
solve these two drawbacks, we propose an Iterative Contrastive Alignment
Framework (ICAF) that uses recurrent alignment and contrast to capture the
coherences between images and texts. Specifically, we design a recurrent
alignment (RA) layer to gradually investigate fine-grained semantical
relationships between image patches and text tokens. At each step during the
encoding process, cross-modal contrastive losses are applied to directly
optimize the embedding space. According to ROUGE, relevance scores, and human
evaluation, our model outperforms the state-of-the-art baselines on MSMO
dataset. Experiments on the applicability of our proposed framework and
hyperparameters settings have been also conducted.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 09:59:34 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 03:14:21 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Aug 2022 11:02:16 GMT"
}
] | 1,660,003,200,000 | [
[
"Zhang",
"Zijian",
""
],
[
"Shu",
"Chang",
""
],
[
"Chen",
"Youxin",
""
],
[
"Xiao",
"Jing",
""
],
[
"Zhang",
"Qian",
""
],
[
"Zheng",
"Lu",
""
]
] |
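A minimal PyTorch sketch of the symmetric cross-modal contrastive loss of the kind applied at each encoding step in the preceding record (arXiv 2108.05123). The embedding dimension, batch size, and temperature are assumptions, and the recurrent alignment layer itself is not shown.

import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image/text pairs sit on the diagonal."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)          # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)      # text-to-image direction
    return 0.5 * (loss_i2t + loss_t2i)

# toy usage: a batch of 8 pooled image and text embeddings of size 256
loss = cross_modal_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())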
2108.05165 | Selin Eyupoglu | Selin Eyupoglu, Muge Fidan, Yavuz Gulesen, Ilayda Begum Izci, Berkan
Teber, Baturay Yilmaz, Ahmet Alkan, Esra Erdem | Stable Marriage Problems with Ties and Incomplete Preferences: An
Empirical Comparison of ASP, SAT, ILP, CP, and Local Search Methods | This paper is under consideration for acceptance in Theory and
Practice of Logic Programming (TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a variation of the Stable Marriage problem, where every man and
every woman express their preferences as preference lists which may be
incomplete and contain ties. This problem is called the Stable Marriage problem
with Ties and Incomplete preferences (SMTI). We consider three optimization
variants of SMTI, Max Cardinality, Sex-Equal and Egalitarian, and empirically
compare the following methods to solve them: Answer Set Programming, Constraint
Programming, Integer Linear Programming. For Max Cardinality, we compare these
methods with Local Search methods as well. We also empirically compare Answer
Set Programming with Propositional Satisfiability, for SMTI instances. This
paper is under consideration for acceptance in Theory and Practice of Logic
Programming (TPLP).
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 11:39:51 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Aug 2021 12:43:22 GMT"
}
] | 1,629,244,800,000 | [
[
"Eyupoglu",
"Selin",
""
],
[
"Fidan",
"Muge",
""
],
[
"Gulesen",
"Yavuz",
""
],
[
"Izci",
"Ilayda Begum",
""
],
[
"Teber",
"Berkan",
""
],
[
"Yilmaz",
"Baturay",
""
],
[
"Alkan",
"Ahmet",
""
],
[
"Erdem",
"Esra",
""
]
] |
2108.05266 | Jean-Marie Lagniez | Gilles Audemard and Steve Bellart and Louenas Bounia and Fr\'ed\'eric
Koriche and Jean-Marie Lagniez and Pierre Marquis | On the Explanatory Power of Decision Trees | 22 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Decision trees have long been recognized as models of choice in sensitive
applications where interpretability is of paramount importance. In this paper,
we examine the computational ability of Boolean decision trees in deriving,
minimizing, and counting sufficient reasons and contrastive explanations. We
prove that the set of all sufficient reasons of minimal size for an instance
given a decision tree can be exponentially larger than the size of the input
(the instance and the decision tree). Therefore, generating the full set of
sufficient reasons can be out of reach. In addition, computing a single
sufficient reason does not prove enough in general; indeed, two sufficient
reasons for the same instance may differ on many features. To deal with this
issue and generate synthetic views of the set of all sufficient reasons, we
introduce the notions of relevant features and of necessary features that
characterize the (possibly negated) features appearing in at least one or in
every sufficient reason, and we show that they can be computed in polynomial
time. We also introduce the notion of explanatory importance, that indicates
how frequent each (possibly negated) feature is in the set of all sufficient
reasons. We show how the explanatory importance of a feature and the number of
sufficient reasons can be obtained via a model counting operation, which turns
out to be practical in many cases. We also explain how to enumerate sufficient
reasons of minimal size. We finally show that, unlike sufficient reasons, the
set of all contrastive explanations for an instance given a decision tree can
be derived, minimized and counted in polynomial time.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 15:08:11 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Sep 2021 07:06:26 GMT"
}
] | 1,630,972,800,000 | [
[
"Audemard",
"Gilles",
""
],
[
"Bellart",
"Steve",
""
],
[
"Bounia",
"Louenas",
""
],
[
"Koriche",
"Frédéric",
""
],
[
"Lagniez",
"Jean-Marie",
""
],
[
"Marquis",
"Pierre",
""
]
] |
2108.05276 | Jean-Marie Lagniez | Gilles Audemard and Steve Bellart and Louenas Bounia and Fr\'ed\'eric
Koriche and Jean-Marie Lagniez and Pierre Marquis | Trading Complexity for Sparsity in Random Forest Explanations | 21 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Random forests have long been considered as powerful model ensembles in
machine learning. By training multiple decision trees, whose diversity is
fostered through data and feature subsampling, the resulting random forest can
lead to more stable and reliable predictions than a single decision tree. This
however comes at the cost of decreased interpretability: while decision trees
are often easily interpretable, the predictions made by random forests are much
more difficult to understand, as they involve a majority vote over hundreds of
decision trees. In this paper, we examine different types of reasons that
explain "why" an input instance is classified as positive or negative by a
Boolean random forest. Notably, as an alternative to sufficient reasons taking
the form of prime implicants of the random forest, we introduce majoritary
reasons which are prime implicants of a strict majority of decision trees. For
these different abductive explanations, the tractability of the generation
problem (finding one reason) and the minimization problem (finding one shortest
reason) are investigated. Experiments conducted on various datasets reveal the
existence of a trade-off between runtime complexity and sparsity. Sufficient
reasons - for which the identification problem is DP-complete - are slightly
larger than majoritary reasons that can be generated using a simple
linear-time greedy algorithm, and significantly larger than minimal majoritary reasons
that can be approached using an anytime Partial MaxSAT algorithm.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 15:19:46 GMT"
}
] | 1,628,726,400,000 | [
[
"Audemard",
"Gilles",
""
],
[
"Bellart",
"Steve",
""
],
[
"Bounia",
"Louenas",
""
],
[
"Koriche",
"Frédéric",
""
],
[
"Lagniez",
"Jean-Marie",
""
],
[
"Marquis",
"Pierre",
""
]
] |
2108.05410 | Filip Ilievski | Filip Ilievski and Pedro Szekely and Gleb Satyukov and Amandeep Singh | User-friendly Comparison of Similarity Algorithms on Wikidata | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While the similarity between two concept words has been evaluated and studied
for decades, much less attention has been devoted to algorithms that can
compute the similarity of nodes in very large knowledge graphs, like Wikidata.
To facilitate investigations and head-to-head comparisons of similarity
algorithms on Wikidata, we present a user-friendly interface that allows
flexible computation of similarity between Qnodes in Wikidata. At present, the
similarity interface supports four algorithms, based on: graph embeddings
(TransE, ComplEx), text embeddings (BERT), and class-based similarity. We
demonstrate the behavior of the algorithms on representative examples about
semantically similar, related, and entirely unrelated entity pairs. To support
anticipated applications that require efficient similarity computations, like
entity linking and recommendation, we also provide a REST API that can compute
most similar neighbors for any Qnode in Wikidata.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 18:59:25 GMT"
}
] | 1,628,812,800,000 | [
[
"Ilievski",
"Filip",
""
],
[
"Szekely",
"Pedro",
""
],
[
"Satyukov",
"Gleb",
""
],
[
"Singh",
"Amandeep",
""
]
] |
2108.05412 | Filip Ilievski | Zaina Shaik, Filip Ilievski, Fred Morstatter | Analyzing Race and Country of Citizenship Bias in Wikidata | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As an open and collaborative knowledge graph created by users and bots, it is
possible that the knowledge in Wikidata is biased in regards to multiple
factors such as gender, race, and country of citizenship. Previous work has
mostly studied the representativeness of Wikidata knowledge in terms of genders
of people. In this paper, we examine the race and citizenship bias in general
and in regards to STEM representation for scientists, software developers, and
engineers. By comparing Wikidata queries to real-world datasets, we identify
the differences in representation to characterize the biases present in
Wikidata. Through this analysis, we discovered that there is an
overrepresentation of white individuals and those with citizenship in Europe
and North America; the rest of the groups are generally underrepresented. Based
on these findings, we have found and linked to Wikidata additional data about
STEM scientists from the minorities. This data is ready to be inserted into
Wikidata with a bot. Increasing representation of minority race and country of
citizenship groups can create a more accurate portrayal of individuals in STEM.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 19:04:15 GMT"
}
] | 1,628,812,800,000 | [
[
"Shaik",
"Zaina",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Morstatter",
"Fred",
""
]
] |
2108.05428 | Michael Morak | Wolfgang Faber, Michael Morak, and Luk\'a\v{s} Chrpa | Determining Action Reversibility in STRIPS Using Answer Set and Epistemic
Logic Programming | Paper presented at the 37th International Conference on Logic
Programming (ICLP 2021), 16 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the context of planning and reasoning about actions and change, we call an
action reversible when its effects can be reverted by applying other actions,
returning to the original state. Renewed interest in this area has led to
several results in the context of the PDDL language, widely used for describing
planning tasks.
In this paper, we propose several solutions to the computational problem of
deciding the reversibility of an action. In particular, we leverage an existing
translation from PDDL to Answer Set Programming (ASP), and then use several
different encodings to tackle the problem of action reversibility for the
STRIPS fragment of PDDL. For these, we use ASP, as well as Epistemic Logic
Programming (ELP), an extension of ASP with epistemic operators, and compare
and contrast their strengths and weaknesses.
Under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 20:00:34 GMT"
}
] | 1,628,812,800,000 | [
[
"Faber",
"Wolfgang",
""
],
[
"Morak",
"Michael",
""
],
[
"Chrpa",
"Lukáš",
""
]
] |
2108.05436 | Abdelrahman Elsharawy | Abdelrahman Elsharawy | Friddy multiagent price stabilization model | 20 Pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In a multiagent network model consisting of nodes, each network node has an
agent and priced Friddy coins, and the agent can buy or sell Friddy coins in
the marketplace. Although every node may not effectively have an equal price
at transaction time, the prices have to reach equilibrium at a macro level by
iterating buy and sell transactions.
| [
{
"version": "v1",
"created": "Wed, 11 Aug 2021 20:33:42 GMT"
}
] | 1,628,812,800,000 | [
[
"Elsharawy",
"Abdelrahman",
""
]
] |
2108.05525 | Suzanna Sia | Ayush Dalmia and Suzanna Sia | Clustering with UMAP: Why and How Connectivity Matters | Published as a long paper at 2nd Graphs and more Complex structures
for Learning and Reasoning Workshop in AAAI 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topology based dimensionality reduction methods such as t-SNE and UMAP have
seen increasing success and popularity in high-dimensional data. These methods
have strong mathematical foundations and are based on the intuition that the
topology in low dimensions should be close to that of high dimensions. Given
that the initial topological structure is a precursor to the success of the
algorithm, this naturally raises the question: What makes a "good" topological
structure for dimensionality reduction? Insight into this will enable us to
design better algorithms which take into account both local and global
structure. In this paper which focuses on UMAP, we study the effects of node
connectivity (k-Nearest Neighbors vs mutual k-Nearest Neighbors) and relative
neighborhood (Adjacent via Path Neighbors) on dimensionality reduction. We
explore these concepts through extensive ablation studies on 4 standard image
and text datasets; MNIST, FMNIST, 20NG, AG, reducing to 2 and 64 dimensions.
Our findings indicate that a more refined notion of connectivity (mutual
k-Nearest Neighbors with minimum spanning tree) together with a flexible method
of constructing the local neighborhood (Path Neighbors), can achieve a much
better representation than default UMAP, as measured by downstream clustering
performance.
| [
{
"version": "v1",
"created": "Thu, 12 Aug 2021 04:25:03 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Dec 2021 17:59:33 GMT"
}
] | 1,639,699,200,000 | [
[
"Dalmia",
"Ayush",
""
],
[
"Sia",
"Suzanna",
""
]
] |
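A minimal scikit-learn/scipy sketch of the mutual k-nearest-neighbour connectivity studied in the preceding record (arXiv 2108.05525), with a Euclidean minimum spanning tree added so that the graph stays connected. The toy data, the choice of k, and the exact combination rule are illustrative assumptions rather than the authors' precise construction.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # toy data; replace with image/text features
k = 10

# directed kNN graph: entry (i, j) is 1 when j is among the k nearest neighbours of i
knn = kneighbors_graph(X, n_neighbors=k, mode="connectivity")

# mutual kNN: keep an edge only if it appears in both directions
mutual = knn.multiply(knn.T)

# Euclidean minimum spanning tree, added back to avoid disconnected components
dist = kneighbors_graph(X, n_neighbors=X.shape[0] - 1, mode="distance")
mst = minimum_spanning_tree(dist)
mst_edges = ((mst + mst.T) > 0).astype(float)

combined = ((mutual + mst_edges) > 0).astype(float)
print("edges:", int(combined.nnz))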
2108.05800 | Yong Ren | Jimmy Yin and Mac Ren | On Liquidity Mining for Uniswap v3 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recently proposed Uniswap v3 replaces the fungible liquidity provider
token (LP token) with non-fungible ones, making the design of liquidity mining
more difficult. In this paper, we propose a flexible liquidity mining scheme
that realizes the overall liquidity distribution through the fine control of
local rewards. From the liquidity provider's point of view, the liquidity
provision strategy forms a multiplayer zero-sum game. We analyze the Nash
equilibrium and the corresponding strategy (approximately, deploying
liquidity proportional to the reward distribution) in some special cases, and
use it to guide the general situations. Based on the strategic response above,
such a scheme allows the mining rewards provider to optimize the distribution
of liquidity for the purpose such as low slippage and price stabilization.
| [
{
"version": "v1",
"created": "Thu, 12 Aug 2021 15:29:12 GMT"
}
] | 1,628,812,800,000 | [
[
"Yin",
"Jimmy",
""
],
[
"Ren",
"Mac",
""
]
] |
2108.05809 | Ryan Watkins PhD | Farhana Faruqe, Ryan Watkins, Larry Medsker | Competency Model Approach to AI Literacy: Research-based Path from
Initial Framework to Model | Presented as part of AI4EDU at IJCAI2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recent developments in Artificial Intelligence (AI) technologies
challenge educators and educational institutions to respond with curriculum and
resources that prepare students of all ages with the foundational knowledge and
skills for success in the AI workplace. Research on AI Literacy could lead to
an effective and practical platform for developing these skills. We propose and
advocate for a pathway for developing AI Literacy as a pragmatic and useful
tool for AI education. Such a discipline requires moving beyond a conceptual
framework to a multi-level competency model with associated competency
assessments. This approach to an AI Literacy could guide future development of
instructional content as we prepare a range of groups (i.e., consumers,
co-workers, collaborators, and creators). We propose here a research matrix as
an initial step in the development of a roadmap for AI Literacy research, which
requires a systematic and coordinated effort with the support of publication
outlets and research funding, to expand the areas of competency and
assessments.
| [
{
"version": "v1",
"created": "Thu, 12 Aug 2021 15:42:32 GMT"
}
] | 1,628,812,800,000 | [
[
"Faruqe",
"Farhana",
""
],
[
"Watkins",
"Ryan",
""
],
[
"Medsker",
"Larry",
""
]
] |
2108.05872 | Willie McClinton | Willie McClinton, Andrew Levy, George Konidaris | HAC Explore: Accelerating Exploration with Hierarchical Reinforcement
Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse rewards and long time horizons remain challenging for reinforcement
learning algorithms. Exploration bonuses can help in sparse reward settings by
encouraging agents to explore the state space, while hierarchical approaches
can assist with long-horizon tasks by decomposing lengthy tasks into shorter
subtasks. We propose HAC Explore (HACx), a new method that combines these
approaches by integrating the exploration bonus method Random Network
Distillation (RND) into the hierarchical approach Hierarchical Actor-Critic
(HAC). HACx outperforms either component method on its own, as well as an
existing approach to combining hierarchy and exploration, in a set of difficult
simulated robotics tasks. HACx is the first RL method to solve a sparse reward,
continuous-control task that requires over 1,000 actions.
| [
{
"version": "v1",
"created": "Thu, 12 Aug 2021 17:42:12 GMT"
}
] | 1,628,812,800,000 | [
[
"McClinton",
"Willie",
""
],
[
"Levy",
"Andrew",
""
],
[
"Konidaris",
"George",
""
]
] |
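A minimal PyTorch sketch of the Random Network Distillation bonus that the preceding record (arXiv 2108.05872) integrates into Hierarchical Actor-Critic. Network sizes, observation dimensionality, and the training step are assumptions; the hierarchical component is not shown.

import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Exploration bonus = prediction error of a trained net against a fixed random target net."""
    def __init__(self, obs_dim=10, feat_dim=32):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        for p in self.target.parameters():      # the target network is never trained
            p.requires_grad_(False)

    def forward(self, obs):
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        # per-sample squared error: large for rarely visited observations
        return ((pred_feat - target_feat) ** 2).mean(dim=-1)

# toy usage: the bonus also serves as the predictor's training loss
rnd = RNDBonus()
opt = torch.optim.Adam(rnd.predictor.parameters(), lr=1e-4)
obs = torch.randn(16, 10)
bonus = rnd(obs)
loss = bonus.mean()
opt.zero_grad(); loss.backward(); opt.step()
print(bonus.detach())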
2108.05948 | Uche Osahor | Uche M. Osahor and Nasser M. Nasrabadi | Deep adversarial attack on target detection systems | Trying to improve the paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Target detection systems identify targets by localizing their coordinates on
the input image of interest. This is ideally achieved by labeling each pixel in
an image as a background or a potential target pixel. Deep Convolutional Neural
Network (DCNN) classifiers have proven to be successful tools for computer
vision applications. However,prior research confirms that even state of the art
classifier models are susceptible to adversarial attacks. In this paper, we
show how to generate adversarial infrared images by adding small perturbations
to the target regions to deceive a DCNN-based target detector at remarkable
levels. We demonstrate significant progress in developing visually
imperceptible adversarial infrared images where the targets are visually
recognizable by an expert but a DCNN-based target detector cannot detect the
targets in the image.
| [
{
"version": "v1",
"created": "Thu, 12 Aug 2021 20:00:55 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Aug 2021 04:17:59 GMT"
}
] | 1,630,368,000,000 | [
[
"Osahor",
"Uche M.",
""
],
[
"Nasrabadi",
"Nasser M.",
""
]
] |
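For background, a minimal PyTorch sketch of the fast gradient sign method, the basic perturbation mechanism underlying attacks such as the one in the preceding record (arXiv 2108.05948). The throwaway classifier, the epsilon, and the mask restricting the perturbation to a target region are illustrative assumptions, not the authors' attack.

import torch
import torch.nn as nn

def masked_fgsm(model, image, label, epsilon=0.02, mask=None):
    """One-step FGSM perturbation, optionally restricted to a region mask."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    perturbation = epsilon * image.grad.sign()
    if mask is not None:                      # keep the perturbation inside the target region
        perturbation = perturbation * mask
    return (image + perturbation).detach().clamp(0.0, 1.0)

# toy usage with a throwaway classifier on 1-channel 32x32 "infrared" images
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))
image = torch.rand(1, 1, 32, 32)
label = torch.tensor([0])
mask = torch.zeros_like(image); mask[..., 8:24, 8:24] = 1.0
adv = masked_fgsm(model, image, label, mask=mask)
print((adv - image).abs().max().item())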
2108.06247 | Abhiram Gnanasambandam | Abhiram Gnanasambandam, Alex M. Sherman, Stanley H. Chan | Optical Adversarial Attack | ICCV Workshop 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce OPtical ADversarial attack (OPAD). OPAD is an adversarial attack
in the physical space aiming to fool image classifiers without physically
touching the objects (e.g., moving or painting the objects). The principle of
OPAD is to use structured illumination to alter the appearance of the target
objects. The system consists of a low-cost projector, a camera, and a computer.
The challenge of the problem is the non-linearity of the radiometric response
of the projector and the spatially varying spectral response of the scene.
Attacks generated in a conventional approach do not work in this setting unless
they are calibrated to compensate for such a projector-camera model. The
proposed solution incorporates the projector-camera model into the adversarial
attack optimization, where a new attack formulation is derived. Experimental
results prove the validity of the solution. It is demonstrated that OPAD can
optically attack a real 3D object in the presence of background lighting for
white-box, black-box, targeted, and untargeted attacks. Theoretical analysis is
presented to quantify the fundamental performance limit of the system.
| [
{
"version": "v1",
"created": "Fri, 13 Aug 2021 13:55:33 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Aug 2021 02:50:24 GMT"
}
] | 1,629,158,400,000 | [
[
"Gnanasambandam",
"Abhiram",
""
],
[
"Sherman",
"Alex M.",
""
],
[
"Chan",
"Stanley H.",
""
]
] |
2108.06405 | Javier Romero | Jorge Fandinno (2 and 3), Fran\c{c}ois Laferri\`ere (3), Javier Romero
(3), Torsten Schaub (3) and Tran Cao Son (1) ((1) New Mexico State
University, USA, (2) Omaha State University, USA, (3) University of Potsdam,
Germany) | Planning with Incomplete Information in Quantified Answer Set
Programming | Under consideration for publication in Theory and Practice of Logic
Programming (TPLP) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general approach to planning with incomplete information in
Answer Set Programming (ASP). More precisely, we consider the problems of
conformant and conditional planning with sensing actions and assumptions. We
represent planning problems using a simple formalism where logic programs
describe the transition function between states, the initial states and the
goal states. For solving planning problems, we use Quantified Answer Set
Programming (QASP), an extension of ASP with existential and universal
quantifiers over atoms that is analogous to Quantified Boolean Formulas (QBFs).
We define the language of quantified logic programs and use it to represent the
solutions to different variants of conformant and conditional planning. On the
practical side, we present a translation-based QASP solver that converts
quantified logic programs into QBFs and then executes a QBF solver, and we
evaluate experimentally the approach on conformant and conditional planning
benchmarks. Under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Fri, 13 Aug 2021 21:24:47 GMT"
}
] | 1,629,158,400,000 | [
[
"Fandinno",
"Jorge",
"",
"2 and 3"
],
[
"Laferrière",
"François",
""
],
[
"Romero",
"Javier",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Son",
"Tran Cao",
""
]
] |
2108.06481 | Taisuke Sato | Taisuke Sato (1) and Ryosuke Kojima (2) ((1) National Institute of
Informatics (NII), (2) Graduate School of Medicine, Kyoto University) | MatSat: a matrix-based differentiable SAT solver | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new approach to SAT solving which solves SAT problems in vector
spaces as a cost minimization problem of a non-negative differentiable cost
function J^sat. In our approach, a solution, i.e., satisfying assignment, for a
SAT problem in n variables is represented by a binary vector u in {0,1}^n that
makes J^sat(u) zero. We search for such u in a vector space R^n by cost
minimization, i.e., starting from an initial u_0 and minimizing J to zero while
iteratively updating u by Newton's method. We implemented our approach as a
matrix-based differentiable SAT solver MatSat. Although existing mainstream SAT
solvers decide each bit of a solution assignment one by one, be they of
conflict driven clause learning (CDCL) type or of stochastic local search (SLS)
type, MatSat fundamentally differs from them in that it continuously approaches a
solution in a vector space. We conducted an experiment to measure the
scalability of MatSat with random 3-SAT problems in which MatSat could find a
solution up to n=10^5 variables. We also compared MatSat with four
state-of-the-art SAT solvers including winners of SAT competition 2018 and SAT
Race 2019 in terms of time for finding a solution, using a random benchmark set
from SAT 2018 competition and an artificial random 3-SAT instance set. The
result shows that MatSat comes in second in both test sets and outperforms all
the CDCL type solvers.
| [
{
"version": "v1",
"created": "Sat, 14 Aug 2021 07:38:06 GMT"
}
] | 1,629,158,400,000 | [
[
"Sato",
"Taisuke",
""
],
[
"Kojima",
"Ryosuke",
""
]
] |
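A minimal numpy sketch in the spirit of the preceding record (arXiv 2108.06481): a non-negative differentiable cost over real-valued assignments that vanishes exactly at satisfying 0/1 assignments, minimized here by plain gradient descent with a numerical gradient rather than the paper's Newton-style matrix update. The cost function and the tiny formula are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# tiny 3-SAT instance; a literal is (variable index, sign), sign=+1 for x, -1 for NOT x
clauses = [[(0, +1), (1, -1), (2, +1)],
           [(0, -1), (1, +1), (3, +1)],
           [(1, -1), (2, -1), (3, -1)],
           [(0, +1), (2, -1), (3, +1)]]
n_vars = 4

def literal_value(u, var, sign):
    return u[var] if sign > 0 else 1.0 - u[var]

def cost(u):
    # zero iff every clause has a literal with value 1 and u is binary
    clause_cost = sum(np.prod([1.0 - literal_value(u, v, s) for v, s in c]) for c in clauses)
    binary_penalty = np.sum(u * (1.0 - u))
    return clause_cost + binary_penalty

def grad(u, eps=1e-6):
    # numerical gradient keeps the sketch short; the paper works with analytic matrix forms
    g = np.zeros_like(u)
    for i in range(len(u)):
        d = np.zeros_like(u); d[i] = eps
        g[i] = (cost(u + d) - cost(u - d)) / (2 * eps)
    return g

u = rng.uniform(0.25, 0.75, size=n_vars)
for _ in range(2000):
    u = np.clip(u - 0.2 * grad(u), 0.0, 1.0)
    if cost(np.round(u)) == 0.0:               # rounding already satisfies the formula
        break

assignment = np.round(u).astype(int)
print("assignment:", assignment, "cost:", cost(assignment))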