id (string) | submitter (string, nullable) | authors (string) | title (string) | comments (string, nullable) | journal-ref (string, nullable) | doi (string, nullable) | report-no (string, nullable) | categories (string class) | license (string class) | abstract (string) | versions (list) | update_date (int64, epoch ms) | authors_parsed (sequence)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
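Each record below follows this schema: `versions` is a list of `{version, created}` objects, `update_date` is an int64 epoch timestamp in milliseconds, and `authors_parsed` is a sequence of `[last, first, suffix]` triples. A minimal Python sketch of how one such record can be consumed (the sample values are copied from the first row below; the parsing code itself is illustrative, not part of the dataset):

```python
from datetime import datetime, timezone

# One record with the fields listed in the header above
# (values taken from the first row of the table).
record = {
    "id": "2108.06742",
    "versions": [{"version": "v1", "created": "Sun, 15 Aug 2021 13:37:29 GMT"}],
    "update_date": 1_629_158_400_000,   # epoch milliseconds
    "authors_parsed": [["Patel", "Archana", ""], ["Debnath", "Narayan C", ""]],
}

# update_date is stored as int64 milliseconds since the Unix epoch.
updated = datetime.fromtimestamp(record["update_date"] / 1000, tz=timezone.utc)
print(updated.date())  # 2021-08-17

# authors_parsed holds [last, first, suffix] triples; rebuild display names.
authors = [" ".join(filter(None, [first, last, suffix]))
           for last, first, suffix in record["authors_parsed"]]
print(authors)  # ['Archana Patel', 'Narayan C Debnath']

# versions[*].created uses RFC-822 style dates.
created = datetime.strptime(record["versions"][0]["created"],
                            "%a, %d %b %Y %H:%M:%S %Z")
print(created.isoformat())  # 2021-08-15T13:37:29
```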
2108.06742 | Archana Patel | Archana Patel and Narayan C Debnath | Development of the InBan_CIDO Ontology by Reusing the Concepts along
with Detecting Overlapping Information | 3rd International Conference on Inventive Computation and Information
Technologies (ICICIT 2021) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The covid19 pandemic is a global emergency that badly impacted the economies
of various countries. Covid19 hit India when the growth rate of the country was
at the lowest in the last 10 years. To semantically analyze the impact of this
pandemic on the economy, it is crucial to have an ontology. CIDO ontology is a
well standardized ontology that is specially designed to assess the impact of
coronavirus disease and utilize its results for future decision forecasting for
the government, industry experts, and professionals in the field of various
domains like research, medical advancement, technical innovative adoptions, and
so on. However, this ontology does not analyze the impact of the Covid19
pandemic on the Indian banking sector. On the other hand, the Covid19IBO ontology
has been developed to analyze the impact of the Covid19 pandemic on the Indian
banking sector, but this ontology does not reflect complete information about
Covid19 data. As a result, users cannot get all the relevant information about
Covid19 and its impact on the Indian economy. This article aims to extend the
CIDO ontology to show the impact of Covid19 on the Indian economy sector by
reusing the concepts from other data sources. We also provide a simplified
schema matching approach that detects the overlapping information among the
ontologies. The experimental analysis proves that the proposed approach has
reasonable results.
| [
{
"version": "v1",
"created": "Sun, 15 Aug 2021 13:37:29 GMT"
}
] | 1,629,158,400,000 | [
[
"Patel",
"Archana",
""
],
[
"Debnath",
"Narayan C",
""
]
] |
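The abstract above mentions a simplified schema-matching approach for detecting overlapping information among ontologies, without giving algorithmic detail. The toy sketch below shows one common way such overlap detection can be done (token-level label similarity); the `jaccard` function, the threshold, and the concept labels are illustrative assumptions, not the authors' method:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two concept labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def overlapping_concepts(onto_a, onto_b, threshold=0.5):
    """Return label pairs from two ontologies whose similarity exceeds the threshold."""
    return [(x, y) for x in onto_a for y in onto_b if jaccard(x, y) >= threshold]

# Invented concept labels, only to show the mechanics of overlap detection.
cido = ["Coronavirus Disease", "Economic Impact", "Government Policy"]
covid19ibo = ["Impact on Indian Banking", "Economic Impact of Covid19"]
print(overlapping_concepts(cido, covid19ibo))
# [('Economic Impact', 'Economic Impact of Covid19')]
```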
2108.07119 | Filip Ilievski | Hans Chalupsky, Pedro Szekely, Filip Ilievski, Daniel Garijo and
Kartik Shenoy | Creating and Querying Personalized Versions of Wikidata on a Laptop | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Application developers today have three choices for exploiting the knowledge
present in Wikidata: they can download the Wikidata dumps in JSON or RDF
format, they can use the Wikidata API to get data about individual entities, or
they can use the Wikidata SPARQL endpoint. None of these methods can support
complex, yet common, query use cases, such as retrieval of large amounts of
data or aggregations over large fractions of Wikidata. This paper introduces
KGTK Kypher, a query language and processor that allows users to create
personalized variants of Wikidata on a laptop. We present several use cases
that illustrate the types of analyses that Kypher enables users to run on the
full Wikidata KG on a laptop, combining data from external resources such as
DBpedia. The Kypher queries for these use cases run much faster on a laptop
than the equivalent SPARQL queries on a Wikidata clone running on a powerful
server with 24h time-out limits.
| [
{
"version": "v1",
"created": "Fri, 6 Aug 2021 00:00:33 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Aug 2021 06:31:15 GMT"
}
] | 1,629,331,200,000 | [
[
"Chalupsky",
"Hans",
""
],
[
"Szekely",
"Pedro",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Garijo",
"Daniel",
""
],
[
"Shenoy",
"Kartik",
""
]
] |
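The abstract above contrasts three ways of exploiting Wikidata: dumps, the entity API, and the SPARQL endpoint. The sketch below illustrates the latter two, assuming network access to the public endpoints; the aggregation query is the kind of whole-graph query that typically runs into the endpoint's time-out, which is the gap Kypher is designed to fill:

```python
import requests

# Option 2: the Wikidata API, one entity at a time.
entity = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={"action": "wbgetentities", "ids": "Q42", "format": "json"},
).json()
print(entity["entities"]["Q42"]["labels"]["en"]["value"])  # "Douglas Adams"

# Option 3: the SPARQL endpoint; aggregations over large fractions of the
# graph (here: humans per country of citizenship) often exceed the time-out.
query = """
SELECT ?country (COUNT(?person) AS ?n) WHERE {
  ?person wdt:P31 wd:Q5 ;
          wdt:P27 ?country .
}
GROUP BY ?country
ORDER BY DESC(?n)
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
)
print(resp.status_code)
```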
2108.08227 | Thomas Hinrichs | Tom Hinrichs, Greg Dunham and Ken Forbus | Analogical Learning in Tactical Decision Games | 6 pages, 2 figures, unpublished | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Tactical Decision Games (TDGs) are military conflict scenarios presented both
textually and graphically on a map. These scenarios provide a challenging
domain for machine learning because they are open-ended, highly structured, and
typically contain many details of varying relevance. We have developed a
problem-solving component of an interactive companion system that proposes
military tasks to solve TDG scenarios using a combination of analogical
retrieval, mapping, and constraint propagation. We use this problem-solving
component to explore analogical learning.
In this paper, we describe the problems encountered in learning for this
domain, and the methods we have developed to address these, such as partition
constraints on analogical mapping correspondences and the use of incremental
remapping to improve robustness. We present the results of learning experiments
that show improvement in performance through the simple accumulation of
examples, despite a weak domain theory.
| [
{
"version": "v1",
"created": "Wed, 18 Aug 2021 16:35:43 GMT"
}
] | 1,629,331,200,000 | [
[
"Hinrichs",
"Tom",
""
],
[
"Dunham",
"Greg",
""
],
[
"Forbus",
"Ken",
""
]
] |
2108.08234 | Andrea Bontempelli | Fausto Giunchiglia, Marcelo Rodas Britez, Andrea Bontempelli, Xiaoyue
Li | Streaming and Learning the Personal Context | 9 pages, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The representation of the personal context is complex and essential to
improve the help machines can give to humans for making sense of the world, and
the help humans can give to machines to improve their efficiency. We aim to
design a novel model representation of the personal context and design a
learning process for better integration with machine learning. We aim to
implement these elements into a modern system architecture focused on real-life
environments. We also show how our proposal improves on closely related work.
Finally, we are moving forward with a better personal
context representation with an improved model, the implementation of the
learning process, and the architectural design of these components.
| [
{
"version": "v1",
"created": "Wed, 18 Aug 2021 16:55:12 GMT"
}
] | 1,629,331,200,000 | [
[
"Giunchiglia",
"Fausto",
""
],
[
"Britez",
"Marcelo Rodas",
""
],
[
"Bontempelli",
"Andrea",
""
],
[
"Li",
"Xiaoyue",
""
]
] |
2108.08297 | Yao Zhang | Yao Zhang, Peiyao Li, Hongru Liang, Adam Jatowt, Zhenglu Yang | Fact-Tree Reasoning for N-ary Question Answering over Knowledge Graphs | ACL 2022 (Findings) | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In the question answering (QA) task, the multi-hop reasoning framework has been
extensively studied in recent years to perform more efficient and interpretable
answer reasoning on the Knowledge Graph (KG). However, multi-hop reasoning is
inapplicable for answering n-ary fact questions due to its linear reasoning
nature. We discover that there are two feasible improvements: 1) upgrade the
basic reasoning unit from entity or relation to fact; and 2) upgrade the
reasoning structure from chain to tree. Based on these, we propose a novel
fact-tree reasoning framework, which transforms the question into a fact
tree and performs iterative fact reasoning on it to predict the correct
answer. Through a comprehensive evaluation on the n-ary fact KGQA dataset
introduced by this work, we demonstrate that the proposed fact-tree reasoning
framework has the desired advantage of high answer prediction accuracy. In
addition, we also evaluate the fact-tree reasoning framework on two binary KGQA
datasets and show that our approach also has a strong reasoning ability
compared with several excellent baselines. This work has direct implications
for exploring complex reasoning scenarios and provides a preliminary baseline
approach.
| [
{
"version": "v1",
"created": "Tue, 17 Aug 2021 13:27:49 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 02:13:29 GMT"
}
] | 1,647,302,400,000 | [
[
"Zhang",
"Yao",
""
],
[
"Li",
"Peiyao",
""
],
[
"Liang",
"Hongru",
""
],
[
"Jatowt",
"Adam",
""
],
[
"Yang",
"Zhenglu",
""
]
] |
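To make the fact-tree idea above concrete (reasoning units are facts rather than entities or relations, arranged in a tree rather than a chain), here is a small, hypothetical sketch of a fact tree resolved bottom-up; the node structure, the `lookup` function, and the toy facts are illustrative assumptions, not the paper's model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FactNode:
    """One node of the fact tree: a (possibly n-ary) fact with open argument slots."""
    relation: str
    arguments: dict                          # role -> entity, or None if unresolved
    children: List["FactNode"] = field(default_factory=list)

def resolve(node: FactNode, lookup) -> Optional[dict]:
    """Resolve child facts first, propagate their answers up, then query this fact."""
    for child in node.children:
        answer = resolve(child, lookup)
        if answer:
            node.arguments.update(answer)
    return lookup(node.relation, node.arguments)

# Toy knowledge lookup over an n-ary fact educated_at(person, institution, ...).
def lookup(relation, args):
    facts = {
        ("person_named", "Ada Lovelace"): {"person": "Ada Lovelace"},
        ("educated_at", "Ada Lovelace"): {"institution": "home tutoring"},
    }
    return facts.get((relation, args.get("person")))

root = FactNode("educated_at", {"person": None, "institution": None},
                children=[FactNode("person_named", {"person": "Ada Lovelace"})])
print(resolve(root, lookup))    # {'institution': 'home tutoring'}
```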
2108.08615 | Marco Pegoraro | Marco Pegoraro, Bianka Bakullari, Merih Seran Uysal, Wil M.P. van der
Aalst | Probability Estimation of Uncertain Process Trace Realizations | 12 pages, 7 figures, 4 tables, 11 references | ICPM Workshops (2021) 21-33 | 10.1007/978-3-030-98581-3_2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process mining is a scientific discipline that analyzes event data, often
collected in databases called event logs. Recently, uncertain event logs have
become of interest, which contain non-deterministic and stochastic event
attributes that may represent many possible real-life scenarios. In this paper,
we present a method to reliably estimate the probability of each of such
scenarios, allowing their analysis. Experiments show that the probabilities
calculated with our method closely match the true chances of occurrence of
specific outcomes, enabling more trustworthy analyses on uncertain data.
| [
{
"version": "v1",
"created": "Thu, 19 Aug 2021 10:50:52 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Aug 2021 04:35:02 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Sep 2021 13:28:24 GMT"
}
] | 1,649,116,800,000 | [
[
"Pegoraro",
"Marco",
""
],
[
"Bakullari",
"Bianka",
""
],
[
"Uysal",
"Merih Seran",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
] |
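The abstract above is about estimating the probability of each possible realization of an uncertain trace. Assuming, purely for illustration, independent uncertain activity labels with known probabilities, the realizations and their probabilities can be enumerated as follows (a toy sketch, not the paper's estimation method):

```python
from itertools import product
from math import prod

# Each uncertain event maps its possible activity labels to probabilities.
uncertain_trace = [
    {"register": 1.0},                      # certain event
    {"check": 0.7, "skip_check": 0.3},      # uncertain activity label
    {"approve": 0.6, "reject": 0.4},        # uncertain activity label
]

realizations = []
for labels in product(*(e.items() for e in uncertain_trace)):
    activities = tuple(name for name, _ in labels)
    probability = prod(p for _, p in labels)
    realizations.append((activities, probability))

for trace, p in sorted(realizations, key=lambda r: -r[1]):
    print(f"{p:.2f}  {' -> '.join(trace)}")

# The probabilities of all realizations sum to 1.
assert abs(sum(p for _, p in realizations) - 1.0) < 1e-9
```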
2108.09003 | Francisco Cruz | Richard Dazeley, Peter Vamplew, Francisco Cruz | Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework
and Survey | 22 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Broad Explainable Artificial Intelligence moves away from interpreting
individual decisions based on a single datum and aims to provide integrated
explanations from multiple machine learning algorithms into a coherent
explanation of an agent's behaviour that is aligned to the communication needs
of the explainee. Reinforcement Learning (RL) methods, we propose, provide a
potential backbone for the cognitive model required for the development of
Broad-XAI. RL represents a suite of approaches that have had increasing success
in solving a range of sequential decision-making problems. However, these
algorithms all operate as black-box problem solvers, where they obfuscate their
decision-making policy through a complex array of values and functions.
EXplainable RL (XRL) is a relatively recent field of research that aims to
develop techniques to extract concepts from the agent's: perception of the
environment; intrinsic/extrinsic motivations/beliefs; Q-values, goals and
objectives. This paper aims to introduce a conceptual framework, called the
Causal XRL Framework (CXF), that unifies the current XRL research and uses RL
as a backbone to the development of Broad-XAI. Additionally, we recognise that
RL methods have the ability to incorporate a range of technologies to allow
agents to adapt to their environment. CXF is designed for the incorporation of
many standard RL extensions and integrated with external ontologies and
communication facilities so that the agent can answer questions that explain
outcomes and justify its decisions.
| [
{
"version": "v1",
"created": "Fri, 20 Aug 2021 05:18:50 GMT"
}
] | 1,629,676,800,000 | [
[
"Dazeley",
"Richard",
""
],
[
"Vamplew",
"Peter",
""
],
[
"Cruz",
"Francisco",
""
]
] |
2108.09372 | Archana Patel | Archana Patel, Sarika Jain, Narayan C. Debnath, Vishal Lama | InBiodiv-O: An Ontology for Indian Biodiversity Knowledge Management | This paper has been withdrawn by the author due to many grammatical
errors, and inconsistent content | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | To present the biodiversity information, a semantic model is required that
connects all kinds of data about living creatures and their habitats. The model
must be able to encode human knowledge for machines to be understood. Ontology
offers the richest machine-interpretable (rather than just machine-processable)
and explicit semantics that are being extensively used in the biodiversity
domain. Various ontologies have been developed for the biodiversity domain; however, a
review of the current landscape shows that these ontologies are not capable of
defining Indian biodiversity information, even though India is one of the
megadiverse countries. To semantically analyze the Indian biodiversity
information, it is crucial to build an ontology that describes all the
essential terms of this domain from the unstructured format of the data
available on the web. Since the curation of ontologies heavily depends on
the domain where they are implemented, no ideal methodology has yet been
defined that is ready for universal use. The aim of this article is to
develop an ontology that semantically encodes all the terms of Indian
biodiversity information in all its dimensions based on the proposed
methodology. The comprehensive evaluation of the proposed ontology shows that the
ontology is well built for the specified domain.
| [
{
"version": "v1",
"created": "Fri, 20 Aug 2021 21:07:46 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Oct 2022 08:10:43 GMT"
}
] | 1,667,174,400,000 | [
[
"Patel",
"Archana",
""
],
[
"Jain",
"Sarika",
""
],
[
"Debnath",
"Narayan C.",
""
],
[
"Lama",
"Vishal",
""
]
] |
2108.09443 | Samira Ghodratnama | Samira Ghodratnama | Towards Personalized and Human-in-the-Loop Document Summarization | PhD thesis | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ubiquitous availability of computing devices and the widespread use of
the internet have generated a large amount of data continuously. Therefore, the
amount of available information on any given topic is far beyond humans'
capacity to properly process it, causing what is known as information
overload. To efficiently cope with large amounts of information and generate
content with significant value to users, we require identifying, merging and
summarising information. Data summaries can help gather related information and
collect it into a shorter format that enables answering complicated questions,
gaining new insight and discovering conceptual boundaries.
This thesis focuses on three main challenges to alleviate information
overload using novel summarisation techniques. It further intends to facilitate
the analysis of documents to support personalised information extraction. This
thesis separates the research issues into four areas, covering (i) feature
engineering in document summarisation, (ii) traditional static and inflexible
summaries, (iii) traditional generic summarisation approaches, and (iv) the
need for reference summaries. We propose novel approaches to tackle these
challenges, by: i) enabling automatic intelligent feature engineering, ii)
enabling flexible and interactive summarisation, iii) utilising intelligent and
personalised summarisation approaches. The experimental results prove the
efficiency of the proposed approaches compared to other state-of-the-art
models. We further propose solutions to the information overload problem in
different domains through summarisation, covering network traffic data, health
data and business process data.
| [
{
"version": "v1",
"created": "Sat, 21 Aug 2021 05:34:46 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Oct 2021 00:57:27 GMT"
}
] | 1,689,033,600,000 | [
[
"Ghodratnama",
"Samira",
""
]
] |
2108.09586 | Pulkit Verma | Pulkit Verma, Siddharth Srivastava | Learning Causal Models of Autonomous Agents using Interventions | IJCAI 2021 Workshop on Generalization in Planning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the several obstacles in the widespread use of AI systems is the lack
of requirements of interpretability that can enable a layperson to ensure the
safe and reliable behavior of such systems. We extend the analysis of an agent
assessment module that lets an AI system execute high-level instruction
sequences in simulators and answer the user queries about its execution of
sequences of actions. We show that such a primitive query-response capability
is sufficient to efficiently derive a user-interpretable causal model of the
system in stationary, fully observable, and deterministic settings. We also
introduce dynamic causal decision networks (DCDNs) that capture the causal
structure of STRIPS-like domains. A comparative analysis of different classes
of queries is also presented in terms of the computational requirements needed
to answer them and the efforts required to evaluate their responses to learn
the correct model.
| [
{
"version": "v1",
"created": "Sat, 21 Aug 2021 21:33:26 GMT"
}
] | 1,629,763,200,000 | [
[
"Verma",
"Pulkit",
""
],
[
"Srivastava",
"Siddharth",
""
]
] |
2108.09628 | Junkang Wu | Junkang Wu, Wentao Shi, Xuezhi Cao, Jiawei Chen, Wenqiang Lei, Fuzheng
Zhang, Wei Wu and Xiangnan He | DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention
Network | CIKM2021 | null | 10.1145/3459637.3482424 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph completion (KGC) has become a focus of attention across the deep
learning community owing to its excellent contribution to numerous downstream
tasks. Although recent years have witnessed a surge of work on KGC, existing methods are still
insufficient to accurately capture complex relations, since they adopt
single and static representations. In this work, we propose a novel
Disentangled Knowledge Graph Attention Network (DisenKGAT) for KGC, which
leverages both micro-disentanglement and macro-disentanglement to exploit
representations behind Knowledge graphs (KGs). To achieve
micro-disentanglement, we put forward a novel relation-aware aggregation to
learn diverse component representation. For macro-disentanglement, we leverage
mutual information as a regularization to enhance independence. With the
assistance of disentanglement, our model is able to generate adaptive
representations in terms of the given scenario. Besides, our work has strong
robustness and flexibility to adapt to various score functions. Extensive
experiments on public benchmark datasets have been conducted to validate the
superiority of DisenKGAT over existing methods in terms of both accuracy and
explainability.
| [
{
"version": "v1",
"created": "Sun, 22 Aug 2021 04:10:35 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Oct 2021 12:26:09 GMT"
}
] | 1,633,996,800,000 | [
[
"Wu",
"Junkang",
""
],
[
"Shi",
"Wentao",
""
],
[
"Cao",
"Xuezhi",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Lei",
"Wenqiang",
""
],
[
"Zhang",
"Fuzheng",
""
],
[
"Wu",
"Wei",
""
],
[
"He",
"Xiangnan",
""
]
] |
2108.09988 | Jiongzhi Zheng | Jiongzhi Zheng and Kun He and Jianrong Zhou | Farsighted Probabilistic Sampling: A General Strategy for Boosting Local
Search MaxSAT Solvers | Accepted by AAAI 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Local search has been demonstrated as an efficient approach for two practical
generalizations of the MaxSAT problem, namely Partial MaxSAT (PMS) and Weighted
PMS (WPMS). In this work, we observe that most local search (W)PMS solvers
usually flip a single variable per iteration. Such a mechanism may lead to
relatively low-quality local optimal solutions, and may limit the diversity of
search directions to escape from local optima. To address this issue, we
propose a general strategy, called farsighted probabilistic sampling (FPS), to
replace the single flipping mechanism so as to boost the local search (W)PMS
algorithms. FPS considers the benefit of continuously flipping a pair of
variables in order to find higher-quality local optimal solutions. Moreover,
FPS proposes an effective approach to escape from local optima by preferring
the best to flip among the best sampled single variable and the best sampled
variable pair. Extensive experiments demonstrate that our proposed FPS strategy
significantly improves the state-of-the-art (W)PMS solvers, and FPS has an
excellent generalization capability to various local search MaxSAT solvers.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 07:41:56 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Nov 2021 09:41:55 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jan 2022 16:26:33 GMT"
},
{
"version": "v4",
"created": "Sat, 2 Jul 2022 09:46:56 GMT"
},
{
"version": "v5",
"created": "Fri, 25 Nov 2022 12:03:29 GMT"
}
] | 1,669,593,600,000 | [
[
"Zheng",
"Jiongzhi",
""
],
[
"He",
"Kun",
""
],
[
"Zhou",
"Jianrong",
""
]
] |
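The abstract above contrasts single-variable flips with farsighted pair flips chosen by sampling. The sketch below illustrates that selection step on a toy weighted clause set; the clause representation, the `cost` function, and the sampling budget are illustrative assumptions, not the authors' solver:

```python
import random

def cost(assignment, clauses):
    """Total weight of clauses left unsatisfied by the assignment."""
    return sum(w for w, lits in clauses
               if not any(assignment[abs(l)] == (l > 0) for l in lits))

def fps_step(assignment, clauses, n_samples=10):
    """Pick the best move among sampled single flips and sampled pair flips."""
    variables = list(assignment)
    best = (cost(assignment, clauses), assignment)
    for _ in range(n_samples):
        # Sampled single-variable flip.
        v = random.choice(variables)
        single = {**assignment, v: not assignment[v]}
        # Sampled pair flip: continuously flipping a second variable can reach
        # assignments that no single flip improves on (the farsighted step).
        u = random.choice(variables)
        pair = {**single, u: not single[u]}
        for candidate in (single, pair):
            c = cost(candidate, clauses)
            if c < best[0]:
                best = (c, candidate)
    return best[1]

# Weighted clauses: (weight, literals); positive literal i means variable i is True.
clauses = [(5, [1, 2]), (3, [-1, 3]), (2, [-2, -3])]
assignment = {1: False, 2: False, 3: False}
assignment = fps_step(assignment, clauses)
print(assignment, cost(assignment, clauses))
```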
2108.09996 | Ping-Yang Chen | Jun-Wei Hsieh, Ming-Ching Chang, Ping-Yang Chen, Santanu Santra,
Cheng-Han Chou, Chih-Sheng Huang | MS-DARTS: Mean-Shift Based Differentiable Architecture Search | 14pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Differentiable Architecture Search (DARTS) is an effective continuous
relaxation-based network architecture search (NAS) method with low search cost.
It has attracted significant attention in Auto-ML research and has become one of
the most useful paradigms in NAS. Although DARTS can produce superior
efficiency over traditional NAS approaches with better control of complex
parameters, oftentimes it suffers from stabilization issues in producing
deteriorating architectures when discretizing the continuous architecture. We
observed considerable loss of validity causing dramatic decline in performance
at this final discretization step of DARTS. To address this issue, we propose a
Mean-Shift based DARTS (MS-DARTS) to improve stability based on sampling and
perturbation. Our approach can improve both the stability and accuracy of DARTS,
by smoothing the loss landscape and sampling architecture parameters within a
suitable bandwidth. We investigate the convergence of our mean-shift approach,
together with the effects of bandwidth selection that affects stability and
accuracy. Evaluations performed on CIFAR-10, CIFAR-100, and ImageNet show that
MS-DARTS achieves higher performance over other state-of-the-art NAS methods
with reduced search cost.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 08:06:45 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Aug 2021 06:49:46 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Sep 2021 05:20:03 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Mar 2022 10:21:14 GMT"
}
] | 1,646,870,400,000 | [
[
"Hsieh",
"Jun-Wei",
""
],
[
"Chang",
"Ming-Ching",
""
],
[
"Chen",
"Ping-Yang",
""
],
[
"Santra",
"Santanu",
""
],
[
"Chou",
"Cheng-Han",
""
],
[
"Huang",
"Chih-Sheng",
""
]
] |
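The abstract above attributes DARTS' instability to the final discretization step and proposes mean-shift smoothing of the architecture parameters within a bandwidth. A conceptual sketch of one such update is shown below; the quadratic stand-in loss and the kernel weighting are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def mean_shift_step(alpha, val_loss, bandwidth=0.1, n_samples=32, rng=None):
    """One mean-shift update of architecture parameters `alpha`.

    Samples perturbed parameter vectors within `bandwidth`, weights them by a
    kernel on their validation loss, and moves alpha toward the weighted mean,
    which smooths the loss landscape seen at discretization time.
    """
    rng = rng or np.random.default_rng(0)
    samples = alpha + rng.uniform(-bandwidth, bandwidth, size=(n_samples, alpha.size))
    losses = np.array([val_loss(s) for s in samples])
    weights = np.exp(-(losses - losses.min()))      # lower loss -> larger weight
    return weights @ samples / weights.sum()

# Stand-in validation loss with a sharp optimum (not a real supernet).
val_loss = lambda a: float(np.sum((a - 0.3) ** 2))
alpha = np.zeros(4)
for _ in range(50):
    alpha = mean_shift_step(alpha, val_loss)
print(np.round(alpha, 2))   # drifts toward the low-loss region around 0.3
```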
2108.10005 | Jitendra Kumar | Pooja Tiwari, Simran Mehta, Nishtha Sakhuja, Jitendra Kumar, Ashutosh
Kumar Singh | Credit Card Fraud Detection using Machine Learning: A Study | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As the world is rapidly moving towards digitization and money transactions
are becoming cashless, the use of credit cards has rapidly increased. The fraud
activities associated with it have also been increasing, which leads to huge
losses for financial institutions. Therefore, we need to analyze and detect
fraudulent transactions from the non-fraudulent ones. In this paper, we
present a comprehensive review of various methods used to detect credit card
fraud. These methodologies include Hidden Markov Model, Decision Trees,
Logistic Regression, Support Vector Machines (SVM), Genetic algorithm, Neural
Networks, Random Forests, Bayesian Belief Network. A comprehensive analysis of
various techniques is presented. We conclude the paper with the pros and cons
of the same as stated in the respective papers.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 08:30:24 GMT"
}
] | 1,629,763,200,000 | [
[
"Tiwari",
"Pooja",
""
],
[
"Mehta",
"Simran",
""
],
[
"Sakhuja",
"Nishtha",
""
],
[
"Kumar",
"Jitendra",
""
],
[
"Singh",
"Ashutosh Kumar",
""
]
] |
2108.10021 | Federico Croce | Gianluca Cima, Federico Croce, Maurizio Lenzerini | QDEF and Its Approximations in OBDM | A more compact version of this paper will be published at the
proceedings of the 30th ACM International Conference on Information and
Knowledge Management. The associated DOI is:
https://doi.org/10.1145/3459637.34824661 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Given an input dataset (i.e., a set of tuples), query definability in
Ontology-based Data Management (OBDM) amounts to finding a query over the ontology
whose certain answers coincide with the tuples in the given dataset. We refer
to such a query as a characterization of the dataset with respect to the OBDM
system. Our first contribution is to propose approximations of perfect
characterizations in terms of recall (complete characterizations) and precision
(sound characterizations). A second contribution is to present a thorough
complexity analysis of three computational problems, namely verification (check
whether a given query is a perfect, or an approximated characterization of a
given dataset), existence (check whether a perfect, or a best approximated
characterization of a given dataset exists), and computation (compute a
perfect, or best approximated characterization of a given dataset).
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 09:14:11 GMT"
}
] | 1,629,763,200,000 | [
[
"Cima",
"Gianluca",
""
],
[
"Croce",
"Federico",
""
],
[
"Lenzerini",
"Maurizio",
""
]
] |
2108.10125 | Jitendra Kumar | Harsh Mittal, Deepak Rikhari, Jitendra Kumar, Ashutosh Kumar Singh | A study on Machine Learning Approaches for Player Performance and Match
Results Prediction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cricket is unarguably one of the most popular sports in the world. Predicting
the outcome of a cricket match has become a fundamental problem as we are
advancing in the field of machine learning. Multiple researchers have tried to
predict the outcome of a cricket match or a tournament, or to predict the
performance of players during a match, or to predict the players who should be
selected as per their current performance, form, morale, etc. using machine
learning and artificial intelligence techniques keeping in mind extensive
detailing, features, and parameters. We discuss some of these techniques along
with a brief comparison among these techniques.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 12:49:57 GMT"
}
] | 1,629,763,200,000 | [
[
"Mittal",
"Harsh",
""
],
[
"Rikhari",
"Deepak",
""
],
[
"Kumar",
"Jitendra",
""
],
[
"Singh",
"Ashutosh Kumar",
""
]
] |
2108.10141 | Joseph Ramsey | Joseph D. Ramsey | Improving Accuracy of Permutation DAG Search using Best Order Score
Search | 25 pages, 12 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Sparsest Permutation (SP) algorithm is accurate but limited to about 9
variables in practice; the Greedy Sparsest Permutation (GSP) algorithm is faster
but less weak theoretically. A compromise can be given, the Best Order Score
Search, which gives results as accurate as SP but for much larger and denser
graphs. BOSS (Best Order Score Search) is more accurate for two reasons: (a) It
assumes the "brute faithfulness" assumption, which is weaker than faithfulness,
and (b) it uses a different traversal of permutations than the depth first
traversal used by GSP, obtained by taking each variable in turn and moving it
to the position in the permutation that optimizes the model score. Results are
given comparing BOSS to several related papers in the literature in terms of
performance, for linear, Gaussian data. In all cases, with the proper parameter
settings, accuracy of BOSS is lifted considerably with respect to competing
approaches. In configurations tested, models with 60 variables are feasible
with large samples out to about an average degree of 12 in reasonable time,
with near-perfect accuracy, and sparse models with an average degree of 4 are
feasible out to about 300 variables on a laptop, again with near-perfect
accuracy. Mixed continuous discrete and all-discrete datasets were also tested.
The mixed data analysis showed advantage for BOSS over GES more apparent at
higher depths with the same score; the discrete data analysis showed a very
small advantage for BOSS over GES with the same score, perhaps not enough to
prefer it.
| [
{
"version": "v1",
"created": "Tue, 17 Aug 2021 13:46:34 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Sep 2021 18:06:57 GMT"
}
] | 1,630,627,200,000 | [
[
"Ramsey",
"Joseph D.",
""
]
] |
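The permutation traversal described above (take each variable in turn and move it to the position that optimizes the model score) can be sketched directly. The `score` callable below is a toy stand-in for the model score used in permutation DAG search; the rest is an illustrative reading of the description, not the released implementation:

```python
def best_order_search(variables, score):
    """Greedy BOSS-style traversal: repeatedly move each variable to the
    position in the permutation that maximizes `score`, until no move helps."""
    order = list(variables)
    improved = True
    while improved:
        improved = False
        for v in list(order):
            base = order.index(v)
            candidates = []
            for pos in range(len(order)):
                trial = [x for x in order if x != v]
                trial.insert(pos, v)
                candidates.append((score(trial), pos, trial))
            best_score, best_pos, best_order = max(candidates)
            if best_pos != base and best_score > score(order):
                order = best_order
                improved = True
    return order

# Toy score: prefer alphabetical order (a stand-in for a real model score).
toy_score = lambda perm: -sum(i for i, v in enumerate(perm) if sorted(perm)[i] != v)
print(best_order_search(["c", "a", "d", "b"], toy_score))  # ['a', 'b', 'c', 'd']
```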
2108.10168 | Aishwarya N | Aishwarya Narasimhan (1), Krishna Prasad Agara Venkatesha Rao (2),
Veena M B (1) ((1) B M S College of Engineering, (2) Sony India Software
Centre Pvt. Ltd.) | CGEMs: A Metric Model for Automatic Code Generation using GPT-3 | 11 pages, 6 figures, 2 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Today, AI technology is showing its strengths in almost every industry and
walk of life. From text generation and summarization to chatbots, NLP is being
used widely. One such paradigm is automatic code generation. An AI could be
generating anything; hence the output space is unconstrained. A self-driving
car is driven for 100 million miles to validate its safety, but tests cannot be
written to monitor and cover an unconstrained space. One of the solutions to
validate AI-generated content is to constrain the problem and convert it from
abstract to realistic, and this can be accomplished by either validating the
unconstrained algorithm using theoretical proofs or by using Monte-Carlo
simulation methods. In this case, we use the latter approach to test/validate a
statistically significant number of samples. This hypothesis of validating the
AI-generated code is the main motive of this work and to know if AI-generated
code is reliable, a metric model CGEMs is proposed. This is an extremely
challenging task as programs can have different logic with different naming
conventions, but the metrics must capture the structure and logic of the
program. This is similar to the importance grammar carries in AI-based text
generation, Q&A, translations, etc. The various metrics that are garnered in
this work to support the evaluation of generated code are as follows:
Compilation, NL description to logic conversion, number of edits needed, some
of the commonly used static-code metrics and NLP metrics. These metrics are
applied to 80 code samples generated using OpenAI's GPT-3. A neural network
is then designed for binary classification (acceptable/not acceptable quality of the
generated code). The inputs to this network are the values of the features
obtained from the metrics. The model achieves a classification accuracy of
76.92% and an F1 score of 55.56%. XAI is augmented for model interpretability.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 13:28:57 GMT"
}
] | 1,629,763,200,000 | [
[
"Narasimhan",
"Aishwarya",
""
],
[
"Rao",
"Krishna Prasad Agara Venkatesha",
""
],
[
"B",
"Veena M",
""
]
] |
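Among the metrics listed above, compilation and simple static-code measures are the easiest to reproduce. A hypothetical sketch of a tiny feature extractor for a generated Python snippet follows; the feature set is illustrative and not the paper's exact CGEMs:

```python
import ast

def code_features(source: str) -> dict:
    """Extract a few static features from a generated code snippet."""
    features = {"loc": len(source.strip().splitlines())}
    try:
        tree = ast.parse(source)          # the "does it compile" metric
        features["compiles"] = True
        features["num_functions"] = sum(isinstance(n, ast.FunctionDef)
                                        for n in ast.walk(tree))
        features["num_branches"] = sum(isinstance(n, (ast.If, ast.For, ast.While))
                                       for n in ast.walk(tree))
    except SyntaxError:
        features.update(compiles=False, num_functions=0, num_branches=0)
    return features

generated = "def fizzbuzz(n):\n    for i in range(1, n + 1):\n        print('fizz' if i % 3 == 0 else i)\n"
print(code_features(generated))
# {'loc': 3, 'compiles': True, 'num_functions': 1, 'num_branches': 1}
```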
2108.10363 | Manil Shrestha | Rosina Weber, Manil Shrestha, Adam J Johs | Knowledge-based XAI through CBR: There is more to explanations than
models can tell | 12 pages, 8 figures. This paper was accepted at workshop XCBR:
Case-Based Reasoning for the Explanation of Intelligent Systems at ICCBR 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The underlying hypothesis of knowledge-based explainable artificial
intelligence is that the data required for data-centric artificial intelligence
agents (e.g., neural networks) are less diverse in content than the data
required to explain the decisions of such agents to humans. The idea is that a
classifier can attain high accuracy using data that express a phenomenon from
one perspective whereas the audience of explanations can entail multiple
stakeholders and span diverse perspectives. We hence propose to use domain
knowledge to complement the data used by agents. We formulate knowledge-based
explainable artificial intelligence as a supervised data classification problem
aligned with the CBR methodology. In this formulation, the inputs are case
problems, composed of both the inputs and outputs of the data-centric agent, and
the case solutions (the outputs) are explanation categories obtained from domain
knowledge and subject matter experts. This formulation does not typically lead
to an accurate classification, preventing the selection of the correct
explanation category. Knowledge-based explainable artificial intelligence
extends the data in this formulation by adding features aligned with domain
knowledge that can increase accuracy when selecting explanation categories.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 19:01:43 GMT"
}
] | 1,629,849,600,000 | [
[
"Weber",
"Rosina",
""
],
[
"Shrestha",
"Manil",
""
],
[
"Johs",
"Adam J",
""
]
] |
2108.10437 | Prateek Goel | Rosina O. Weber, Prateek Goel, Shideh Amiri, and Gideon Simpson | Longitudinal Distance: Towards Accountable Instance Attribution | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Previous research in interpretable machine learning (IML) and explainable
artificial intelligence (XAI) can be broadly categorized as either focusing on
seeking interpretability in the agent's model (i.e., IML) or focusing on the
context of the user in addition to the model (i.e., XAI). The former can be
categorized as feature or instance attribution. Example- or sample-based
methods such as those using or inspired by case-based reasoning (CBR) rely on
various approaches to select instances that are not necessarily attributing
instances responsible for an agent's decision. Furthermore, existing approaches
have focused on interpretability and explainability but fall short when it
comes to accountability. Inspired by case-based reasoning principles, this
paper introduces a pseudo-metric we call Longitudinal distance and its use to
attribute instances to a neural network agent's decision that can be
potentially used to build accountable CBR agents.
| [
{
"version": "v1",
"created": "Mon, 23 Aug 2021 22:50:23 GMT"
}
] | 1,629,849,600,000 | [
[
"Weber",
"Rosina O.",
""
],
[
"Goel",
"Prateek",
""
],
[
"Amiri",
"Shideh",
""
],
[
"Simpson",
"Gideon",
""
]
] |
2108.10818 | Yemin Shi | Gang Yu, Zhongzhi Yu, Yemin Shi, Yingshuo Wang, Xiaoqing Liu, Zheming
Li, Yonggen Zhao, Fenglei Sun, Yizhou Yu, Qiang Shu | Identification of Pediatric Respiratory Diseases Using Fine-grained
Diagnosis System | null | Journal of Biomedical Informatics, 2021, 117: 103754 | 10.1016/j.jbi.2021.103754 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Respiratory diseases, including asthma, bronchitis, pneumonia, and upper
respiratory tract infection (RTI), are among the most common diseases in
clinics. The similarities among the symptoms of these diseases preclude prompt
diagnosis upon the patients' arrival. In pediatrics, the patients' limited
ability in expressing their situation makes precise diagnosis even harder. This
becomes worse in primary hospitals, where the lack of medical imaging devices
and the doctors' limited experience further increase the difficulty of
distinguishing among similar diseases. In this paper, a pediatric fine-grained
diagnosis-assistant system is proposed to provide prompt and precise diagnosis
using solely clinical notes upon admission, which would assist clinicians
without changing the diagnostic process. The proposed system consists of two
stages: a test result structuralization stage and a disease identification
stage. The first stage structuralizes test results by extracting relevant
numerical values from clinical notes, and the disease identification stage
provides a diagnosis based on text-form clinical notes and the structured data
obtained from the first stage. A novel deep learning algorithm was developed
for the disease identification stage, where techniques including adaptive
feature infusion and multi-modal attentive fusion were introduced to fuse
structured and text data together. Clinical notes from over 12000 patients with
respiratory diseases were used to train a deep learning model, and clinical
notes from a non-overlapping set of about 1800 patients were used to evaluate
the performance of the trained model. The average precisions (AP) for
pneumonia, RTI, bronchitis and asthma are 0.878, 0.857, 0.714, and 0.825,
respectively, achieving a mean AP (mAP) of 0.819.
| [
{
"version": "v1",
"created": "Tue, 24 Aug 2021 16:09:39 GMT"
}
] | 1,629,849,600,000 | [
[
"Yu",
"Gang",
""
],
[
"Yu",
"Zhongzhi",
""
],
[
"Shi",
"Yemin",
""
],
[
"Wang",
"Yingshuo",
""
],
[
"Liu",
"Xiaoqing",
""
],
[
"Li",
"Zheming",
""
],
[
"Zhao",
"Yonggen",
""
],
[
"Sun",
"Fenglei",
""
],
[
"Yu",
"Yizhou",
""
],
[
"Shu",
"Qiang",
""
]
] |
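The first stage described above, structuralizing test results by extracting numerical values from clinical notes, is essentially pattern-based extraction. A toy sketch using regular expressions is shown below; the test names, units, and note text are invented for illustration and are not from the paper's data:

```python
import re

# Map a few test names to the units we expect to see next to their values.
TESTS = {"temperature": "°C", "respiratory rate": "/min", "SpO2": "%"}

def structuralize(note: str) -> dict:
    """Pull numeric test results out of free-text clinical notes."""
    results = {}
    for test, unit in TESTS.items():
        match = re.search(rf"{re.escape(test)}\s*[:=]?\s*(\d+(?:\.\d+)?)\s*{re.escape(unit)}",
                          note, flags=re.IGNORECASE)
        if match:
            results[test] = float(match.group(1))
    return results

note = "On admission: temperature 38.6 °C, respiratory rate: 32 /min, SpO2 93 %."
print(structuralize(note))
# {'temperature': 38.6, 'respiratory rate': 32.0, 'SpO2': 93.0}
```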
2108.11451 | Giuseppe Marra | Giuseppe Marra and Sebastijan Duman\v{c}i\'c and Robin Manhaeve and
Luc De Raedt | From Statistical Relational to Neurosymbolic Artificial Intelligence: a
Survey | To appear in Artificial Intelligence. Shorter version at IJCAI 2020
survey track, https://www.ijcai.org/proceedings/2020/0688.pdf | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This survey explores the integration of learning and reasoning in two
different fields of artificial intelligence: neurosymbolic and statistical
relational artificial intelligence. Neurosymbolic artificial intelligence
(NeSy) studies the integration of symbolic reasoning and neural networks, while
statistical relational artificial intelligence (StarAI) focuses on integrating
logic with probabilistic graphical models. This survey identifies seven shared
dimensions between these two subfields of AI. These dimensions can be used to
characterize different NeSy and StarAI systems. They are concerned with (1) the
approach to logical inference, whether model or proof-based; (2) the syntax of
the used logical theories; (3) the logical semantics of the systems and their
extensions to facilitate learning; (4) the scope of learning, encompassing
either parameter or structure learning; (5) the presence of symbolic and
subsymbolic representations; (6) the degree to which systems capture the
original logic, probabilistic, and neural paradigms; and (7) the classes of
learning tasks the systems are applied to. By positioning various NeSy and
StarAI systems along these dimensions and pointing out similarities and
differences between them, this survey contributes fundamental concepts for
understanding the integration of learning and reasoning.
| [
{
"version": "v1",
"created": "Wed, 25 Aug 2021 19:47:12 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 09:29:16 GMT"
},
{
"version": "v3",
"created": "Sun, 21 May 2023 08:52:06 GMT"
},
{
"version": "v4",
"created": "Tue, 2 Jan 2024 07:55:58 GMT"
}
] | 1,704,240,000,000 | [
[
"Marra",
"Giuseppe",
""
],
[
"Dumančić",
"Sebastijan",
""
],
[
"Manhaeve",
"Robin",
""
],
[
"De Raedt",
"Luc",
""
]
] |
2108.11635 | Hongru Wang | Hongru Wang, Zezhong Wang, Wai Chung Kwan, Kam-Fai Wong | MCML: A Novel Memory-based Contrastive Meta-Learning Method for Few Shot
Slot Tagging | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Meta-learning is widely used for few-shot slot tagging in the task of few-shot
learning. The performance of existing methods is, however, seriously affected
by \textit{sample forgetting issue}, where the model forgets the historically
learned meta-training tasks while solely relying on support sets when adapting
to new tasks. To overcome this predicament, we propose the
\textbf{M}emory-based \textbf{C}ontrastive \textbf{M}eta-\textbf{L}earning
(aka, MCML) method, including \textit{learn-from-the-memory} and
\textit{adaption-from-the-memory} modules, which bridge the distribution gap
between training episodes and between training and testing respectively.
Specifically, the former uses an explicit memory bank to keep track of the
label representations of previously trained episodes, with a contrastive
constraint between the label representations in the current episode with the
historical ones stored in the memory. In addition, the
\emph{adaption-from-memory} mechanism is introduced to learn more accurate and
robust representations based on the shift between the same labels embedded in
the testing episodes and memory. Experimental results show that the MCML
outperforms several state-of-the-art methods on both SNIPS and NER datasets and
demonstrates strong scalability with consistent improvement when the number of
shots gets greater.
| [
{
"version": "v1",
"created": "Thu, 26 Aug 2021 08:02:21 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Aug 2021 02:03:15 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 08:39:17 GMT"
}
] | 1,694,476,800,000 | [
[
"Wang",
"Hongru",
""
],
[
"Wang",
"Zezhong",
""
],
[
"Kwan",
"Wai Chung",
""
],
[
"Wong",
"Kam-Fai",
""
]
] |
2108.11645 | Wanpeng Zhang | Wanpeng Zhang, Xiaoyan Cao, Yao Yao, Zhicheng An, Xi Xiao, Dijun Luo | Robust Model-based Reinforcement Learning for Autonomous Greenhouse
Control | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to their high efficiency and lower weather dependency, autonomous
greenhouses provide an ideal solution to meet the increasing demand for fresh
food. However, managers are faced with some challenges in finding appropriate
control strategies for crop growth, since the decision space of the greenhouse
control problem is an astronomical number. Therefore, an intelligent
closed-loop control framework is highly desired to generate an automatic
control policy. As a powerful tool for optimal control, reinforcement learning
(RL) algorithms can surpass human beings' decision-making and can also be
seamlessly integrated into the closed-loop control framework. However, in
complex real-world scenarios such as agricultural automation control, where the
interaction with the environment is time-consuming and expensive, the
application of RL algorithms encounters two main challenges, i.e., sample
efficiency and safety. Although model-based RL methods can greatly mitigate the
efficiency problem of greenhouse control, the safety problem has not received
much attention. In this paper, we present a model-based robust RL framework for
autonomous greenhouse control to meet the sample efficiency and safety
challenges. Specifically, our framework introduces an ensemble of environment
models to work as a simulator and assist in policy optimization, thereby
addressing the low sample efficiency problem. As for the safety concern, we
propose a sample dropout module to focus more on worst-case samples, which can
help improve the adaptability of the greenhouse planting policy in extreme
cases. Experimental results demonstrate that our approach can learn a more
effective greenhouse planting policy with better robustness than existing
methods.
| [
{
"version": "v1",
"created": "Thu, 26 Aug 2021 08:27:10 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Oct 2021 15:26:24 GMT"
}
] | 1,634,688,000,000 | [
[
"Zhang",
"Wanpeng",
""
],
[
"Cao",
"Xiaoyan",
""
],
[
"Yao",
"Yao",
""
],
[
"An",
"Zhicheng",
""
],
[
"Xiao",
"Xi",
""
],
[
"Luo",
"Dijun",
""
]
] |
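Two ingredients named above, the ensemble of environment models and the sample dropout that emphasizes worst-case imagined outcomes, can be sketched compactly. Everything below (the toy dynamics, the reward, and the keep-the-worst-quantile rule) is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# An "ensemble" of toy dynamics models: each predicts the next crop state with
# its own bias, standing in for independently trained neural models.
ensemble = [lambda s, a, b=b: s + a + b for b in rng.normal(0, 0.05, size=5)]

def imagined_returns(state, action, horizon=10):
    """Roll each ensemble member forward and record the return it imagines."""
    returns = []
    for model in ensemble:
        s, total = state, 0.0
        for _ in range(horizon):
            s = model(s, action)
            total += -abs(s - 1.0)        # toy reward: keep the state near a target
        returns.append(total)
    return np.array(returns)

def sample_dropout(returns, keep_worst=0.6):
    """Drop the most optimistic rollouts and average over the worst ones,
    so actions are evaluated on pessimistic (safer) imagined outcomes."""
    k = max(1, int(len(returns) * keep_worst))
    return np.sort(returns)[:k].mean()

candidate_actions = np.linspace(-0.2, 0.2, 9)
scores = [sample_dropout(imagined_returns(0.5, a)) for a in candidate_actions]
print(candidate_actions[int(np.argmax(scores))])   # action chosen under pessimism
```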
2108.11711 | Fengyu Cai | Fengyu Cai, Wanhao Zhou, Fei Mi and Boi Faltings | SLIM: Explicit Slot-Intent Mapping with BERT for Joint Multi-Intent
Detection and Slot Filling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Utterance-level intent detection and token-level slot filling are two key
tasks for natural language understanding (NLU) in task-oriented systems. Most
existing approaches assume that only a single intent exists in an utterance.
However, there are often multiple intents within an utterance in real-life
scenarios. In this paper, we propose a multi-intent NLU framework, called SLIM,
to jointly learn multi-intent detection and slot filling based on BERT. To
fully exploit the existing annotation data and capture the interactions between
slots and intents, SLIM introduces an explicit slot-intent classifier to learn
the many-to-one mapping between slots and intents. Empirical results on three
public multi-intent datasets demonstrate (1) the superior performance of SLIM
compared to the current state-of-the-art for NLU with multiple intents and (2)
the benefits obtained from the slot-intent classifier.
| [
{
"version": "v1",
"created": "Thu, 26 Aug 2021 11:33:39 GMT"
}
] | 1,630,022,400,000 | [
[
"Cai",
"Fengyu",
""
],
[
"Zhou",
"Wanhao",
""
],
[
"Mi",
"Fei",
""
],
[
"Faltings",
"Boi",
""
]
] |
2108.11762 | Toon Van De Maele | Toon Van de Maele, Tim Verbelen, Ozan Catal and Bart Dhoedt | Disentangling What and Where for 3D Object-Centric Representations
Through Active Inference | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Although modern object detection and classification models achieve high
accuracy, these are typically constrained in advance on a fixed train set and
are therefore not flexible to deal with novel, unseen object categories.
Moreover, these models most often operate on a single frame, which may yield
incorrect classifications in case of ambiguous viewpoints. In this paper, we
propose an active inference agent that actively gathers evidence for object
classifications, and can learn novel object categories over time. Drawing
inspiration from the human brain, we build object-centric generative models
composed of two information streams, a what- and a where-stream. The
what-stream predicts whether the observed object belongs to a specific
category, while the where-stream is responsible for representing the object in
its internal 3D reference frame. We show that our agent (i) is able to learn
representations for many object categories in an unsupervised way, (ii)
achieves state-of-the-art classification accuracies, actively resolving
ambiguity when required and (iii) identifies novel object categories.
Furthermore, we validate our system in an end-to-end fashion where the agent is
able to search for an object at a given pose from a pixel-based rendering. We
believe that this is a first step towards building modular, intelligent systems
that can be used for a wide range of tasks involving three dimensional objects.
| [
{
"version": "v1",
"created": "Thu, 26 Aug 2021 12:49:07 GMT"
}
] | 1,630,022,400,000 | [
[
"Van de Maele",
"Toon",
""
],
[
"Verbelen",
"Tim",
""
],
[
"Catal",
"Ozan",
""
],
[
"Dhoedt",
"Bart",
""
]
] |
2108.12134 | Sahil Sharma Dr. | Arjit Sharma and Sahil Sharma | WAD: A Deep Reinforcement Learning Agent for Urban Autonomous Driving | 10 pages, 8 figures, and 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Urban autonomous driving is an open and challenging problem to solve as the
decision-making system has to account for several dynamic factors like
multi-agent interactions, diverse scene perceptions, complex road geometries,
and other rarely occurring real-world events. On the other hand, with deep
reinforcement learning (DRL) techniques, agents have learned many complex
policies. They have even achieved super-human-level performances in various
Atari Games and Deepmind's AlphaGo. However, current DRL techniques do not
generalize well on complex urban driving scenarios. This paper introduces the
DRL driven Watch and Drive (WAD) agent for end-to-end urban autonomous driving.
Motivated by recent advancements, the study aims to detect important
objects/states in high dimensional spaces of CARLA and extract the latent state
from them. The latent state information is then passed to WAD agents based
on TD3 and SAC methods to learn the optimal driving policy. Our novel approach
utilizing fewer resources, step-by-step learning of different driving tasks,
hard episode termination policy, and reward mechanism has led our agents to
achieve a 100% success rate on all driving tasks in the original CARLA
benchmark and set a new record of 82% on further complex NoCrash benchmark,
outperforming the state-of-the-art model by more than +30% on NoCrash
benchmark.
| [
{
"version": "v1",
"created": "Fri, 27 Aug 2021 06:48:31 GMT"
}
] | 1,630,281,600,000 | [
[
"Sharma",
"Arjit",
""
],
[
"Sharma",
"Sahil",
""
]
] |
2108.12149 | Sabiha Tahrat | Mourad Ouziri (LIPADE - EA 2517), Sabiha Tahrat (LIPADE - EA 2517),
Salima Benbernou (LIPADE - EA 2517), Mourad Ouzirri | Cleaning Inconsistent Data in Temporal DL-Lite Under Best Repair
Semantics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of handling inconsistent data in
Temporal Description Logic (TDL) knowledge bases. Considering the data part of
the Knowledge Base as the source of inconsistency over time, we propose an ABox
repair approach. This is the first work handling the repair in TDL Knowledge
bases. To do so, our goal is twofold: 1) detect temporal inconsistencies and 2)
propose a temporal data repair. For inconsistency detection, we propose
a reduction approach from TDL to DL which allows us to provide a tight NP-complete
upper bound for TDL concept satisfiability and to use highly optimised DL
reasoners that can provide precise explanations (the set of inconsistent data
assertions). Thereafter, from the obtained explanation, we propose a method for
automatically computing the best repair in the temporal setting based on the
allowed rigid predicates and the time order of assertions.
| [
{
"version": "v1",
"created": "Fri, 27 Aug 2021 07:45:01 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Aug 2021 07:21:14 GMT"
}
] | 1,630,368,000,000 | [
[
"Ouziri",
"Mourad",
"",
"LIPADE - EA 2517"
],
[
"Tahrat",
"Sabiha",
"",
"LIPADE - EA 2517"
],
[
"Benbernou",
"Salima",
"",
"LIPADE - EA 2517"
],
[
"Ouzirri",
"Mourad",
""
]
] |
2108.12330 | Alessandro Gianola | Diego Calvanese and Alessandro Gianola and Andrea Mazzullo and Marco
Montali | SMT-Based Safety Verification of Data-Aware Processes under Ontologies
(Extended Version) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of verification of data-aware processes (DAPs), a formal
approach based on satisfiability modulo theories (SMT) has been considered to
verify parameterised safety properties of so-called artifact-centric systems.
This approach requires a combination of model-theoretic notions and algorithmic
techniques based on backward reachability. We introduce here a variant of one
of the most investigated models in this spectrum, namely simple artifact
systems (SASs), where, instead of managing a database, we operate over a
description logic (DL) ontology expressed in (a slight extension of) RDFS. This
DL, enjoying suitable model-theoretic properties, allows us to define DL-based
SASs to which backward reachability can still be applied, leading to
decidability in PSPACE of the corresponding safety problems.
| [
{
"version": "v1",
"created": "Fri, 27 Aug 2021 15:04:11 GMT"
}
] | 1,630,281,600,000 | [
[
"Calvanese",
"Diego",
""
],
[
"Gianola",
"Alessandro",
""
],
[
"Mazzullo",
"Andrea",
""
],
[
"Montali",
"Marco",
""
]
] |
2108.12333 | Remo Pareschi Prof. | Remo Pareschi, Federico Zappone | Integrating Heuristics and Learning in a Computational Architecture for
Cognitive Trading | 16 pages, with 5 figures; figure 5 groups 5 subfigures a, b, c, d.
Currently under peer review for publication in volume to be published by
Elgar on "AI and Behavioral Finance" | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The successes of Artificial Intelligence in recent years in areas such as
image analysis, natural language understanding and strategy games have sparked
interest from the world of finance. Specifically, there are high expectations,
and ongoing engineering projects, regarding the creation of artificial agents,
known as robotic traders, capable of juggling the financial markets with the
skill of experienced human traders. Obvious economic implications aside, this
is certainly an area of great scientific interest, due to the challenges that
such a real context poses to the use of AI techniques. Precisely for this
reason, we must be aware that artificial agents capable of operating at such
levels are not just round the corner, and that there will be no simple answers,
but rather a concurrence of various technologies and methods to the success of
the effort. In the course of this article, we review the issues inherent in the
design of effective robotic traders as well as the consequently applicable
solutions, having in view the general objective of bringing the current state
of the art of robo-trading up to the next level of intelligence, which we refer
to as Cognitive Trading. Key to our approach is the joining of two
methodological and technological directions which, although both deeply rooted
in the disciplinary field of artificial intelligence, have so far gone their
separate ways: heuristics and learning.
| [
{
"version": "v1",
"created": "Fri, 27 Aug 2021 15:09:33 GMT"
}
] | 1,630,281,600,000 | [
[
"Pareschi",
"Remo",
""
],
[
"Zappone",
"Federico",
""
]
] |
2108.13024 | Kangzheng Liu | Kangzheng Liu and Yuhong Zhang | A Temporal Knowledge Graph Completion Method Based on Balanced Timestamp
Distribution | 14 pages, 1 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Completion through the embedding representation of the knowledge graph (KGE)
has been a research hotspot in recent years. Realistic knowledge graphs are
mostly related to time, while most of the existing KGE algorithms ignore the
time information. A few existing methods directly or indirectly encode the time
information, ignoring the balance of timestamp distribution, which greatly
limits the performance of temporal knowledge graph completion (KGC). In this
paper, a temporal KGC method is proposed based on the direct encoding time
information framework, and a given time slice is treated as the finest
granularity for balanced timestamp distribution. A large number of experiments
on temporal knowledge graph datasets extracted from the real world demonstrate
the effectiveness of our method.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2021 07:27:19 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Nov 2021 13:20:38 GMT"
}
] | 1,636,416,000,000 | [
[
"Liu",
"Kangzheng",
""
],
[
"Zhang",
"Yuhong",
""
]
] |
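Treating a given time slice as the finest granularity so that timestamps are evenly distributed, as described above, corresponds to equal-frequency binning. A small illustrative sketch with invented timestamps:

```python
import numpy as np

def balanced_time_slices(timestamps, n_slices):
    """Assign each timestamp to one of n_slices equal-frequency bins,
    so every slice carries roughly the same number of facts."""
    edges = np.quantile(timestamps, np.linspace(0, 1, n_slices + 1))
    # np.digitize with the interior edges gives bin ids 0..n_slices-1.
    return np.digitize(timestamps, edges[1:-1], right=True)

# Heavily skewed timestamps (years): naive equal-width slices would put almost
# everything in one slice, while equal-frequency slices stay balanced.
timestamps = np.array([1990, 2016, 2017, 2017, 2018, 2018, 2019, 2019, 2020, 2021])
slices = balanced_time_slices(timestamps, 5)
print(slices)                     # slice id per fact
print(np.bincount(slices))        # roughly 2 facts per slice
```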
2108.13025 | Lucas De Lara | Lucas de Lara (IMT), Alberto Gonz\'alez-Sanz (IMT), Nicholas Asher
(IRIT-MELODI, CNRS), Laurent Risser (IMT, CNRS), Jean-Michel Loubes (IMT) | Transport-based Counterfactual Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counterfactual frameworks have grown popular in machine learning for both
explaining algorithmic decisions and defining individual notions of
fairness, more intuitive than typical group fairness conditions. However,
state-of-the-art models to compute counterfactuals are either unrealistic or
unfeasible. In particular, while Pearl's causal inference provides appealing
rules to calculate counterfactuals, it relies on a model that is unknown and
hard to discover in practice. We address the problem of designing realistic and
feasible counterfactuals in the absence of a causal model. We define
transport-based counterfactual models as collections of joint probability
distributions between observable distributions, and show their connection to
causal counterfactuals. More specifically, we argue that optimal-transport
theory defines relevant transport-based counterfactual models, as they are
numerically feasible, statistically-faithful, and can coincide under some
assumptions with causal counterfactual models. Finally, these models make
counterfactual approaches to fairness feasible, and we illustrate their
practicality and efficiency on fair learning. With this paper, we aim at laying
out the theoretical foundations for a new, implementable approach to
counterfactual thinking.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2021 07:28:19 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 16:08:36 GMT"
}
] | 1,673,222,400,000 | [
[
"de Lara",
"Lucas",
"",
"IMT"
],
[
"González-Sanz",
"Alberto",
"",
"IMT"
],
[
"Asher",
"Nicholas",
"",
"IRIT-MELODI, CNRS"
],
[
"Risser",
"Laurent",
"",
"IMT, CNRS"
],
[
"Loubes",
"Jean-Michel",
"",
"IMT"
]
] |
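In one dimension, a transport-based counterfactual model as discussed above reduces to quantile matching between the two observable distributions, which is also the one-dimensional optimal-transport map. A minimal sketch on synthetic data (not the paper's general construction):

```python
import numpy as np

def transport_counterfactual(x, source, target):
    """Map a value x from the source group to its counterfactual in the target
    group by matching quantiles (the 1-D optimal-transport map)."""
    q = np.searchsorted(np.sort(source), x) / len(source)
    return np.quantile(target, np.clip(q, 0, 1))

rng = np.random.default_rng(0)
# Synthetic scores for two groups with a shifted distribution.
group_a = rng.normal(50, 10, size=1000)
group_b = rng.normal(60, 10, size=1000)

x = 55.0                                                # an individual from group A
print(transport_counterfactual(x, group_a, group_b))    # ~65: same quantile in B
```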
2108.13063 | Paolo Pareti Dr. | Paolo Pareti, George Konstantinidis, Fabio Mogavero | Satisfiability and Containment of Recursive SHACL | null | null | 10.1016/j.websem.2022.100721 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Shapes Constraint Language (SHACL) is the recent W3C recommendation
language for validating RDF data, by verifying certain shapes on graphs.
Previous work has largely focused on the validation problem, and the standard
decision problems of satisfiability and containment, which are crucial for design
and optimisation purposes, have only been investigated for simplified versions of
SHACL. Moreover, the SHACL specification does not define the semantics of
recursively-defined constraints, which led to several alternative recursive
semantics being proposed in the literature. The interaction between these
different semantics and important decision problems has not been investigated
yet. In this article we provide a comprehensive study of the different features
of SHACL, by providing a translation to a new first-order language, called SCL,
that precisely captures the semantics of SHACL. We also present MSCL, a
second-order extension of SCL, which allows us to define, in a single formal
logic framework, the main recursive semantics of SHACL. Within this language we
also provide an effective treatment of filter constraints which are often
neglected in the related literature. Using this logic we provide a detailed map
of (un)decidability and complexity results for the satisfiability and
containment decision problems for different SHACL fragments. Notably, we prove
that both problems are undecidable for the full language, but we present
decidable combinations of interesting features, even in the face of recursion.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2021 08:51:03 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 20:39:13 GMT"
}
] | 1,655,337,600,000 | [
[
"Pareti",
"Paolo",
""
],
[
"Konstantinidis",
"George",
""
],
[
"Mogavero",
"Fabio",
""
]
] |
2108.13343 | Beren Millidge Mr | Beren Millidge, Anil Seth, Christopher L Buckley | A Mathematical Walkthrough and Discussion of the Free Energy Principle | 30/08/21 initial upload; 02/10/21 minor maths fixes | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Free-Energy-Principle (FEP) is an influential and controversial theory
which postulates a deep and powerful connection between the stochastic
thermodynamics of self-organization and learning through variational inference.
Specifically, it claims that any self-organizing system which can be
statistically separated from its environment, and which maintains itself at a
non-equilibrium steady state, can be construed as minimizing an
information-theoretic functional -- the variational free energy -- and thus
performing variational Bayesian inference to infer the hidden state of its
environment. This principle has also been applied extensively in neuroscience,
and is beginning to make inroads in machine learning by spurring the
construction of novel and powerful algorithms by which action, perception, and
learning can all be unified under a single objective. While its expansive and
often grandiose claims have spurred significant debates in both philosophy and
theoretical neuroscience, the mathematical depth and lack of accessible
introductions and tutorials for the core claims of the theory have often
precluded a deep understanding within the literature. Here, we aim to provide a
mathematically detailed, yet intuitive walk-through of the formulation and
central claims of the FEP while also providing a discussion of the assumptions
necessary and potential limitations of the theory. Additionally, since the FEP
is still a living theory, subject to internal controversy, change, and
revision, we also present a detailed appendix highlighting and condensing
current perspectives as well as controversies about the nature, applicability,
and the mathematical assumptions and formalisms underlying the FEP.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2021 16:11:49 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Oct 2021 23:02:07 GMT"
}
] | 1,633,392,000,000 | [
[
"Millidge",
"Beren",
""
],
[
"Seth",
"Anil",
""
],
[
"Buckley",
"Christopher L",
""
]
] |
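For entry 2108.13343 above, the central quantity discussed is the variational free energy. Its standard decomposition (stated here in generic notation, independent of this particular paper's presentation) can be written as:

```latex
% Variational free energy F for observations x, hidden states z,
% generative model p(x, z) and approximate posterior q(z):
F[q] \;=\; \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(x, z)\big]
      \;=\; \underbrace{D_{\mathrm{KL}}\!\big(q(z)\,\|\,p(z \mid x)\big)}_{\ge 0}
      \;-\; \ln p(x)
      \;\ge\; -\ln p(x).
% Minimising F over q performs approximate Bayesian inference (drives q
% toward the true posterior) while upper-bounding the surprisal -ln p(x).
```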
2108.13744 | Gonzalo Imaz | Gonzalo E. Imaz | The Horn Non-Clausal Class and its Polynomiality | 31 pages + references, 6 figures, submitted version | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The expressiveness of propositional non-clausal (NC) formulas is
exponentially richer than that of clausal formulas. Yet, clausal efficiency
outperforms non-clausal efficiency. Indeed, a major weakness of the latter is that,
while Horn clausal formulas, along with Horn algorithms, are crucial for the
high efficiency of clausal reasoning, no Horn-like formulas in non-clausal form
had been proposed. To overcome such weakness, we define the hybrid class
$\mathbb{H_{NC}}$ of Horn Non-Clausal (Horn-NC) formulas, by adequately lifting
the Horn pattern to NC form, and argue that $\mathbb{H_{NC}}$, along with
future Horn-NC algorithms, shall increase non-clausal efficiency just as the
Horn class has increased clausal efficiency. Secondly, we: (i) give the
compact, inductive definition of $\mathbb{H_{NC}}$; (ii) prove that
syntactically $\mathbb{H_{NC}}$ subsumes the Horn class but semantically both
classes are equivalent, and (iii) characterize the non-clausal formulas
belonging to $\mathbb{H_{NC}}$. Thirdly, we define the Non-Clausal
Unit-Resolution calculus, $UR_{NC}$, and prove that it checks the
satisfiability of $\mathbb{H_{NC}}$ in polynomial time. This fact, to our
knowledge, makes $\mathbb{H_{NC}}$ the first characterized polynomial class in
NC reasoning. Finally, we prove that $\mathbb{H_{NC}}$ is linearly
recognizable, and also that it is both strictly succincter and exponentially
richer than the Horn class. We discuss that NC automated reasoning, e.g.
satisfiability solving, theorem proving, logic programming, etc., can directly
benefit from $\mathbb{H_{NC}}$ and $UR_{NC}$ and that, as a by-product of its
proved properties, $\mathbb{H_{NC}}$ arises as a new alternative to analyze
Horn functions and implication systems.
| [
{
"version": "v1",
"created": "Tue, 31 Aug 2021 10:55:19 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Sep 2021 14:03:00 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Nov 2021 12:30:27 GMT"
}
] | 1,637,193,600,000 | [
[
"Imaz",
"Gonzalo E.",
""
]
] |
2108.13772 | Bryar Hassan Dr. | Bryar A. Hassan and Tarik A. Rashid | Artificial Intelligence Algorithms for Natural Language Processing and
the Semantic Web Ontology Learning | arXiv admin note: text overlap with arXiv:1911.13011,
arXiv:2102.08361 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Evolutionary clustering algorithms have been considered the most popular and
widely used evolutionary algorithms for minimising optimisation and practical
problems in nearly all fields. In this thesis, a new evolutionary clustering
algorithm star (ECA*) is proposed. Additionally, a number of experiments were
conducted to evaluate ECA* against five state-of-the-art approaches. For this,
32 heterogeneous and multi-featured datasets were used to examine their
performance using internal and external clustering measures, and to measure the
sensitivity of their performance towards dataset features in the form of
operational framework. The results indicate that ECA* overcomes its competitive
techniques in terms of the ability to find the right clusters. Based on its
superior performance, exploiting and adapting ECA* for ontology learning is a
promising possibility. In the process of deriving concept hierarchies from
corpora, generating formal context may lead to a time-consuming process.
Therefore, formal context size reduction results in removing uninterested and
erroneous pairs, taking less time to extract the concept lattice and concept
hierarchies accordingly. On this premise, this work aims to propose a framework
to reduce the ambiguity of the formal context of the existing framework using
an adaptive version of ECA*. In turn, an experiment was conducted by applying
385 sample corpora from Wikipedia on the two frameworks to examine the
reduction of formal context size, which leads to yield concept lattice and
concept hierarchy. The resulting lattice of formal context was evaluated to the
original one using concept lattice-invariants. Accordingly, the homomorphism
between the two lattices preserves the quality of the resulting concept hierarchies
by 89% in contrast to the basic ones, and the reduced concept lattice inherits
the structural relation of the original one.
| [
{
"version": "v1",
"created": "Tue, 31 Aug 2021 11:57:41 GMT"
}
] | 1,630,454,400,000 | [
[
"Hassan",
"Bryar A.",
""
],
[
"Rashid",
"Tarik A.",
""
]
] |
2109.00318 | Jeroen Paul Spaans | Jeroen Paul Spaans | Intrinsic Argument Strength in Structured Argumentation: a Principled
Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Abstract argumentation provides us with methods such as gradual and Dung
semantics with which to evaluate arguments after potential attacks by other
arguments. Some of these methods can take intrinsic strengths of arguments as
input, with which to modulate the effects of attacks between arguments. Coming
from abstract argumentation, these methods look only at the relations between
arguments and not at the structure of the arguments themselves. In structured
argumentation the way an argument is constructed, by chaining inference rules
starting from premises, is taken into consideration. In this paper we study
methods for assigning an argument its intrinsic strength, based on the
strengths of the premises and inference rules used to form said argument. We
first define a set of principles, which are properties that strength assigning
methods might satisfy. We then propose two such methods and analyse which
principles they satisfy. Finally, we present a generalised system for creating
novel strength assigning methods and speak to the properties of this system
regarding the proposed principles.
| [
{
"version": "v1",
"created": "Wed, 1 Sep 2021 11:54:15 GMT"
}
] | 1,630,540,800,000 | [
[
"Spaans",
"Jeroen Paul",
""
]
] |
2109.00414 | Ryo Nakahashi | Ryo Nakahashi and Seiji Yamada | Balancing Performance and Human Autonomy with Implicit Guidance Agent | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human-agent team, in which humans and autonomous
agents collaborate to achieve one task, is a typical problem in human-AI collaboration.
For effective collaboration, humans want to have an effective plan, but in
realistic situations, they might have difficulty calculating the best plan due
to cognitive limitations. In this case, guidance from an agent that has many
computational resources may be useful. However, if an agent guides the human
behavior explicitly, the human may feel that they have lost autonomy and are
being controlled by the agent. We therefore investigated implicit guidance
offered by means of an agent's behavior. With this type of guidance, the agent
acts in a way that makes it easy for the human to find an effective plan for a
collaborative task, and the human can then improve the plan. Since the human
improves their plan voluntarily, he or she maintains autonomy. We modeled a
collaborative agent with implicit guidance by integrating the Bayesian Theory
of Mind into existing collaborative-planning algorithms and demonstrated
through a behavioral experiment that implicit guidance is effective for
enabling humans to maintain a balance between improving their plans and
retaining autonomy.
| [
{
"version": "v1",
"created": "Wed, 1 Sep 2021 14:47:29 GMT"
}
] | 1,630,540,800,000 | [
[
"Nakahashi",
"Ryo",
""
],
[
"Yamada",
"Seiji",
""
]
] |
2109.00449 | Ignacio Vellido | Ignacio Vellido, Carlos N\'u\~nez-Molina, Vladislav Nikolov, Juan
Fdez-Olivares | Planning from video game descriptions | To be submitted to the Knowledge Engineering Review (KER) journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This project proposes a methodology for the automatic generation of action
models from video game dynamics descriptions, as well as its integration with a
planning agent for the execution and monitoring of the plans. Planners use
these action models to get the deliberative behaviour for an agent in many
different video games and, combined with a reactive module, solve deterministic
and non-deterministic levels. Experimental results validate the methodology and
prove that the effort put by a knowledge engineer can be greatly reduced in the
definition of such complex domains. Furthermore, benchmarks of the domains have
been produced that can be of interest to the international planning community
to evaluate planners in international planning competitions.
| [
{
"version": "v1",
"created": "Wed, 1 Sep 2021 15:49:09 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Sep 2021 15:44:24 GMT"
}
] | 1,631,059,200,000 | [
[
"Vellido",
"Ignacio",
""
],
[
"Núñez-Molina",
"Carlos",
""
],
[
"Nikolov",
"Vladislav",
""
],
[
"Fdez-Olivares",
"Juan",
""
]
] |
2109.00838 | Rui Zhao | Rui Zhao | An Automated Framework for Supporting Data-Governance Rule Compliance in
Decentralized MIMO Contexts | Accepted to IJCAI 2021 DC | null | 10.24963/ijcai.2021/696 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Dr.Aid, a logic-based AI framework for automated compliance
checking of data governance rules over data-flow graphs. The rules are modelled
using a formal language based on situation calculus and are suitable for
decentralized contexts with multi-input-multi-output (MIMO) processes. Dr.Aid
models data rules and flow rules and checks compliance by reasoning about the
propagation, combination, modification and application of data rules over the
data flow graphs. Our approach is driven and evaluated by real-world datasets
using provenance graphs from data-intensive research.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2021 10:53:03 GMT"
}
] | 1,630,627,200,000 | [
[
"Zhao",
"Rui",
""
]
] |
2109.01013 | Romain Wallon | Daniel Le Berre and Romain Wallon | On Dedicated CDCL Strategies for PB Solvers | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current implementations of pseudo-Boolean (PB) solvers working on native PB
constraints are based on the CDCL architecture which empowers highly efficient
modern SAT solvers. In particular, such PB solvers not only implement a
(cutting-planes-based) conflict analysis procedure, but also complementary
strategies for components that are crucial for the efficiency of CDCL, namely
branching heuristics, learned constraint deletion and restarts. However, these
strategies are mostly reused by PB solvers without considering the particular
form of the PB constraints they deal with. In this paper, we present and
evaluate different ways of adapting CDCL strategies to take the specificities
of PB constraints into account while preserving the behavior they have in the
clausal setting. We implemented these strategies in two different solvers,
namely Sat4j (for which we consider three configurations) and RoundingSat. Our
experiments show that these dedicated strategies make it possible to improve, sometimes
significantly, the performance of these solvers, both on decision and
optimization problems.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2021 15:22:27 GMT"
}
] | 1,630,627,200,000 | [
[
"Berre",
"Daniel Le",
""
],
[
"Wallon",
"Romain",
""
]
] |
2109.01220 | James Plank | James S. Plank, Catherine D. Schuman and Robert M. Patton | An Oracle and Observations for the OpenAI Gym / ALE Freeway Environment | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The OpenAI Gym project contains hundreds of control problems whose goal is to
provide a testbed for reinforcement learning algorithms. One such problem is
Freeway-ram-v0, where the observations presented to the agent are 128 bytes of
RAM. While the goals of the project are for non-expert AI agents to solve the
control problems with general training, in this work, we seek to learn more
about the problem, so that we can better evaluate solutions. In particular, we
develop an oracle to play the game, so that we may have baselines for success.
We present details of the oracle, plus optimal game-playing situations that can
be used for training and testing AI agents.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2021 21:38:06 GMT"
}
] | 1,630,886,400,000 | [
[
"Plank",
"James S.",
""
],
[
"Schuman",
"Catherine D.",
""
],
[
"Patton",
"Robert M.",
""
]
] |
2109.01281 | Michael Timothy Bennett | Michael Timothy Bennett | Symbol Emergence and The Solutions to Any Task | Accepted to the 14th conference on Artificial General Intelligence | Proceedings of the 14th International Conference on Artificial
General Intelligence. 2021. Lecture Notes in Computer Science, vol 13154.
Springer. pp. 30-40 | 10.1007/978-3-030-93758-4_4 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The following defines intent, an arbitrary task and its solutions, and then
argues that an agent which always constructs what is called an Intensional
Solution would qualify as artificial general intelligence. We then explain how
natural language may emerge and be acquired by such an agent, conferring the
ability to model the intent of other individuals labouring under similar
compulsions, because an abstract symbol system and the solution to a task are
one and the same.
| [
{
"version": "v1",
"created": "Fri, 3 Sep 2021 02:44:35 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Oct 2021 08:42:05 GMT"
}
] | 1,714,435,200,000 | [
[
"Bennett",
"Michael Timothy",
""
]
] |
2109.01634 | Cristina Cornelio PhD | Cristina Cornelio, Sanjeeb Dash, Vernon Austel, Tyler Josephson, Joao
Goncalves, Kenneth Clarkson, Nimrod Megiddo, Bachir El Khadir, Lior Horesh | AI Descartes: Combining Data and Theory for Derivable Scientific
Discovery | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientists have long aimed to discover meaningful formulae which accurately
describe experimental data. A common approach is to manually create
mathematical models of natural phenomena using domain knowledge, and then fit
these models to data. In contrast, machine-learning algorithms automate the
construction of accurate data-driven models while consuming large amounts of
data. The problem of incorporating prior knowledge in the form of constraints
on the functional form of a learned model (e.g., nonnegativity) has been
explored in the literature. However, finding models that are consistent with
prior knowledge expressed in the form of general logical axioms (e.g.,
conservation of energy) is an open problem. We develop a method to enable
principled derivations of models of natural phenomena from axiomatic knowledge
and experimental data by combining logical reasoning with symbolic regression.
We demonstrate these concepts for Kepler's third law of planetary motion,
Einstein's relativistic time-dilation law, and Langmuir's theory of adsorption,
automatically connecting experimental data with background theory in each case.
We show that laws can be discovered from few data points when using formal
logical reasoning to distinguish the correct formula from a set of plausible
formulas that have similar error on the data. The combination of reasoning with
machine learning provides generalizable insights into key aspects of natural
phenomena. We envision that this combination will enable derivable discovery of
fundamental laws of science and believe that our work is an important step
towards automating the scientific method.
| [
{
"version": "v1",
"created": "Fri, 3 Sep 2021 17:19:17 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Oct 2021 15:08:54 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Aug 2022 12:08:29 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Jan 2023 12:19:30 GMT"
}
] | 1,673,308,800,000 | [
[
"Cornelio",
"Cristina",
""
],
[
"Dash",
"Sanjeeb",
""
],
[
"Austel",
"Vernon",
""
],
[
"Josephson",
"Tyler",
""
],
[
"Goncalves",
"Joao",
""
],
[
"Clarkson",
"Kenneth",
""
],
[
"Megiddo",
"Nimrod",
""
],
[
"Khadir",
"Bachir El",
""
],
[
"Horesh",
"Lior",
""
]
] |
2109.01703 | Liming Xu | Liming Xu, Stephen Mak and Alexandra Brintrup | Will bots take over the supply chain? Revisiting Agent-based supply
chain automation | 38 pages, 5 figures | International Journal of Production Economics, Volume 241, 2021 | 10.1016/j.ijpe.2021.108279 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Agent-based systems have the capability to fuse information from many
distributed sources and create better plans faster. This feature makes
agent-based systems naturally suitable to address the challenges in Supply
Chain Management (SCM). Although agent-based supply chain systems have been
proposed since the early 2000s, industrial uptake of them has been lagging. The
reasons quoted include the immaturity of the technology, a lack of
interoperability with supply chain information systems, and a lack of trust in
Artificial Intelligence (AI). In this paper, we revisit the agent-based supply
chain and review the state of the art. We find that agent-based technology has
matured, and other supporting technologies that are penetrating supply chains
are filling in gaps, leaving the concept applicable to a wider range of
functions. For example, the ubiquity of IoT technology helps agents "sense" the
state of affairs in a supply chain and opens up new possibilities for
automation. Digital ledgers help securely transfer data between third parties,
making agent-based information sharing possible, without the need to integrate
Enterprise Resource Planning (ERP) systems. Learning functionality in agents
enables agents to move beyond automation and towards autonomy. We note this
convergence effect through conceptualising an agent-based supply chain
framework, reviewing its components, and highlighting research challenges that
need to be addressed in moving forward.
| [
{
"version": "v1",
"created": "Fri, 3 Sep 2021 18:44:26 GMT"
}
] | 1,630,972,800,000 | [
[
"Xu",
"Liming",
""
],
[
"Mak",
"Stephen",
""
],
[
"Brintrup",
"Alexandra",
""
]
] |
2109.01765 | Bencheng Wei | Bencheng Wei | Effective user intent mining with unsupervised word representation
models and topic modelling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding the intent behind chat between customers and customer service
agents has become a crucial problem nowadays due to an exponential increase in
the use of the Internet by people from different cultures and educational
backgrounds. More importantly, the explosion of e-commerce has led to a
significant increase in text conversation between customers and agents. In this
paper, we propose an approach to mining the conversation intents behind
the textual data. Using the customer service data set, we train unsupervised
text representation models, and then develop an intent mapping model which
would rank the predefined intents based on cosine similarity between sentences
and intents. Topic-modeling techniques are used to define intents and domain
experts are also involved to interpret topic modelling results. With this
approach, we can get a good understanding of the user intentions behind the
unlabelled customer service textual data.
| [
{
"version": "v1",
"created": "Sat, 4 Sep 2021 01:52:12 GMT"
}
] | 1,630,972,800,000 | [
[
"Wei",
"Bencheng",
""
]
] |
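Entry 2109.01765 above ranks predefined intents by cosine similarity between sentence and intent representations. A minimal sketch follows; TF-IDF vectors stand in for the unsupervised word representation models the paper trains, and the intents and messages are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical predefined intents (the paper derives them via topic modelling).
intents = ["refund request", "delivery status", "account login problem"]
messages = ["Where is my parcel, it was shipped last week?",
            "I cannot sign in to my account anymore."]

# Any sentence representation works here; TF-IDF keeps the sketch dependency-light.
vectorizer = TfidfVectorizer().fit(intents + messages)
intent_vecs = vectorizer.transform(intents)
message_vecs = vectorizer.transform(messages)

# Rank intents for each message by cosine similarity.
scores = cosine_similarity(message_vecs, intent_vecs)
for msg, row in zip(messages, scores):
    ranking = sorted(zip(intents, row), key=lambda p: -p[1])
    print(msg, "->", ranking)
```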
2109.01797 | Sijie Mai | Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu | Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal
Sentiment Analysis | Under Review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The wide application of smart devices enables the availability of multimodal
data, which can be utilized in many tasks. In the field of multimodal sentiment
analysis (MSA), most previous works focus on exploring intra- and inter-modal
interactions. However, training a network with cross-modal information
(language, visual, audio) is still challenging due to the modality gap, and
existing methods still cannot ensure that intra-/inter-modal dynamics are
sufficiently learned. Besides, while learning dynamics within each sample draws great
attention, the learning of inter-class relationships is neglected. Moreover,
the size of datasets limits the generalization ability of existing methods. To
address the afore-mentioned issues, we propose a novel framework HyCon for
hybrid contrastive learning of tri-modal representation. Specifically, we
simultaneously perform intra-/inter-modal contrastive learning and
semi-contrastive learning (that is why we call it hybrid contrastive learning),
with which the model can fully explore cross-modal interactions, preserve
inter-class relationships and reduce the modality gap. Besides, a refinement
term is devised to prevent the model from falling into a sub-optimal solution.
Moreover, HyCon can naturally generate a large amount of training pairs for
better generalization and reduce the negative effect of limited datasets.
Extensive experiments on public datasets demonstrate that our proposed method
outperforms existing works.
| [
{
"version": "v1",
"created": "Sat, 4 Sep 2021 06:04:21 GMT"
}
] | 1,630,972,800,000 | [
[
"Mai",
"Sijie",
""
],
[
"Zeng",
"Ying",
""
],
[
"Zheng",
"Shuangjia",
""
],
[
"Hu",
"Haifeng",
""
]
] |
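Entry 2109.01797 above builds on intra-/inter-modal contrastive learning. The sketch below shows only a generic inter-modal InfoNCE term between two modality embeddings, assuming paired samples within a batch; HyCon's actual hybrid objective combines several such terms plus a refinement term and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def inter_modal_contrastive_loss(z_text, z_audio, temperature=0.1):
    """Generic InfoNCE loss: embeddings of the same sample across two
    modalities are pulled together, other samples in the batch are pushed
    away. This shows only one basic inter-modal ingredient of a hybrid
    contrastive objective."""
    z_text = F.normalize(z_text, dim=-1)
    z_audio = F.normalize(z_audio, dim=-1)
    logits = z_text @ z_audio.t() / temperature          # (B, B) similarities
    targets = torch.arange(z_text.size(0))               # positives on diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random 8-sample batches of 64-d embeddings.
loss = inter_modal_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
print(loss.item())
```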
2109.01954 | Nikzad Khani | Nikzad Khani and Matthew Kluska | An Exploration of Deep Learning Methods in Hungry Geese | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Hungry Geese is an n-player variation of the popular game Snake. This paper
looks at state-of-the-art Deep Reinforcement Learning value methods. The goal
of the paper is to aggregate research of value based methods and apply it as an
exercise to other environments. A vanilla Deep Q Network, a Double Q-network
and a Dueling Q-Network were all examined and tested with the Hungry Geese
environment. The best performing model was the vanilla Deep Q Network due to
its simple state representation and smaller network structure. Converging
towards an optimal policy was found to be difficult due to random geese
initialization and food generation. Therefore, we show that Deep Q Networks may
not be the appropriate model for such a stochastic environment and lastly we
present improvements that can be made along with more suitable models for the
environment.
| [
{
"version": "v1",
"created": "Sun, 5 Sep 2021 00:43:37 GMT"
}
] | 1,630,972,800,000 | [
[
"Khani",
"Nikzad",
""
],
[
"Kluska",
"Matthew",
""
]
] |
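Entry 2109.01954 above evaluates vanilla, Double, and Dueling Deep Q Networks. A minimal vanilla DQN update is sketched below; the state dimensionality, network width, and hyperparameters are placeholder assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

# Minimal vanilla DQN components; sizes are placeholders, not the paper's setup.
STATE_DIM, N_ACTIONS, GAMMA = 77, 4, 0.99   # e.g. a flattened 7x11 board, 4 moves

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                           nn.Linear(128, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(s, a, r, s_next, done):
    """One Q-learning step on a batch of transitions."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of 32 random transitions.
s = torch.randn(32, STATE_DIM); s2 = torch.randn(32, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (32,)); r = torch.rand(32); d = torch.zeros(32)
print(td_update(s, a, r, s2, d))
```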
2109.02053 | Zelei Liu | Zelei Liu, Yuanyuan Chen, Han Yu, Yang Liu and Lizhen Cui | GTG-Shapley: Efficient and Accurate Participant Contribution Evaluation
in Federated Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) bridges the gap between collaborative machine
learning and preserving data privacy. To sustain the long-term operation of an
FL ecosystem, it is important to attract high quality data owners with
appropriate incentive schemes. As an important building block of such incentive
schemes, it is essential to fairly evaluate participants' contribution to the
performance of the final FL model without exposing their private data. Shapley
Value (SV)-based techniques have been widely adopted to provide fair evaluation
of FL participant contributions. However, existing approaches incur significant
computation costs, making them difficult to apply in practice. In this paper,
we propose the Guided Truncation Gradient Shapley (GTG-Shapley) approach to
address this challenge. It reconstructs FL models from gradient updates for SV
calculation instead of repeatedly training with different combinations of FL
participants. In addition, we design a guided Monte Carlo sampling approach
combined with within-round and between-round truncation to further reduce the
number of model reconstructions and evaluations required. Extensive
experiments under diverse realistic data distribution settings
demonstrate that GTG-Shapley can closely approximate actual Shapley values,
while significantly increasing computational efficiency compared to the state
of the art, especially under non-i.i.d. settings.
| [
{
"version": "v1",
"created": "Sun, 5 Sep 2021 12:17:00 GMT"
}
] | 1,630,972,800,000 | [
[
"Liu",
"Zelei",
""
],
[
"Chen",
"Yuanyuan",
""
],
[
"Yu",
"Han",
""
],
[
"Liu",
"Yang",
""
],
[
"Cui",
"Lizhen",
""
]
] |
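Entry 2109.02053 above accelerates Shapley value estimation with guided Monte Carlo sampling and truncation. The sketch below shows plain truncated Monte Carlo Shapley estimation over a black-box utility; GTG-Shapley additionally reconstructs FL models from gradient updates and guides the sampling, which is not reproduced here.

```python
import random

def truncated_mc_shapley(players, utility, n_perms=200, tol=1e-3):
    """Monte Carlo Shapley estimate with within-permutation truncation:
    scan random permutations, accumulate each player's marginal gain, and
    stop a permutation early once the coalition utility is within `tol`
    of the grand-coalition utility (further marginals are negligible)."""
    full = utility(players)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perms):
        perm = random.sample(players, len(players))
        prev_u, coalition = utility([]), []
        for p in perm:
            if abs(full - prev_u) < tol:      # truncation: remaining gains ~ 0
                break
            coalition.append(p)
            u = utility(coalition)
            phi[p] += (u - prev_u) / n_perms
            prev_u = u
    return phi

# Toy utility: each participant contributes its own "data quality" score.
quality = {"A": 0.5, "B": 0.3, "C": 0.2}
print(truncated_mc_shapley(list(quality), lambda c: sum(quality[p] for p in c)))
```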
2109.02161 | Kolby Nottingham | Kolby Nottingham, Litian Liang, Daeyun Shin, Charless C. Fowlkes, Roy
Fox, Sameer Singh | Modular Framework for Visuomotor Language Grounding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural language instruction following tasks serve as a valuable test-bed for
grounded language and robotics research. However, data collection for these
tasks is expensive and end-to-end approaches suffer from data inefficiency. We
propose the structuring of language, acting, and visual tasks into separate
modules that can be trained independently. Using a Language, Action, and Vision
(LAV) framework removes the dependence of action and vision modules on
instruction following datasets, making them more efficient to train. We also
present a preliminary evaluation of LAV on the ALFRED task for visual and
interactive instruction following.
| [
{
"version": "v1",
"created": "Sun, 5 Sep 2021 20:11:53 GMT"
}
] | 1,630,972,800,000 | [
[
"Nottingham",
"Kolby",
""
],
[
"Liang",
"Litian",
""
],
[
"Shin",
"Daeyun",
""
],
[
"Fowlkes",
"Charless C.",
""
],
[
"Fox",
"Roy",
""
],
[
"Singh",
"Sameer",
""
]
] |
2109.02354 | Yuxiang Sun | Yuxiang Sun, Bo Yuan, Yufan Xue, Jiawei Zhou, Xiaoyu Zhang and
Xianzhong Zhou | Method for making multi-attribute decisions in wargames by combining
intuitionistic fuzzy numbers with reinforcement learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Researchers are increasingly focusing on intelligent games as a hot research
area. This article proposes an algorithm that combines multi-attribute
management and reinforcement learning methods and examines their combined effect
on wargaming; it addresses the agent's low rate of winning against specific rules
and its inability to converge quickly during intelligent wargame training. The
paper also studies a multi-attribute decision-making and reinforcement learning
algorithm in a wargame simulation environment and obtains data on red versus blue
conflict. The weight of each attribute is computed with intuitionistic fuzzy
number weight calculations, and the threat posed by each of the opponent's chess
pieces is then determined. Using the red side's reinforcement learning reward
function, the AC framework is trained on this reward function, yielding an
algorithm that combines multi-attribute decision-making with reinforcement
learning. A simulation experiment confirms that the proposed combination is
significantly more intelligent than a pure reinforcement learning algorithm. By
addressing the shortcomings of the agent's neural network, together with sparse
rewards in large-map combat games, this robust algorithm effectively reduces the
difficulty of convergence. It is also the first time in this field that an
algorithm design for intelligent wargaming combines multi-attribute
decision-making with reinforcement learning, attempting interdisciplinary
cross-innovation between designing intelligent wargames and improving
reinforcement learning algorithms.
| [
{
"version": "v1",
"created": "Mon, 6 Sep 2021 10:45:52 GMT"
}
] | 1,630,972,800,000 | [
[
"Sun",
"Yuxiang",
""
],
[
"Yuan",
"Bo",
""
],
[
"Xue",
"Yufan",
""
],
[
"Zhou",
"Jiawei",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Zhou",
"Xianzhong",
""
]
] |
2109.02772 | Tianxing He | Tianxing He, Kyunghyun Cho, James Glass | An Empirical Study on Few-shot Knowledge Probing for Pretrained Language
Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prompt-based knowledge probing for 1-hop relations has been used to measure
how much world knowledge is stored in pretrained language models. Existing work
uses considerable amounts of data to tune the prompts for better performance.
In this work, we compare a variety of approaches under a few-shot knowledge
probing setting, where only a small number (e.g., 10 or 20) of example triples
are available. In addition, we create a new dataset named TREx-2p, which
contains 2-hop relations. We report that few-shot examples can strongly boost
the probing performance for both 1-hop and 2-hop relations. In particular, we
find that a simple-yet-effective approach of finetuning the bias vectors in the
model outperforms existing prompt-engineering methods. Our dataset and code are
available at \url{https://github.com/cloudygoose/fewshot_lama}.
| [
{
"version": "v1",
"created": "Mon, 6 Sep 2021 23:29:36 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Sep 2021 22:31:32 GMT"
}
] | 1,631,577,600,000 | [
[
"He",
"Tianxing",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Glass",
"James",
""
]
] |
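Entry 2109.02772 above reports that finetuning only the bias vectors of a pretrained language model works well for few-shot knowledge probing. The sketch below shows the generic setup of freezing everything except bias terms (BitFit-style tuning); the toy model is a stand-in for a real pretrained LM, not the paper's architecture.

```python
import torch.nn as nn

def freeze_all_but_biases(model: nn.Module):
    """Keep only bias vectors trainable and freeze all other weights. With
    10-20 probing examples this tunes a tiny fraction of the parameters."""
    trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith(".bias") or name == "bias"
        trainable += param.numel() * param.requires_grad
    return trainable

# Toy stand-in for a pretrained LM encoder block.
model = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
print(freeze_all_but_biases(model), "trainable parameters (biases only)")
```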
2109.02843 | Jin Xie | Jin Xie, Xinyu Li, Liang Gao, Lin Gui | A new neighborhood structure for job shop scheduling problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Job shop scheduling problem (JSP) is a widely studied NP-complete
combinatorial optimization problem. Neighborhood structures play a critical
role in solving JSP. At present, there are three state-of-the-art neighborhood
structures, i.e., N5, N6, and N7. Improving the upper bounds of some famous
benchmarks is inseparable from the role of these neighborhood structures.
However, these existing neighborhood structures only consider the movement of
critical operations within a critical block. According to our experiments, it
is also possible to improve the makespan of a scheduling scheme by moving a
critical operation outside its critical block. According to the above finding,
this paper proposes a new N8 neighborhood structure considering the movement of
critical operations within a critical block and the movement of critical
operations outside the critical block. Besides, a neighborhood clipping method
is designed to avoid invalid movement, reducing the computational time. Tabu
search (TS) is a commonly used algorithm framework combined with neighborhood
structures. This paper uses this framework to compare the N8 neighborhood
structure with N5, N6, and N7 neighborhood structures on four famous
benchmarks. The experimental results verify that the N8 neighborhood structure
is more effective and efficient in solving JSP than the other state-of-the-art
neighborhood structures.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 03:52:31 GMT"
}
] | 1,631,059,200,000 | [
[
"Xie",
"Jin",
""
],
[
"Li",
"Xinyu",
""
],
[
"Gao",
"Liang",
""
],
[
"Gui",
"Lin",
""
]
] |
2109.02866 | Thomas P Quinn | Thomas P Quinn, Simon Coghlan | Readying Medical Students for Medical AI: The Need to Embed AI Ethics
Education | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Medical students will almost inevitably encounter powerful medical AI systems
early in their careers. Yet, contemporary medical education does not adequately
equip students with the basic clinical proficiency in medical AI needed to use
these tools safely and effectively. Education reform is urgently needed, but
not easily implemented, largely due to an already jam-packed medical curriculum.
In this article, we propose an education reform framework as an effective and
efficient solution, which we call the Embedded AI Ethics Education Framework.
Unlike other calls for education reform to accommodate AI teaching that are
more radical in scope, our framework is modest and incremental. It leverages
existing bioethics or medical ethics curricula to develop and deliver content
on the ethical issues associated with medical AI, especially the harms of
technology misuse, disuse, and abuse that affect the risk-benefit analyses at
the heart of healthcare. In doing so, the framework provides a simple tool for
going beyond the "What?" and the "Why?" of medical AI ethics education, to
answer the "How?", giving universities, course directors, and/or professors a
broad road-map for equipping their students with the necessary clinical
proficiency in medical AI.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 04:57:29 GMT"
}
] | 1,631,059,200,000 | [
[
"Quinn",
"Thomas P",
""
],
[
"Coghlan",
"Simon",
""
]
] |
2109.02956 | Scott McLachlan Dr | Scott McLachlan, Martin Neil, Kudakwashe Dube, Ronny Bogani, Norman
Fenton, and Burkhard Schaffer | Smart Automotive Technology Adherence to the Law: (De)Constructing Road
Rules for Autonomous System Development, Verification and Safety | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Driving is an intuitive task that requires skills, constant alertness and
vigilance for unexpected events. The driving task also requires long
concentration spans focusing on the entire task for prolonged periods, and
sophisticated negotiation skills with other road users, including wild animals.
These requirements are particularly important when approaching intersections,
overtaking, giving way, merging, turning and while adhering to the vast body of
road rules. Modern motor vehicles now include an array of smart assistive and
autonomous driving systems capable of subsuming some, most, or in limited
cases, all of the driving task. The UK Department of Transport's response to
the Safe Use of Automated Lane Keeping System consultation proposes that these
systems are tested for compliance with relevant traffic rules. Building these
smart automotive systems requires software developers with highly technical
software engineering skills, and now a lawyer's in-depth knowledge of traffic
legislation as well. These skills are required to ensure the systems are able
to safely perform their tasks while being observant of the law. This paper
presents an approach for deconstructing the complicated legalese of traffic law
and representing its requirements and flow. The approach (de)constructs road
rules in legal terminology and specifies them in structured English logic that
is expressed as Boolean logic for automation and Lawmaps for visualisation. We
demonstrate an example using these tools leading to the construction and
validation of a Bayesian Network model. We strongly believe these tools to be
approachable by programmers and the general public, and capable of use in
developing Artificial Intelligence to underpin motor vehicle smart systems, and
in validation to ensure these systems are considerate of the law when making
decisions.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 09:22:15 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Sep 2021 08:07:18 GMT"
}
] | 1,631,491,200,000 | [
[
"McLachlan",
"Scott",
""
],
[
"Neil",
"Martin",
""
],
[
"Dube",
"Kudakwashe",
""
],
[
"Bogani",
"Ronny",
""
],
[
"Fenton",
"Norman",
""
],
[
"Schaffer",
"Burkhard",
""
]
] |
2109.03106 | Matthias Thimm | Matthias Thimm and Federico Cerutti and Mauro Vallati | Fudge: A light-weight solver for abstract argumentation based on SAT
reductions | Part of ICCMA 2021 proceedings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Fudge, an abstract argumentation solver that tightly integrates
satisfiability solving technology to solve a series of abstract argumentation
problems. While most of the encodings used by Fudge derive from standard
translation approaches, Fudge makes use of completely novel encodings to solve
the skeptical reasoning problem wrt. preferred semantics and problems wrt.
ideal semantics.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 14:07:48 GMT"
}
] | 1,631,059,200,000 | [
[
"Thimm",
"Matthias",
""
],
[
"Cerutti",
"Federico",
""
],
[
"Vallati",
"Mauro",
""
]
] |
2109.03162 | Mario Alviano | Mario Alviano | The pyglaf argumentation reasoner (ICCMA2021) | Part of ICCMA 2021 proceedings | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The pyglaf reasoner takes advantage of circumscription to solve computational
problems of abstract argumentation frameworks. In fact, many of these problems
are reduced to circumscription by means of linear encodings, and a few others
are solved by means of a sequence of calls to an oracle for circumscription.
Within pyglaf, Python is used to build the encodings and to control the
execution of the external circumscription solver, which extends the SAT solver
glucose and implements algorithms taking advantage of unsatisfiable core
analysis and incremental computation.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 15:54:52 GMT"
}
] | 1,631,059,200,000 | [
[
"Alviano",
"Mario",
""
]
] |
2109.03166 | Wolfgang Dvo\v{r}\'ak | Wolfgang Dvo\v{r}\'ak, Matthias K\"onig, Johannes P. Wallner, Stefan
Woltran | Aspartix-V21 | Part of ICCMA 2021 proceedings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this solver description we present ASPARTIX-V, in its 2021 edition, which
participates in the International Competition on Computational Models of
Argumentation (ICCMA) 2021. ASPARTIX-V is capable of solving all classical
(static) reasoning tasks part of ICCMA'21 and extends the ASPARTIX system suite
by incorporation of recent ASP language constructs (e.g. conditional literals),
domain heuristics within ASP, and multi-shot methods. In this light ASPARTIX-V
deviates from the traditional focus of ASPARTIX on monolithic approaches (i.e.,
one-shot solving via a single ASP encoding) to further enhance performance.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 15:59:51 GMT"
}
] | 1,631,059,200,000 | [
[
"Dvořák",
"Wolfgang",
""
],
[
"König",
"Matthias",
""
],
[
"Wallner",
"Johannes P.",
""
],
[
"Woltran",
"Stefan",
""
]
] |
2109.03202 | Renato Luiz De Freitas Cunha | Renato Luiz de Freitas Cunha, Luiz Chaimowicz | On the impact of MDP design for Reinforcement Learning agents in
Resource Management | 15 pages, 6 figures. Accepted for publication at BRACIS 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent progress in Reinforcement Learning applications to Resource
Management presents MDPs without a deeper analysis of the impacts of design
decisions on agent performance. In this paper, we compare and contrast four
different MDP variations, discussing their computational requirements and
impacts on agent performance by means of an empirical analysis. We conclude by
showing that, in our experiments, when using Multi-Layer Perceptrons as
approximation function, a compact state representation allows transfer of
agents between environments, and that transferred agents have good performance
and outperform specialized agents in 80\% of the tested scenarios, even without
retraining.
| [
{
"version": "v1",
"created": "Tue, 7 Sep 2021 17:13:11 GMT"
}
] | 1,631,059,200,000 | [
[
"Cunha",
"Renato Luiz de Freitas",
""
],
[
"Chaimowicz",
"Luiz",
""
]
] |
2109.03391 | Bing Wei | Bing Wei, Yudi Zhao, Kuangrong Hao, and Lei Gao | Visual Sensation and Perception Computational Models for Deep Learning:
State of the art, Challenges and Prospects | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Visual sensation and perception refers to the process of sensing, organizing,
identifying, and interpreting visual information in environmental awareness and
understanding. Computational models inspired by visual perception have the
characteristics of complexity and diversity, as they come from many subjects
such as cognition science, information science, and artificial intelligence. In
this paper, visual perception computational models oriented toward deep learning are
investigated systematically from the perspectives of biological visual mechanisms and
computational vision theory. Then, some points of view about the prospects of the
visual perception computational models are presented. Finally, this paper also
summarizes the current challenges of visual perception and predicts its future
development trends. This survey will provide a comprehensive
reference for research in this direction.
| [
{
"version": "v1",
"created": "Wed, 8 Sep 2021 01:51:24 GMT"
}
] | 1,631,145,600,000 | [
[
"Wei",
"Bing",
""
],
[
"Zhao",
"Yudi",
""
],
[
"Hao",
"Kuangrong",
""
],
[
"Gao",
"Lei",
""
]
] |
2109.03554 | Fan Wang | Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang,
Haifeng Wang | Evolving Decomposed Plasticity Rules for Information-Bottlenecked
Meta-Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial neural networks (ANNs) are typically confined to accomplishing
pre-defined tasks by learning a set of static parameters. In contrast,
biological neural networks (BNNs) can adapt to various new tasks by continually
updating the neural connections based on the inputs, which is aligned with the
paradigm of learning effective learning rules in addition to static parameters,
e.g., meta-learning. Among various biologically inspired learning rules,
Hebbian plasticity updates the neural network weights using local signals
without the guide of an explicit target function, thus enabling an agent to
learn automatically without human efforts. However, typical plastic ANNs using
a large amount of meta-parameters violate the nature of the genomics bottleneck
and potentially deteriorate the generalization capacity. This work proposes a
new learning paradigm decomposing those connection-dependent plasticity rules
into neuron-dependent rules thus accommodating $\Theta(n^2)$ learnable
parameters with only $\Theta(n)$ meta-parameters. We also thoroughly study the
effect of different neural modulation on plasticity. Our algorithms are tested
in challenging random 2D maze environments, where the agents have to use their
past experiences to shape the neural connections and improve their performances
for the future. The results of our experiment validate the following: 1.
Plasticity can be adopted to continually update a randomly initialized RNN to
surpass pre-trained, more sophisticated recurrent models, especially when
it comes to long-term memorization. 2. Following the genomics bottleneck, the
proposed decomposed plasticity can be comparable to or even more effective than
canonical plasticity rules in some instances.
| [
{
"version": "v1",
"created": "Wed, 8 Sep 2021 11:34:14 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Sep 2021 13:23:27 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Apr 2022 10:18:11 GMT"
},
{
"version": "v4",
"created": "Mon, 16 May 2022 12:55:12 GMT"
},
{
"version": "v5",
"created": "Wed, 18 May 2022 10:49:53 GMT"
},
{
"version": "v6",
"created": "Tue, 14 Jun 2022 01:48:12 GMT"
},
{
"version": "v7",
"created": "Mon, 19 Sep 2022 07:58:27 GMT"
}
] | 1,663,632,000,000 | [
[
"Wang",
"Fan",
""
],
[
"Tian",
"Hao",
""
],
[
"Xiong",
"Haoyi",
""
],
[
"Wu",
"Hua",
""
],
[
"Fu",
"Jie",
""
],
[
"Cao",
"Yang",
""
],
[
"Kang",
"Yu",
""
],
[
"Wang",
"Haifeng",
""
]
] |
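Entry 2109.03554 above decomposes connection-dependent plasticity rules into neuron-dependent ones, reducing meta-parameters from Theta(n^2) to Theta(n). The sketch below shows one plausible form of such a neuron-wise modulated Hebbian update; the exact rule, modulation signals, and dimensions used in the paper may differ.

```python
import numpy as np

n_pre, n_post, eta = 64, 32, 0.01
rng = np.random.default_rng(0)

W = rng.normal(0, 0.1, (n_post, n_pre))        # Theta(n^2) learnable weights

# A connection-dependent rule would need one plasticity coefficient per weight
# (another Theta(n^2) meta-parameters). The decomposed variant keeps only
# per-neuron modulation vectors, i.e. Theta(n) meta-parameters.
a_post = rng.normal(0, 1, n_post)               # one coefficient per post-neuron
b_pre = rng.normal(0, 1, n_pre)                 # one coefficient per pre-neuron

def hebbian_step(W, pre, post):
    """Outer-product Hebbian update, modulated neuron-wise:
    dW_ij = eta * a_i * b_j * post_i * pre_j."""
    return W + eta * np.outer(a_post * post, b_pre * pre)

pre = rng.normal(0, 1, n_pre)                   # presynaptic activity
post = np.tanh(W @ pre)                         # postsynaptic activity
W = hebbian_step(W, pre, post)
print(W.shape)
```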
2109.03813 | Sumedh Sontakke | Sumedh A Sontakke, Sumegh Roychowdhury, Mausoom Sarkar, Nikaash Puri,
Balaji Krishnamurthy, Laurent Itti | Video2Skill: Adapting Events in Demonstration Videos to Skills in an
Environment using Cyclic MDP Homomorphisms | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Humans excel at learning long-horizon tasks from demonstrations augmented
with textual commentary, as evidenced by the burgeoning popularity of tutorial
videos online. Intuitively, this capability can be separated into 2 distinct
subtasks - first, dividing a long-horizon demonstration sequence into
semantically meaningful events; second, adapting such events into meaningful
behaviors in one's own environment. Here, we present Video2Skill (V2S), which
attempts to extend this capability to artificial agents by allowing a robot arm
to learn from human cooking videos. We first use sequence-to-sequence
Auto-Encoder style architectures to learn a temporal latent space for events in
long-horizon demonstrations. We then transfer these representations to the
robotic target domain, using a small amount of offline and unrelated
interaction data (sequences of state-action pairs of the robot arm controlled
by an expert) to adapt these events into actionable representations, i.e.,
skills. Through experiments, we demonstrate that our approach results in
self-supervised analogy learning, where the agent learns to draw analogies
between motions in human demonstration data and behaviors in the robotic
environment. We also demonstrate the efficacy of our approach on model learning
- demonstrating how Video2Skill utilizes prior knowledge from human
demonstration to outperform traditional model learning of long-horizon
dynamics. Finally, we demonstrate the utility of our approach for non-tabula
rasa decision-making, i.e., utilizing video demonstration for zero-shot skill
generation.
| [
{
"version": "v1",
"created": "Wed, 8 Sep 2021 17:59:01 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Sep 2021 18:55:43 GMT"
}
] | 1,631,491,200,000 | [
[
"Sontakke",
"Sumedh A",
""
],
[
"Roychowdhury",
"Sumegh",
""
],
[
"Sarkar",
"Mausoom",
""
],
[
"Puri",
"Nikaash",
""
],
[
"Krishnamurthy",
"Balaji",
""
],
[
"Itti",
"Laurent",
""
]
] |
2109.03952 | Ninareh Mehrabi | Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram
Galstyan | Attributing Fair Decisions with Attention Interventions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread use of Artificial Intelligence (AI) in consequential domains,
such as healthcare and parole decision-making systems, has drawn intense
scrutiny on the fairness of these methods. However, ensuring fairness is often
insufficient as the rationale for a contentious decision needs to be audited,
understood, and defended. We propose that the attention mechanism can be used
to ensure fair outcomes while simultaneously providing feature attributions to
account for how a decision was made. Toward this goal, we design an
attention-based model that can be leveraged as an attribution framework. It can
identify features responsible for both performance and fairness of the model
through attention interventions and attention weight manipulation. Using this
attribution framework, we then design a post-processing bias mitigation
strategy and compare it with a suite of baselines. We demonstrate the
versatility of our approach by conducting experiments on two distinct data
types, tabular and textual.
| [
{
"version": "v1",
"created": "Wed, 8 Sep 2021 22:28:44 GMT"
}
] | 1,631,232,000,000 | [
[
"Mehrabi",
"Ninareh",
""
],
[
"Gupta",
"Umang",
""
],
[
"Morstatter",
"Fred",
""
],
[
"Steeg",
"Greg Ver",
""
],
[
"Galstyan",
"Aram",
""
]
] |
2109.03958 | Duong Nguyen | Duong Nguyen and Ronan Fablet | TrAISformer -- A Transformer Network with Sparse Augmented Data
Representation and Cross Entropy Loss for AIS-based Vessel Trajectory
Prediction | null | null | 10.1109/ACCESS.2024.3349957 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Vessel trajectory prediction plays a pivotal role in numerous maritime
applications and services. While the Automatic Identification System (AIS)
offers a rich source of information to address this task, forecasting vessel
trajectory using AIS data remains challenging, even for modern machine learning
techniques, because of the inherent heterogeneous and multimodal nature of
motion data. In this paper, we propose a novel approach to tackle these
challenges. We introduce a discrete, high-dimensional representation of AIS
data and a new loss function designed to explicitly address heterogeneity and
multimodality. The proposed model, referred to as TrAISformer, is a modified
transformer network that extracts long-term temporal patterns in AIS vessel
trajectories in the proposed enriched space to forecast the positions of
vessels several hours ahead. We report experimental results on real, publicly
available AIS data. TrAISformer significantly outperforms state-of-the-art
methods, with an average prediction performance below 10 nautical miles up to
~10 hours.
| [
{
"version": "v1",
"created": "Wed, 8 Sep 2021 22:44:33 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 20:37:47 GMT"
},
{
"version": "v3",
"created": "Sun, 14 May 2023 13:47:01 GMT"
},
{
"version": "v4",
"created": "Wed, 3 Jan 2024 14:22:51 GMT"
}
] | 1,704,758,400,000 | [
[
"Nguyen",
"Duong",
""
],
[
"Fablet",
"Ronan",
""
]
] |
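Entry 2109.03958 above relies on a sparse, discrete representation of AIS messages trained with a cross-entropy loss. The sketch below shows the basic idea of binning latitude, longitude, speed, and course into indices; the bin counts and region of interest are assumptions, not the paper's settings.

```python
import numpy as np

# Hypothetical bin counts and region of interest for a discrete ("four-hot")
# encoding of latitude, longitude, speed over ground and course over ground.
LAT_BINS, LON_BINS, SOG_BINS, COG_BINS = 250, 270, 30, 72
LAT_RANGE, LON_RANGE = (47.5, 50.0), (-7.0, -4.0)   # example ROI (assumed)

def discretize(lat, lon, sog, cog):
    """Map one AIS message to four bin indices; a model can then be trained
    with a cross-entropy loss over each attribute's bins instead of a
    regression loss on raw coordinates."""
    def to_bin(x, lo, hi, n):
        return int(np.clip((x - lo) / (hi - lo) * n, 0, n - 1))
    return (to_bin(lat, *LAT_RANGE, LAT_BINS),
            to_bin(lon, *LON_RANGE, LON_BINS),
            to_bin(sog, 0.0, 30.0, SOG_BINS),      # knots
            to_bin(cog, 0.0, 360.0, COG_BINS))     # degrees

print(discretize(lat=48.2, lon=-5.1, sog=12.3, cog=215.0))
```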
2109.04004 | Yunyou Huang | Yunyou Huang, Nana Wang, Suqin Tang, Li Ma, Tianshu Hao, Zihan Jiang,
Fan Zhang, Guoxin Kang, Xiuxia Miao, Xianglong Guan, Ruchang Zhang, Zhifei
Zhang and Jianfeng Zhan | OpenClinicalAI: enabling AI to diagnose diseases in real-world clinical
settings | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper quantitatively reveals the state-of-the-art and
state-of-the-practice AI systems only achieve acceptable performance on the
stringent conditions that all categories of subjects are known, which we call
closed clinical settings, but fail to work in real-world clinical settings.
Compared to the diagnosis task in the closed setting, real-world clinical
settings pose severe challenges, and we must treat them differently. We build a
clinical AI benchmark named Clinical AIBench to set up real-world clinical
settings to facilitate research. We propose an open, dynamic machine learning
framework and develop an AI system named OpenClinicalAI to diagnose diseases in
real-world clinical settings. The first versions of Clinical AIBench and
OpenClinicalAI target Alzheimer's disease. In the real-world clinical setting,
OpenClinicalAI significantly outperforms the state-of-the-art AI system. In
addition, OpenClinicalAI develops personalized diagnosis strategies to avoid
unnecessary testing and seamlessly collaborates with clinicians. It is
promising to be embedded in the current medical systems to improve medical
services.
| [
{
"version": "v1",
"created": "Thu, 9 Sep 2021 02:59:36 GMT"
}
] | 1,631,232,000,000 | [
[
"Huang",
"Yunyou",
""
],
[
"Wang",
"Nana",
""
],
[
"Tang",
"Suqin",
""
],
[
"Ma",
"Li",
""
],
[
"Hao",
"Tianshu",
""
],
[
"Jiang",
"Zihan",
""
],
[
"Zhang",
"Fan",
""
],
[
"Kang",
"Guoxin",
""
],
[
"Miao",
"Xiuxia",
""
],
[
"Guan",
"Xianglong",
""
],
[
"Zhang",
"Ruchang",
""
],
[
"Zhang",
"Zhifei",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
2109.04083 | Charles Evans | Charles Evans, Atoosa Kasirzadeh | User Tampering in Reinforcement Learning Recommender Systems | In proceedings of the 6th AAAI/ACM Conference on Artificial
Intelligence, Ethics and Society (AIES '23) | null | 10.1145/3600211.3604669 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce new formal methods and provide empirical evidence
to highlight a unique safety concern prevalent in reinforcement learning
(RL)-based recommendation algorithms -- 'user tampering.' User tampering is a
situation where an RL-based recommender system may manipulate a media user's
opinions through its suggestions as part of a policy to maximize long-term user
engagement. We use formal techniques from causal modeling to critically analyze
prevailing solutions proposed in the literature for implementing scalable
RL-based recommendation systems, and we observe that these methods do not
adequately prevent user tampering. Moreover, we evaluate existing mitigation
strategies for reward tampering issues, and show that these methods are
insufficient in addressing the distinct phenomenon of user tampering within the
context of recommendations. We further reinforce our findings with a simulation
study of an RL-based recommendation system focused on the dissemination of
political content. Our study shows that a Q-learning algorithm consistently
learns to exploit its opportunities to polarize simulated users with its early
recommendations in order to have more consistent success with subsequent
recommendations that align with this induced polarization. Our findings
emphasize the necessity for developing safer RL-based recommendation systems
and suggest that achieving such safety would require a fundamental shift in the
design away from the approaches we have seen in the recent literature.
| [
{
"version": "v1",
"created": "Thu, 9 Sep 2021 07:53:23 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Nov 2022 20:57:37 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Jul 2023 14:19:55 GMT"
}
] | 1,690,243,200,000 | [
[
"Evans",
"Charles",
""
],
[
"Kasirzadeh",
"Atoosa",
""
]
] |
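A minimal sketch of the polarization dynamic described in the abstract above: a tabular Q-learning recommender chooses between left-leaning, neutral, and right-leaning content, a simulated user's opinion drifts toward what it is shown, and engagement grows with the strength of the induced opinion. The user model, reward shape, and all parameters are illustrative assumptions, not the authors' exact simulation.

```python
import random

ACTIONS = [-1, 0, +1]          # left-leaning, neutral, right-leaning content
N_BINS, EPISODES, HORIZON = 11, 2000, 20
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def bin_opinion(o):            # discretize opinion in [-1, 1] into state bins
    return min(N_BINS - 1, int((o + 1) / 2 * N_BINS))

Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]

for _ in range(EPISODES):
    opinion = 0.0                                  # user starts unpolarized
    for _ in range(HORIZON):
        s = bin_opinion(opinion)
        a = random.randrange(len(ACTIONS)) if random.random() < EPS \
            else max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        # assumed user model: opinion drifts toward the recommended content,
        # and engagement (reward) is higher for content aligned with the opinion
        opinion = max(-1.0, min(1.0, 0.9 * opinion + 0.1 * ACTIONS[a]))
        reward = 1.0 + abs(opinion) * (1.0 if ACTIONS[a] * opinion >= 0 else -0.5)
        s2 = bin_opinion(opinion)
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2]) - Q[s][a])

# Greedy action per opinion bin: tampering shows up as the agent pushing
# users away from the neutral bins toward one of the extremes.
print([ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])] for s in range(N_BINS)])
```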
2109.04197 | HAL CCSD | Anastasiia Usmanova (INPG), Fran\c{c}ois Portet (GETALP), Philippe
Lalanda (M-PSI), German Vega (M-PSI) | A distillation-based approach integrating continual learning and
federated learning for pervasive services | null | 3rd Workshop on Continual and Multimodal Learning for Internet of
Things -- Co-located with IJCAI 2021, Aug 2021, Montreal, Canada | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning, a new machine learning paradigm enhancing the use of edge
devices, is receiving a lot of attention in the pervasive community to support
the development of smart services. Nevertheless, this approach still needs to
be adapted to the specificity of the pervasive domain. In particular, issues
related to continual learning need to be addressed. In this paper, we present a
distillation-based approach dealing with catastrophic forgetting in federated
learning scenario. Specifically, Human Activity Recognition tasks are used as a
demonstration domain.
| [
{
"version": "v1",
"created": "Thu, 9 Sep 2021 12:09:53 GMT"
}
] | 1,631,232,000,000 | [
[
"Usmanova",
"Anastasiia",
"",
"INPG"
],
[
"Portet",
"François",
"",
"GETALP"
],
[
"Lalanda",
"Philippe",
"",
"M-PSI"
],
[
"Vega",
"German",
"",
"M-PSI"
]
] |
2109.04730 | Jintao Su | Zongtao Liu, Jing Xu, Jintao Su, Tao Xiao and Yang Yang | Boosting Graph Search with Attention Network for Solving the General
Orienteering Problem | 7 pages, 3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, several studies have explored the use of neural network to solve
different routing problems, which is an auspicious direction. These studies
usually design an encoder-decoder based framework that uses encoder embeddings
of nodes and the problem-specific context to produce a node sequence (path),
and further refine the produced result with beam search. However, existing
models can only take node coordinates as input, ignore the self-referential
property of the studied routing problems, and fail to account for the low
reliability of node selection in the initial stage; they are therefore hard to
apply in the real world.
In this paper, we take the orienteering problem as an example to tackle these
limitations. We propose a novel combination of a variant beam search algorithm
and a learned heuristic for solving the general orienteering problem. We
acquire the heuristic with an attention network that takes the distances among
nodes as input, and learn it via a reinforcement learning framework. The
empirical studies show that our method can surpass a wide range of baselines
and achieve results close to the optimal or highly specialized approach. Also,
our proposed framework can be easily applied to other routing problems. Our
code is publicly available.
| [
{
"version": "v1",
"created": "Fri, 10 Sep 2021 08:23:19 GMT"
}
] | 1,631,491,200,000 | [
[
"Liu",
"Zongtao",
""
],
[
"Xu",
"Jing",
""
],
[
"Su",
"Jintao",
""
],
[
"Xiao",
"Tao",
""
],
[
"Yang",
"Yang",
""
]
] |
2109.04830 | Armin Wolf | Marc Geitz, Cristian Grozea, Wolfgang Steigerwald, Robin St\"ohr, and
Armin Wolf | Solving the Extended Job Shop Scheduling Problem with AGVs -- Classical
and Quantum Approaches | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The subject of Job Scheduling Optimisation (JSO) deals with the scheduling of
jobs in an organization, so that the single working steps are optimally
organized regarding the postulated targets. In this paper a use case is
provided which deals with a sub-aspect of JSO, the Job Shop Scheduling Problem
(JSSP or JSP). Like many optimization problems, JSSP is NP-complete, which
means its complexity increases exponentially with every node in the system. The
goal of the use case is to show how to create an optimized duty roster for
certain workpieces in a flexibly organized machinery, combined with an
Autonomous
Ground Vehicle (AGV), using Constraint Programming (CP) and Quantum Computing
(QC) alternatively. The results of a classical solution based on CP and on a
Quantum Annealing model are presented and discussed. All presented results have
been elaborated in the research project PlanQK.
| [
{
"version": "v1",
"created": "Fri, 10 Sep 2021 12:28:51 GMT"
}
] | 1,631,491,200,000 | [
[
"Geitz",
"Marc",
""
],
[
"Grozea",
"Cristian",
""
],
[
"Steigerwald",
"Wolfgang",
""
],
[
"Stöhr",
"Robin",
""
],
[
"Wolf",
"Armin",
""
]
] |
2109.05920 | Dimosthenis Tsouros | Dimosthenis C. Tsouros and Kostas Stergiou | Efficient Multiple Constraint Acquisition | null | Constraints 25.3 (2020): 180-225 | 10.1007/s10601-020-09311-4 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Constraint acquisition systems such as QuAcq and MultiAcq can assist
non-expert users to model their problems as constraint networks by classifying
(partial) examples as positive or negative. For each negative example, the
former focuses on one constraint of the target network, while the latter can
learn a maximum number of constraints. Two bottlenecks of the acquisition
process where both these algorithms encounter problems are the large number of
queries required to reach convergence, and the high cpu times needed to
generate queries, especially near convergence. In this paper we propose
algorithmic and heuristic methods to deal with both these issues. We first
describe an algorithm, called MQuAcq, that blends the main idea of MultiAcq
into QuAcq resulting in a method that learns as many constraints as MultiAcq
does after a negative example, but with a lower complexity. A detailed
theoretical analysis of the proposed algorithm is also presented. Then we turn
our attention to query generation, which is a significant but rather overlooked
part of the acquisition process. We describe how query generation in a typical
constraint acquisition system operates, and we propose heuristics for improving
its efficiency. Experiments from various domains demonstrate that our resulting
algorithm that integrates all the new techniques does not only generate
considerably fewer queries than QuAcq and MultiAcq, but it is also by far
faster than both of them, in average query generation time as well as in total
run time, and also largely alleviates the premature convergence problem.
| [
{
"version": "v1",
"created": "Mon, 13 Sep 2021 12:42:16 GMT"
}
] | 1,631,577,600,000 | [
[
"Tsouros",
"Dimosthenis C.",
""
],
[
"Stergiou",
"Kostas",
""
]
] |
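The query-based acquisition loop that QuAcq-style systems build on can be illustrated with a toy version: the learner keeps candidate binary constraints, asks the user to classify generated assignments, prunes candidates violated by positive examples, and, when a negative example violates exactly one remaining candidate, learns that candidate. The variables, domain, hidden target network, and naive query generation below are simplified placeholders, not the MQuAcq or MultiAcq algorithms themselves.

```python
from itertools import combinations, product

VARS, DOMAIN = ["x1", "x2", "x3"], [1, 2, 3]
# hidden target network the "user" answers from (unknown to the learner)
TARGET = {("x1", "x2"), ("x2", "x3")}            # pairs that must take different values

def user_classifies(assignment):                 # membership oracle: positive / negative
    return all(assignment[a] != assignment[b] for a, b in TARGET)

# bias: every candidate "not-equal" constraint over pairs of variables
candidates = set(combinations(VARS, 2))
learned = set()

for values in product(DOMAIN, repeat=len(VARS)): # naive query generation
    example = dict(zip(VARS, values))
    if any(example[a] == example[b] for a, b in learned):
        continue                                 # already explained by the learned network
    if user_classifies(example):
        # positive example: candidates it violates cannot be in the target
        candidates = {c for c in candidates if example[c[0]] != example[c[1]]}
    else:
        # a negative example violating exactly one remaining candidate pins down a culprit
        violated = [c for c in candidates if example[c[0]] == example[c[1]]]
        if len(violated) == 1:
            learned.add(violated[0])
            candidates.remove(violated[0])

print("learned constraints:", sorted(learned))
```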
2109.06505 | Saksham Consul | Saksham Consul, Jugoslav Stojcheski, Valkyrie Felso, Falk Lieder | Optimal To-Do List Gamification for Long Term Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most people struggle with prioritizing work. While inexact heuristics have
been developed over time, there is still no tractable principled algorithm for
deciding which of the many possible tasks one should tackle in any given day,
month, week, or year. Additionally, some people suffer from cognitive biases
such as the present bias, leading to prioritization of their immediate
experience over long-term consequences which manifests itself as
procrastination and inefficient task prioritization. Our method utilizes
optimal gamification to help people overcome these problems by incentivizing
each task by a number of points that convey how valuable it is in the long-run.
We extend the previous version of our optimal gamification method with added
services for helping people decide which tasks should and should not be done
when there is not enough time to do everything. To improve the efficiency and
scalability of the to-do list solver, we designed a hierarchical procedure that
tackles the problem from the top-level goals to fine-grained tasks. We test the
accuracy of the incentivised to-do list by comparing the performance of the
strategy with the points computed exactly using Value Iteration for a variety
of case studies. These case studies were specifically designed to cover the
corner cases to get an accurate judge of performance. Our method yielded the
same performance as the exact method for all case studies. To demonstrate its
functionality, we released an API that makes it easy to deploy our method in
Web and app services. We assessed the scalability of our method by applying it
to to-do lists with increasingly larger numbers of goals, sub-goals per goal,
hierarchically nested levels of subgoals. We found that the method provided
through our API is able to tackle fairly large to-do lists with as many as 576 tasks.
This indicates that our method is suitable for real-world applications.
| [
{
"version": "v1",
"created": "Tue, 14 Sep 2021 08:06:01 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Sep 2021 05:05:46 GMT"
}
] | 1,631,750,400,000 | [
[
"Consul",
"Saksham",
""
],
[
"Stojcheski",
"Jugoslav",
""
],
[
"Felso",
"Valkyrie",
""
],
[
"Lieder",
"Falk",
""
]
] |
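One way to read "incentivising each task by points computed with Value Iteration" is to treat the to-do list as a small MDP whose states are the sets of completed tasks, so that the point score of a task can be taken from its optimal action value. The task names, rewards, and discount factor below are made-up inputs for illustration, not the authors' system or API.

```python
from itertools import combinations

TASKS = {"write_intro": 2.0, "run_experiment": 5.0, "email_advisor": 1.0}
GAMMA = 0.95

# States are frozensets of completed tasks; completing a task yields its reward.
states = [frozenset(c) for r in range(len(TASKS) + 1)
          for c in combinations(TASKS, r)]
V = {s: 0.0 for s in states}

for _ in range(100):                      # value iteration until (near) convergence
    for s in states:
        pending = [t for t in TASKS if t not in s]
        if pending:
            V[s] = max(TASKS[t] + GAMMA * V[s | {t}] for t in pending)

# "Points" for each task from the empty list: its optimal one-step action value.
start = frozenset()
points = {t: TASKS[t] + GAMMA * V[start | {t}] for t in TASKS}
print(points)
```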
2109.06580 | Boris Gutkin | Hugo Lauren\c{c}on, Charbel-Rapha\"el S\'egerie, Johann Lussange,
Boris S. Gutkin | Continuous Homeostatic Reinforcement Learning for Self-Regulated
Autonomous Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Homeostasis is a prevalent process by which living beings maintain their
internal milieu around optimal levels. Multiple lines of evidence suggest that
living beings learn to act to predictively ensure homeostasis (allostasis). A
classical theory for such regulation is drive reduction, in which the drive is
a function of the difference between the current and the optimal internal
state. The recently
introduced homeostatic regulated reinforcement learning theory (HRRL), by
defining within the framework of reinforcement learning a reward function based
on the internal state of the agent, makes the link between the theories of
drive reduction and reinforcement learning. The HRRL makes it possible to
explain multiple eating disorders. However, the lack of continuous change in
the internal state of the agent with the discrete-time modeling has been so far
a key shortcoming of the HRRL theory. Here, we propose an extension of the
homeostatic reinforcement learning theory to a continuous environment in space
and time, while maintaining the validity of the theoretical results and the
behaviors explained by the model in discrete time. Inspired by the
self-regulating mechanisms abundantly present in biology, we also introduce a
model for the dynamics of the agent's internal state, requiring the agent to
continuously take actions to maintain homeostasis. Based on the
Hamilton-Jacobi-Bellman equation and function approximation with neural
networks, we derive a numerical scheme allowing the agent to learn directly how
its internal mechanism works, and to choose appropriate action policies via
reinforcement learning and an appropriate exploration of the environment. Our
numerical experiments show that the agent does indeed learn to behave in a way
that is beneficial to its survival in the environment, making our framework
promising for modeling animal dynamics and decision-making.
| [
{
"version": "v1",
"created": "Tue, 14 Sep 2021 11:03:58 GMT"
}
] | 1,631,664,000,000 | [
[
"Laurençon",
"Hugo",
""
],
[
"Ségerie",
"Charbel-Raphaël",
""
],
[
"Lussange",
"Johann",
""
],
[
"Gutkin",
"Boris S.",
""
]
] |
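The continuous-time homeostatic reward can be sketched by simulating an internal state that drifts away from its set point, defining a drive as the distance to that set point, and rewarding the agent for reducing the drive through its actions. The dynamics, set point, and action effects below are illustrative choices, not the exact model of the paper.

```python
import numpy as np

SET_POINT = np.array([0.7, 0.3])     # e.g. ideal energy and hydration levels
DT, DECAY = 0.01, 0.05               # Euler step and passive depletion rate

def drive(h):                        # drive: distance of the internal state to the set point
    return float(np.linalg.norm(h - SET_POINT))

def step(h, action, dt=DT):
    """One Euler step of the internal dynamics under an action (resource intake)."""
    dh = -DECAY * h + action         # passive decay plus the effect of the action
    h_next = np.clip(h + dt * dh, 0.0, 1.0)
    reward = drive(h) - drive(h_next)   # drive reduction as instantaneous reward
    return h_next, reward

h = np.array([0.2, 0.9])             # start depleted in one dimension, overfull in the other
for t in range(5):
    action = 0.5 * (SET_POINT - h)   # a crude corrective policy, for illustration only
    h, r = step(h, action)
    print(f"t={t} state={np.round(h, 3)} reward={r:.4f}")
```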
2109.06740 | Yagiz Savas | Yagiz Savas, Christos K. Verginis, Ufuk Topcu | Deceptive Decision-Making Under Uncertainty | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the design of autonomous agents that are capable of deceiving
outside observers about their intentions while carrying out tasks in
stochastic, complex environments. By modeling the agent's behavior as a Markov
decision process, we consider a setting where the agent aims to reach one of
multiple potential goals while deceiving outside observers about its true goal.
We propose a novel approach to model observer predictions based on the
principle of maximum entropy and to efficiently generate deceptive strategies
via linear programming. The proposed approach enables the agent to exhibit a
variety of tunable deceptive behaviors while ensuring the satisfaction of
probabilistic constraints on the behavior. We evaluate the performance of the
proposed approach via comparative user studies and present a case study on the
streets of Manhattan, New York, using real travel time distributions.
| [
{
"version": "v1",
"created": "Tue, 14 Sep 2021 14:56:23 GMT"
}
] | 1,631,664,000,000 | [
[
"Savas",
"Yagiz",
""
],
[
"Verginis",
"Christos K.",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
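The maximum-entropy observer idea can be illustrated by an observer that scores each candidate goal by how efficient the agent's path so far would be for that goal and forms a softmax posterior over goals; a deceptive agent then prefers moves that keep this posterior flat or tilted toward decoy goals. The grid, cost model, and temperature below are illustrative assumptions rather than the paper's linear-programming formulation.

```python
import numpy as np

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_posterior(start, current, goals, beta=1.0):
    """Softmax over goals, scoring each by how little detour the observed
    prefix implies: cost so far + cost to go vs. the direct cost from start."""
    scores = []
    for g in goals:
        detour = manhattan(start, current) + manhattan(current, g) - manhattan(start, g)
        scores.append(-beta * detour)        # small detour => high likelihood
    scores = np.array(scores)
    w = np.exp(scores - scores.max())
    return w / w.sum()

start, goals = (0, 0), [(6, 0), (0, 6)]      # true goal is goals[0], decoy is goals[1]
for current in [(1, 1), (2, 2), (4, 2), (6, 0)]:
    p = goal_posterior(start, current, goals)
    print(current, "P(goal):", np.round(p, 3))
```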
2109.07195 | Hector Geffner | Hector Geffner | Target Languages (vs. Inductive Biases) for Learning to Act and Plan | null | AAAI 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent breakthroughs in AI have shown the remarkable power of deep learning
and deep reinforcement learning. These developments, however, have been tied to
specific tasks, and progress in out-of-distribution generalization has been
limited. While it is assumed that these limitations can be overcome by
incorporating suitable inductive biases, the notion of inductive biases itself
is often left vague and does not provide meaningful guidance. In the paper, I
articulate a different learning approach where representations do not emerge
from biases in a neural architecture but are learned over a given target
language with a known semantics. The basic ideas are implicit in mainstream AI
where representations have been encoded in languages ranging from fragments of
first-order logic to probabilistic structural causal models. The challenge is
to learn from data the representations that have traditionally been crafted by
hand. Generalization is then a result of the semantics of the language. The
goals of this paper are to make these ideas explicit, to place them in a
broader context where the design of the target language is crucial, and to
illustrate them in the context of learning to act and plan. For this, after a
general discussion, I consider learning representations of actions, general
policies, and subgoals ("intrinsic rewards"). In these cases, learning is
formulated as a combinatorial problem but nothing prevents the use of deep
learning techniques instead. Indeed, learning representations over languages
with a known semantics provides an account of what is to be learned, while
learning representations with neural nets provides a complementary account of
how representations can be learned. The challenge and the opportunity is to
bring the two together.
| [
{
"version": "v1",
"created": "Wed, 15 Sep 2021 10:24:13 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Nov 2021 18:51:15 GMT"
}
] | 1,638,230,400,000 | [
[
"Geffner",
"Hector",
""
]
] |
2109.07436 | Sriram Gopalakrishnan | Sriram Gopalakrishnan, Mudit Verma, Subbarao Kambhampati | Computing Policies That Account For The Effects Of Human Agent
Uncertainty During Execution In Markov Decision Processes | 7 page paper, 6 pages supplemental material | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When humans are given a policy to execute, there can be policy execution
errors and deviations in policy if there is uncertainty in identifying a state.
This can happen due to the human agent's cognitive limitations and/or
perceptual errors. So an algorithm that computes a policy for a human to
execute ought to consider these effects in its computations. An optimal Markov
Decision Process (MDP) policy that is poorly executed (because of a human
agent) may be much worse than another policy that is suboptimal in the MDP, but
considers the human-agent's execution behavior. In this paper we consider two
problems that arise from state uncertainty; these are erroneous
state-inference, and extra-sensing actions that a person might take as a result
of their uncertainty. We present a framework to model the human agent's
behavior with respect to state uncertainty, which can be used to compute MDP
policies that account for these problems. This is followed by a hill climbing
algorithm to search for good policies given our model of the human agent. We
also present a branch and bound algorithm which can find the optimal policy for
such problems. We show experimental results in a Gridworld domain, and
warehouse-worker domain. Finally, we present human-subject studies that support
our human model assumptions.
| [
{
"version": "v1",
"created": "Wed, 15 Sep 2021 17:10:46 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Sep 2021 21:24:20 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Mar 2022 22:00:30 GMT"
}
] | 1,646,611,200,000 | [
[
"Gopalakrishnan",
"Sriram",
""
],
[
"Verma",
"Mudit",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2109.07556 | Ang Li | Ang Li and Judea Pearl | Unit Selection with Causal Diagram | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The unit selection problem aims to identify a set of individuals who are most
likely to exhibit a desired mode of behavior, for example, selecting
individuals who would respond one way if encouraged and a different way if not
encouraged. Using a combination of experimental and observational data, Li and
Pearl derived tight bounds on the "benefit function" - the payoff/cost
associated with selecting an individual with given characteristics. This paper
shows that these bounds can be narrowed significantly (enough to change
decisions) when structural information is available in the form of a causal
model. We address the problem of estimating the benefit function using
observational and experimental data when specific graphical criteria are
assumed to hold.
| [
{
"version": "v1",
"created": "Wed, 15 Sep 2021 20:06:25 GMT"
}
] | 1,631,836,800,000 | [
[
"Li",
"Ang",
""
],
[
"Pearl",
"Judea",
""
]
] |
2109.07827 | Paul Festor | Paul Festor, Giulia Luise, Matthieu Komorowski and A. Aldo Faisal | Enabling risk-aware Reinforcement Learning for medical interventions
through uncertainty decomposition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning (RL) is emerging as tool for tackling complex control
and decision-making problems. However, in high-risk environments such as
healthcare, manufacturing, automotive or aerospace, it is often challenging to
bridge the gap between an apparently optimal policy learnt by an agent and its
real-world deployment, due to the uncertainties and risk associated with it.
Broadly speaking RL agents face two kinds of uncertainty, 1. aleatoric
uncertainty, which reflects randomness or noise in the dynamics of the world,
and 2. epistemic uncertainty, which reflects the bounded knowledge of the agent
due to model limitations and finite amount of information/data the agent has
acquired about the world. These two types of uncertainty carry fundamentally
different implications for the evaluation of performance and the level of risk
or trust. Yet these aleatoric and epistemic uncertainties are generally
confounded as standard and even distributional RL is agnostic to this
difference. Here we propose how a distributional approach (UA-DQN) can be
recast to render uncertainties by decomposing the net effects of each
uncertainty. We demonstrate the operation of this method in grid world examples
to build intuition and then show a proof of concept application for an RL agent
operating as a clinical decision support system in critical care.
| [
{
"version": "v1",
"created": "Thu, 16 Sep 2021 09:36:53 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 16:38:20 GMT"
}
] | 1,651,104,000,000 | [
[
"Festor",
"Paul",
""
],
[
"Luise",
"Giulia",
""
],
[
"Komorowski",
"Matthieu",
""
],
[
"Faisal",
"A. Aldo",
""
]
] |
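A common way to render the two uncertainties separately, in the spirit of the decomposition discussed above, is an ensemble of predictive distributions: the spread of the ensemble means reflects epistemic uncertainty, while the average predictive variance reflects aleatoric noise. The ensemble values below are synthetic; UA-DQN itself works with learned return distributions rather than these toy Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose each of K ensemble members outputs a Gaussian predictive distribution
# (mean, variance) for the value of a state-action pair.
K = 5
means = rng.normal(loc=1.0, scale=0.3, size=K)      # disagreement between members
variances = rng.uniform(0.4, 0.6, size=K)           # noise each member expects

aleatoric = variances.mean()          # expected within-member variance (irreducible noise)
epistemic = means.var()               # variance of the member means (model uncertainty)
total = aleatoric + epistemic         # law-of-total-variance style split

print(f"aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}  total={total:.3f}")
```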
2109.08006 | Kwabena Nuamah | Kwabena Nuamah | Deep Algorithmic Question Answering: Towards a Compositionally Hybrid AI
for Algorithmic Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important aspect of artificial intelligence (AI) is the ability to reason
in a step-by-step "algorithmic" manner that can be inspected and verified for
its correctness. This is especially important in the domain of question
answering (QA). We argue that the challenge of algorithmic reasoning in QA can
be effectively tackled with a "systems" approach to AI which features a hybrid
use of symbolic and sub-symbolic methods including deep neural networks.
Additionally, we argue that while neural network models with end-to-end
training pipelines perform well in narrow applications such as image
classification and language modelling, they cannot, on their own, successfully
perform algorithmic reasoning, especially if the task spans multiple domains.
We discuss a few notable exceptions and point out how they are still limited
when the QA problem is widened to include other intelligence-requiring tasks.
However, deep learning, and machine learning in general, do play important
roles as components in the reasoning process. We propose an approach to
algorithmic reasoning for QA, Deep Algorithmic Question Answering (DAQA), based
on three desirable properties: interpretability, generalizability, and
robustness which such an AI system should possess, and conclude that they are
best achieved with a combination of hybrid and compositional AI.
| [
{
"version": "v1",
"created": "Thu, 16 Sep 2021 14:28:18 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Sep 2021 09:55:24 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Nov 2021 13:58:17 GMT"
}
] | 1,636,329,600,000 | [
[
"Nuamah",
"Kwabena",
""
]
] |
2109.08149 | Nicholas Polson Prof | Shiva Maharaj and Nick Polson | Karpov's Queen Sacrifices and AI | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Anatoly Karpov's Queen sacrifices are analyzed. Stockfish 14 NNUE -- an AI
chess engine -- evaluates how efficient Karpov's sacrifices are. For
comparative purposes, we provide a dataset on Karpov's Rook and Knight
sacrifices to test whether Karpov achieves a similar level of accuracy. Our
study has implications for human-AI interaction and how humans can better
understand the strategies employed by black-box AI algorithms. Finally, we
conclude with implications for human study in chess with computer engines.
| [
{
"version": "v1",
"created": "Wed, 15 Sep 2021 23:57:48 GMT"
}
] | 1,632,096,000,000 | [
[
"Maharaj",
"Shiva",
""
],
[
"Polson",
"Nick",
""
]
] |
2109.08290 | EPTCS | Akihiro Takemura, Katsumi Inoue | Generating Explainable Rule Sets from Tree-Ensemble Learning Methods by
Answer Set Programming | In Proceedings ICLP 2021, arXiv:2109.07914 | EPTCS 345, 2021, pp. 127-140 | 10.4204/EPTCS.345.26 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a method for generating explainable rule sets from tree-ensemble
learners using Answer Set Programming (ASP). To this end, we adopt a
decompositional approach where the split structures of the base decision trees
are exploited in the construction of rules, which in turn are assessed using
pattern mining methods encoded in ASP to extract interesting rules. We show how
user-defined constraints and preferences can be represented declaratively in
ASP to allow for transparent and flexible rule set generation, and how rules
can be used as explanations to help the user better understand the models.
Experimental evaluation with real-world datasets and popular tree-ensemble
algorithms demonstrates that our approach is applicable to a wide range of
classification tasks.
| [
{
"version": "v1",
"created": "Fri, 17 Sep 2021 01:47:38 GMT"
}
] | 1,632,096,000,000 | [
[
"Takemura",
"Akihiro",
""
],
[
"Inoue",
"Katsumi",
""
]
] |
2109.08292 | EPTCS | Ly Ly Trieu (New Mexico State University), Tran Cao Son (New Mexico
State University), Marcello Balduccini (Saint Joseph's University) | exp(ASPc) : Explaining ASP Programs with Choice Atoms and Constraint
Rules | In Proceedings ICLP 2021, arXiv:2109.07914. In Proceedings the 37th
International Conference on Logic Programming (ICLP 2021) | EPTCS 345, 2021, pp. 155-161 | 10.4204/EPTCS.345.28 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present an enhancement of exp(ASP), a system that generates explanation
graphs for a literal l - an atom a or its default negation ~a - given an answer
set A of a normal logic program P, which explain why l is true (or false) given
A and P. The new system, exp(ASPc), differs from exp(ASP) in that it supports
choice rules and utilizes constraint rules to provide explanation graphs that
include information about choices and constraints.
| [
{
"version": "v1",
"created": "Fri, 17 Sep 2021 01:48:14 GMT"
}
] | 1,632,096,000,000 | [
[
"Trieu",
"Ly Ly",
"",
"New Mexico State University"
],
[
"Son",
"Tran Cao",
"",
"New Mexico\n State University"
],
[
"Balduccini",
"Marcello",
"",
"Saint Joseph's University"
]
] |
2109.08425 | Anthony Hunter | Antonis Bikakis, Luke Dickens, Anthony Hunter, and Rob Miller | Repurposing of Resources: from Everyday Problem Solving through to
Crisis Management | 16 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The human ability to repurpose objects and processes is universal, but it is
not a well-understood aspect of human intelligence. Repurposing arises in
everyday situations such as finding substitutes for missing ingredients when
cooking, or for unavailable tools when doing DIY. It also arises in critical,
unprecedented situations needing crisis management. After natural disasters and
during wartime, people must repurpose the materials and processes available to
make shelter, distribute food, etc. Repurposing is equally important in
professional life (e.g. clinicians often repurpose medicines off-license) and
in addressing societal challenges (e.g. finding new roles for waste products).
Despite the importance of repurposing, the topic has received little academic
attention. By considering examples from a variety of domains such as every-day
activities, drug repurposing and natural disasters, we identify some principal
characteristics of the process and describe some technical challenges that
would be involved in modelling and simulating it. We consider cases of both
substitution, i.e. finding an alternative for a missing resource, and
exploitation, i.e. identifying a new role for an existing resource. We argue
that these ideas could be developed into a general formal theory of repurposing,
and that this could then lead to the development of AI methods based on
commonsense reasoning, argumentation, ontological reasoning, and various
machine learning methods, to develop tools to support repurposing in practice.
| [
{
"version": "v1",
"created": "Fri, 17 Sep 2021 09:36:56 GMT"
}
] | 1,632,096,000,000 | [
[
"Bikakis",
"Antonis",
""
],
[
"Dickens",
"Luke",
""
],
[
"Hunter",
"Anthony",
""
],
[
"Miller",
"Rob",
""
]
] |
2109.08621 | Yuta Saito | Yuta Saito, Takuma Udagawa, and Kei Tateno | Data-Driven Off-Policy Estimator Selection: An Application in User
Marketing on An Online Content Delivery Service | presented at REVEAL workshop, RecSys2020 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Off-policy evaluation (OPE) is the method that attempts to estimate the
performance of decision making policies using historical data generated by
different policies without conducting costly online A/B tests. Accurate OPE is
essential in domains such as healthcare, marketing or recommender systems to
avoid deploying poorly performing policies, as such policies may harm human lives
or destroy the user experience. Thus, many OPE methods with theoretical
backgrounds have been proposed. One emerging challenge with this trend is that
a suitable estimator can be different for each application setting. It is often
unknown to practitioners which estimator to use for their specific
applications and purposes. To identify a suitable estimator among many
candidates, we use a data-driven estimator selection procedure for off-policy
policy performance estimators as a practical solution. As proof of concept, we
use our procedure to select the best estimator to evaluate coupon treatment
policies on a real-world online content delivery service. In the experiment, we
first observe that a suitable estimator might change with different definitions
of the outcome variable, and thus the accurate estimator selection is critical
in real-world applications of OPE. Then, we demonstrate that, by utilizing the
estimator selection procedure, we can easily identify suitable estimators for
each purpose.
| [
{
"version": "v1",
"created": "Fri, 17 Sep 2021 15:53:53 GMT"
}
] | 1,632,096,000,000 | [
[
"Saito",
"Yuta",
""
],
[
"Udagawa",
"Takuma",
""
],
[
"Tateno",
"Kei",
""
]
] |
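Data-driven estimator selection can be sketched as follows: on logged bandit data where the value of the evaluation policy is known (or approximated from a held-out online experiment), compute several off-policy estimates, e.g. a simple direct plug-in and an inverse propensity score estimate, and pick the one with the smallest error. The synthetic logging policy, rewards, and ground truth below are stand-ins for real logged data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, actions = 5000, 3

# Synthetic logged data from a uniform logging policy.
a_log = rng.integers(0, actions, size=n)
true_mean = np.array([0.2, 0.5, 0.8])               # unknown in practice
r_log = rng.binomial(1, true_mean[a_log])
pi_log = np.full(n, 1.0 / actions)                  # logging propensities

# Evaluation policy: always play action 2; its true value is 0.8.
pi_eval = (a_log == 2).astype(float)

ips = np.mean(pi_eval / pi_log * r_log)             # inverse propensity scoring
dm = r_log[a_log == 2].mean()                       # naive plug-in on matching actions
ground_truth = 0.8                                  # e.g. measured in a small online A/B test

errors = {"IPS": abs(ips - ground_truth), "DM": abs(dm - ground_truth)}
print("estimates:", {"IPS": round(float(ips), 3), "DM": round(float(dm), 3)})
print("selected estimator:", min(errors, key=errors.get))
```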
2109.08662 | Mario Alviano | Mario Alviano, Wolfgang Faber, Martin Gebser | Aggregate Semantics for Propositional Answer Set Programs | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Answer Set Programming (ASP) emerged in the late 1990ies as a paradigm for
Knowledge Representation and Reasoning. The attractiveness of ASP builds on an
expressive high-level modeling language along with the availability of powerful
off-the-shelf solving systems. While the utility of incorporating aggregate
expressions in the modeling language has been realized almost simultaneously
with the inception of the first ASP solving systems, a general semantics of
aggregates and its efficient implementation have been long-standing challenges.
Aggregates have been proposed and widely used in database systems, and also in
the deductive database language Datalog, which is one of the main precursors of
ASP. The use of aggregates was, however, still restricted in Datalog (by either
disallowing recursion or only allowing monotone aggregates), while several ways
to integrate unrestricted aggregates evolved in the context of ASP. In this
survey, we pick up at this point of development by presenting and comparing the
main aggregate semantics that have been proposed for propositional ASP
programs. We highlight crucial properties such as computational complexity and
expressive power, and outline the capabilities and limitations of different
approaches by illustrative examples.
| [
{
"version": "v1",
"created": "Fri, 17 Sep 2021 17:38:55 GMT"
}
] | 1,632,096,000,000 | [
[
"Alviano",
"Mario",
""
],
[
"Faber",
"Wolfgang",
""
],
[
"Gebser",
"Martin",
""
]
] |
2109.08755 | Olivier Buffet | Yang You, Vincent Thomas, Francis Colas and Olivier Buffet | Solving infinite-horizon Dec-POMDPs using Finite State Controllers
within JESP | Extended version of ICTAI 2021 paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper looks at solving collaborative planning problems formalized as
Decentralized POMDPs (Dec-POMDPs) by searching for Nash equilibria, i.e.,
situations where each agent's policy is a best response to the other agents'
(fixed) policies. While the Joint Equilibrium-based Search for Policies (JESP)
algorithm does this in the finite-horizon setting relying on policy trees, we
propose here to adapt it to infinite-horizon Dec-POMDPs by using finite state
controller (FSC) policy representations. In this article, we (1) explain how to
turn a Dec-POMDP with $N-1$ fixed FSCs into an infinite-horizon POMDP whose
solution is an $N^\text{th}$ agent best response; (2) propose a JESP variant,
called Inf-JESP, using this to solve infinite-horizon Dec-POMDPs; (3) introduce
heuristic initializations for JESP aiming at leading to good solutions; and (4)
conduct experiments on state-of-the-art benchmark problems to evaluate our
approach.
| [
{
"version": "v1",
"created": "Fri, 17 Sep 2021 20:27:51 GMT"
}
] | 1,632,182,400,000 | [
[
"You",
"Yang",
""
],
[
"Thomas",
"Vincent",
""
],
[
"Colas",
"Francis",
""
],
[
"Buffet",
"Olivier",
""
]
] |
2109.08884 | Jean-Guy Mailly | Jean-Marie Lagniez, Emmanuel Lonca, Jean-Guy Mailly, Julien Rossit | Design and Results of ICCMA 2021 | 14 pages. Part of ICCMA 2021 proceedings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since 2015, the International Competition on Computational Models of
Argumentation (ICCMA) provides a systematic comparison of the different
algorithms for solving some classical reasoning problems in the domain of
abstract argumentation. This paper discusses the design of the Fourth
International Competition on Computational Models of Argumentation. We describe
the rules of the competition and the benchmark selection method that we used.
After a brief presentation of the competitors, we give an overview of the
results.
| [
{
"version": "v1",
"created": "Sat, 18 Sep 2021 09:01:36 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Oct 2021 15:36:30 GMT"
}
] | 1,633,564,800,000 | [
[
"Lagniez",
"Jean-Marie",
""
],
[
"Lonca",
"Emmanuel",
""
],
[
"Mailly",
"Jean-Guy",
""
],
[
"Rossit",
"Julien",
""
]
] |
2109.08947 | Margaret Chapman Dr. | Yuheng Wang and Margaret P. Chapman | Risk-averse autonomous systems: A brief history and recent developments
from the perspective of optimal control | in press, part of the Special Issue on Risk-aware Autonomous Systems:
Theory and Practice | Journal of Artificial Intelligence, 2022 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an historical overview about the connections between the analysis
of risk and the control of autonomous systems. We offer two main contributions.
Our first contribution is to propose three overlapping paradigms to classify
the vast body of literature: the worst-case, risk-neutral, and risk-averse
paradigms. We consider an appropriate assessment for the risk of an autonomous
system to depend on the application at hand. In contrast, it is typical to
assess risk using an expectation, variance, or probability alone. Our second
contribution is to unify the concepts of risk and autonomous systems. We
achieve this by connecting approaches for quantifying and optimizing the risk
that arises from a system's behaviour across academic fields. The survey is
highly multidisciplinary. We include research from the communities of
reinforcement learning, stochastic and robust control theory, operations
research, and formal verification. We describe both model-based and model-free
methods, with emphasis on the former. Lastly, we highlight fruitful areas for
further research. A key direction is to blend risk-averse model-based and
model-free methods to enhance the real-time adaptive capabilities of systems to
improve human and environmental welfare.
| [
{
"version": "v1",
"created": "Sat, 18 Sep 2021 15:01:57 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 23:53:31 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Jul 2022 18:00:43 GMT"
}
] | 1,657,670,400,000 | [
[
"Wang",
"Yuheng",
""
],
[
"Chapman",
"Margaret P.",
""
]
] |
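The three paradigms surveyed above assess the same cost distribution differently: the risk-neutral view takes the mean, the worst-case view takes the maximum, and a common risk-averse functional is the conditional value-at-risk (CVaR), the mean of the worst alpha-fraction of outcomes. The sampled trajectory costs below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
costs = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)   # heavy-tailed trajectory costs

def cvar(samples, alpha=0.1):
    """Mean of the worst alpha-fraction of cost samples (sample CVaR)."""
    var = np.quantile(samples, 1.0 - alpha)                # value-at-risk threshold
    return samples[samples >= var].mean()

print(f"risk-neutral (mean):    {costs.mean():.3f}")
print(f"risk-averse (CVaR_0.1): {cvar(costs, 0.1):.3f}")
print(f"worst-case (max):       {costs.max():.3f}")
```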
2109.09103 | Mahmoud Mahfouz | Mahmoud Mahfouz, Armineh Nourbakhsh, Sameena Shah | A Framework for Institutional Risk Identification using Knowledge Graphs
and Automated News Profiling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Organizations around the world face an array of risks impacting their
operations globally. It is imperative to have a robust risk identification
process to detect and evaluate the impact of potential risks before they
materialize. Given the nature of the task and the current requirements of deep
subject matter expertise, most organizations utilize a heavily manual process.
In our work, we develop an automated system that (a) continuously monitors
global news, (b) is able to autonomously identify and characterize risks, (c)
is able to determine the proximity of reaching triggers to determine the
distance from the manifestation of the risk impact and (d) identifies
the organization's operational areas that may be most impacted by the risk. Other
contributions also include: (a) a knowledge graph representation of risks and
(b) relevant news matching to risks identified by the organization utilizing a
neural embedding model to match the textual description of a given risk with
multi-lingual news.
| [
{
"version": "v1",
"created": "Sun, 19 Sep 2021 11:06:12 GMT"
}
] | 1,632,182,400,000 | [
[
"Mahfouz",
"Mahmoud",
""
],
[
"Nourbakhsh",
"Armineh",
""
],
[
"Shah",
"Sameena",
""
]
] |
2109.09138 | Yu Zhang | Shijie Chen, Yu Zhang, and Qiang Yang | Multi-Task Learning in Natural Language Processing: An Overview | Accepted by ACM Computing Surveys | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning approaches have achieved great success in the field of Natural
Language Processing (NLP). However, directly training deep neural models often
suffers from overfitting and data scarcity problems that are pervasive in NLP
tasks. In recent years, Multi-Task Learning (MTL), which can leverage useful
information of related tasks to achieve simultaneous performance improvement on
these tasks, has been used to handle these problems. In this paper, we give an
overview of the use of MTL in NLP tasks. We first review MTL architectures used
in NLP tasks and categorize them into four classes, including parallel
architecture, hierarchical architecture, modular architecture, and generative
adversarial architecture. Then we present optimization techniques on loss
construction, gradient regularization, data sampling, and task scheduling to
properly train a multi-task model. After presenting applications of MTL in a
variety of NLP tasks, we introduce some benchmark datasets. Finally, we make a
conclusion and discuss several possible research directions in this field.
| [
{
"version": "v1",
"created": "Sun, 19 Sep 2021 14:51:51 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Apr 2024 07:25:45 GMT"
}
] | 1,714,435,200,000 | [
[
"Chen",
"Shijie",
""
],
[
"Zhang",
"Yu",
""
],
[
"Yang",
"Qiang",
""
]
] |
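The "parallel architecture" family reviewed above is commonly implemented as hard parameter sharing: one shared encoder feeds several task-specific heads, and the task losses are summed, possibly with weights. The sketch below uses PyTorch with made-up dimensions and two toy tasks; it is a generic pattern rather than any specific model from the survey.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128):
        super().__init__()
        # shared text encoder used by every task
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        # task-specific heads (e.g. sentiment with 2 classes, topic with 5)
        self.heads = nn.ModuleDict({
            "sentiment": nn.Linear(hidden, 2),
            "topic": nn.Linear(hidden, 5),
        })

    def forward(self, token_ids, task):
        _, h = self.encoder(self.embed(token_ids))   # h: (1, batch, hidden)
        return self.heads[task](h.squeeze(0))

model = SharedEncoderMTL()
loss_fn = nn.CrossEntropyLoss()
x = torch.randint(0, 1000, (8, 20))                  # a toy batch of token ids

# Joint loss over tasks; the task weights are a tunable design choice.
loss = loss_fn(model(x, "sentiment"), torch.randint(0, 2, (8,))) \
     + 0.5 * loss_fn(model(x, "topic"), torch.randint(0, 5, (8,)))
loss.backward()
print(float(loss))
```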
2109.09202 | Adel Memariani | Adel Memariani, Martin Glauer, Fabian Neuhaus, Till Mossakowski and
Janna Hastings | Automated and Explainable Ontology Extension Based on Deep Learning: A
Case Study in the Chemical Domain | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reference ontologies provide a shared vocabulary and knowledge resource for
their domain. Manual construction enables them to maintain a high quality,
allowing them to be widely accepted across their community. However, the manual
development process does not scale for large domains. We present a new
methodology for automatic ontology extension and apply it to the ChEBI
ontology, a prominent reference ontology for life sciences chemistry. We
trained a Transformer-based deep learning model on the leaf node structures
from the ChEBI ontology and the classes to which they belong. The model is then
capable of automatically classifying previously unseen chemical structures. The
proposed model achieved an overall F1 score of 0.80, an improvement of 6
percentage points over our previous results on the same dataset. Additionally,
we demonstrate how visualizing the model's attention weights can help to
explain the results by providing insight into how the model made its decisions.
| [
{
"version": "v1",
"created": "Sun, 19 Sep 2021 19:37:08 GMT"
}
] | 1,632,182,400,000 | [
[
"Memariani",
"Adel",
""
],
[
"Glauer",
"Martin",
""
],
[
"Neuhaus",
"Fabian",
""
],
[
"Mossakowski",
"Till",
""
],
[
"Hastings",
"Janna",
""
]
] |
2109.09390 | Julius Taylor | Julius Taylor, Eleni Nisioti, Cl\'ement Moulin-Frier | Socially Supervised Representation Learning: the Role of Subjectivity in
Learning Efficient Representations | null | International Conference on Autonomous Agents and Multi-Agent
Systems (AAMAS 2022) | 10.5555/3535850.3535992 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite its rise as a prominent solution to the data inefficiency of today's
machine learning models, self-supervised learning has yet to be studied from a
purely multi-agent perspective. In this work, we propose that aligning internal
subjective representations, which naturally arise in a multi-agent setup where
agents receive partial observations of the same underlying environmental state,
can lead to more data-efficient representations. We propose that multi-agent
environments, where agents do not have access to the observations of others but
can communicate within a limited range, guarantee a common context that can be
leveraged in individual representation learning. The reason is that subjective
observations necessarily refer to the same subset of the underlying
environmental states and that communication about these states can freely offer
a supervised signal. To highlight the importance of communication, we refer to
our setting as \textit{socially supervised representation learning}. We present
a minimal architecture comprised of a population of autoencoders, where we
define loss functions, capturing different aspects of effective communication,
and examine their effect on the learned representations. We show that our
proposed architecture allows the emergence of aligned representations. The
subjectivity introduced by presenting agents with distinct perspectives of the
environment state contributes to learning abstract representations that
outperform those learned by a single autoencoder and a population of
autoencoders, presented with identical perspectives of the environment state.
Altogether, our results demonstrate how communication from subjective
perspectives can lead to the acquisition of more abstract representations in
multi-agent systems, opening promising perspectives for future research at the
intersection of representation learning and emergent communication.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 09:30:13 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Oct 2021 09:36:43 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Nov 2021 17:15:51 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Sep 2022 17:50:07 GMT"
}
] | 1,663,891,200,000 | [
[
"Taylor",
"Julius",
""
],
[
"Nisioti",
"Eleni",
""
],
[
"Moulin-Frier",
"Clément",
""
]
] |
2109.09425 | Charl Maree | Charl Maree, Christian W. Omlin | Clustering in Recurrent Neural Networks for Micro-Segmentation using
Spending Personality | null | IEEE SSCI (2021) pp 1-5 | 10.1109/SSCI50451.2021.9659905 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Customer segmentation has long been a productive field in banking. However,
with new approaches to traditional problems come new opportunities.
Fine-grained customer segments are notoriously elusive and one method of
obtaining them is through feature extraction. It is possible to assign
coefficients of standard personality traits to financial transaction classes
aggregated over time. However, we have found that the clusters formed are not
sufficiently discriminatory for micro-segmentation. In a novel approach, we
extract temporal features with continuous values from the hidden states of
neural networks predicting customers' spending personality from their financial
transactions. We consider both temporal and non-sequential models, using long
short-term memory (LSTM) and feed-forward neural networks, respectively. We
found that recurrent neural networks produce micro-segments where feed-forward
networks produce only coarse segments. Finally, we show that classification
using these extracted features performs at least as well as bespoke models on
two common metrics, namely loan default rate and customer liquidity index.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 11:06:58 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Oct 2021 06:12:00 GMT"
}
] | 1,643,587,200,000 | [
[
"Maree",
"Charl",
""
],
[
"Omlin",
"Christian W.",
""
]
] |
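The feature-extraction step described above, taking continuous temporal features from the hidden states of a recurrent network and clustering them, can be sketched generically: run each customer's sequence of aggregated transaction features through an LSTM (trained on the personality-prediction task in the paper; left untrained here purely for the sketch), collect the final hidden states, and cluster them with k-means. The dimensions, data, and number of segments are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Toy data: 500 customers, 12 months, 6 aggregated transaction-class features.
transactions = torch.randn(500, 12, 6)

lstm = nn.LSTM(input_size=6, hidden_size=16, batch_first=True)
with torch.no_grad():
    _, (h_n, _) = lstm(transactions)      # h_n: (1, customers, hidden)
    features = h_n.squeeze(0).numpy()     # continuous temporal features per customer

# Micro-segments from the hidden-state features.
segments = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
print("customers per segment:", np.bincount(segments))
```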
2109.09478 | Philip Osborne | Philip Osborne, Heido N\~omm and Andre Freitas | A Survey of Text Games for Reinforcement Learning informed by Natural
Language | 10 pages, 3 figures, pre-submission | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reinforcement Learning has shown success in a number of complex virtual
environments. However, many challenges still exist towards solving problems
with natural language as a core component. Interactive Fiction Games (or Text
Games) are one such problem type that offer a set of partially observable
environments where natural language is required as part of the reinforcement
learning solutions.
Therefore, this survey's aim is to assist in the development of new Text Game
problem settings and solutions for Reinforcement Learning informed by natural
language. Specifically, this survey summarises: 1) the challenges introduced in
Text Game Reinforcement Learning problems, 2) the generation tools for
evaluating Text Games and the subsequent environments generated, and 3) the
agent architectures currently applied, which are compared to provide a systematic
review of benchmark methodologies and opportunities for future researchers.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 12:32:57 GMT"
}
] | 1,632,182,400,000 | [
[
"Osborne",
"Philip",
""
],
[
"Nõmm",
"Heido",
""
],
[
"Freitas",
"Andre",
""
]
] |
2109.09507 | Matthew Stephenson | Matthew Stephenson, Eric Piette, Dennis J. N. J. Soemers, Cameron
Browne | Automatic Generation of Board Game Manuals | 12 Pages, 6 Figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a process for automatically generating manuals for
board games within the Ludii general game system. This process requires many
different sub-tasks to be addressed, such as English translation of Ludii game
descriptions, move visualisation, highlighting winning moves, strategy
explanation, among others. These aspects are then combined to create a full
manual for any given game. This manual is intended to provide a more intuitive
explanation of a game's rules and mechanics, particularly for players who are
less familiar with the Ludii game description language and grammar.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 12:54:35 GMT"
}
] | 1,632,182,400,000 | [
[
"Stephenson",
"Matthew",
""
],
[
"Piette",
"Eric",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Browne",
"Cameron",
""
]
] |
2109.09531 | Xinzhu Liu | Xinzhu Liu, Di Guo, Huaping Liu, and Fuchun Sun | Multi-Agent Embodied Visual Semantic Navigation with Scene Prior
Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In visual semantic navigation, the robot navigates to a target object with
egocentric visual observations and the class label of the target is given. It
is a meaningful task inspiring a surge of relevant research. However, most of
the existing models are only effective for single-agent navigation, and a
single agent has low efficiency and poor fault tolerance when completing more
complicated tasks. Multi-agent collaboration can improve efficiency and has
strong application potential. In this paper, we propose the multi-agent visual
semantic navigation, in which multiple agents collaborate with others to find
multiple target objects. It is a challenging task that requires agents to learn
reasonable collaboration strategies to perform efficient exploration under the
restrictions of communication bandwidth. We develop a hierarchical decision
framework based on semantic mapping, scene prior knowledge, and communication
mechanism to solve this task. The results of testing experiments in unseen
scenes with both known objects and unknown objects illustrate the higher
accuracy and efficiency of the proposed model compared with the single-agent
model.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 13:31:03 GMT"
}
] | 1,632,182,400,000 | [
[
"Liu",
"Xinzhu",
""
],
[
"Guo",
"Di",
""
],
[
"Liu",
"Huaping",
""
],
[
"Sun",
"Fuchun",
""
]
] |
2109.09653 | Sridhar Mahadevan | Sridhar Mahadevan | Asymptotic Causal Inference | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate causal inference in the asymptotic regime as the number of
variables approaches infinity using an information-theoretic framework. We
define structural entropy of a causal model in terms of its description
complexity measured by the logarithmic growth rate, measured in bits, of all
directed acyclic graphs (DAGs), parameterized by the edge density d. Structural
entropy yields non-intuitive predictions. If we randomly sample a DAG from the
space of all models, in the range d = (0, 1/8), almost surely the model is a
two-layer DAG! Semantic entropy quantifies the reduction in entropy where edges
are removed by causal intervention. Semantic causal entropy is defined as the
f-divergence between the observational distribution and the interventional
distribution P', where a subset S of edges are intervened on to determine their
causal influence. We compare the decomposability properties of semantic entropy
for different choices of f-divergences, including KL-divergence, squared
Hellinger distance, and total variation distance. We apply our framework to
generalize a recently popular bipartite experimental design for studying causal
inference on large datasets, where interventions are carried out on one set of
variables (e.g., power plants, items in an online store), but outcomes are
measured on a disjoint set of variables (residents near power plants, or
shoppers). We generalize bipartite designs to k-partite designs, and describe
an optimization framework for finding the optimal k-level DAG architecture for
any value of d \in (0, 1/2). As edge density increases, a sequence of phase
transitions occur over disjoint intervals of d, with deeper DAG architectures
emerging for larger values of d. We also give a quantitative bound on the
number of samples needed to reliably test for average causal influence for a
k-partite design.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 16:16:00 GMT"
}
] | 1,632,182,400,000 | [
[
"Mahadevan",
"Sridhar",
""
]
] |
2109.09696 | Alexander Felfernig | Alexander Felfernig, Andrei Popescu, Mathias Uta, Viet-Man Le, Seda
Polat-Erdeniz, Martin Stettinger, M\"usl\"um Atas, and Thi Ngoc Trang Tran | Configuring Multiple Instances with Multi-Configuration | Cite as: A. Felfernig, A. Popescu, M. Uta, V.M. Le, S.P. Erdeniz, M.
Stettinger, M. Atas, and T.N.T. Tran. Configuring Multiple Instances with
Multi-Configuration. 23rd International Configuration Workshop, Vienna,
Austria, CEUR, vol. 2945, pp. 45-47, 2021 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Configuration is a successful application area of Artificial Intelligence. In
the majority of the cases, configuration systems focus on configuring one
solution (configuration) that satisfies the preferences of a single user or a
group of users. In this paper, we introduce a new configuration approach -
multi-configuration - that focuses on scenarios where the outcome of a
configuration process is a set of configurations. Example applications thereof
are the configuration of personalized exams for individual students, the
configuration of project teams, reviewer-to-paper assignment, and hotel room
assignments including individualized city trips for tourist groups. For
multi-configuration scenarios, we exemplify a constraint satisfaction problem
representation in the context of configuring exams. The paper is concluded with
a discussion of open issues for future work.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 17:04:56 GMT"
}
] | 1,632,182,400,000 | [
[
"Felfernig",
"Alexander",
""
],
[
"Popescu",
"Andrei",
""
],
[
"Uta",
"Mathias",
""
],
[
"Le",
"Viet-Man",
""
],
[
"Polat-Erdeniz",
"Seda",
""
],
[
"Stettinger",
"Martin",
""
],
[
"Atas",
"Müslüm",
""
],
[
"Tran",
"Thi Ngoc Trang",
""
]
] |
2109.09809 | Adam White Dr | Adam White, Artur d'Avila Garcez | Counterfactual Instances Explain Little | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In many applications, it is important to be able to explain the decisions of
machine learning systems. An increasingly popular approach has been to seek to
provide \emph{counterfactual instance explanations}. These specify close
possible worlds in which, contrary to the facts, a person receives their
desired decision from the machine learning system. This paper will draw on
literature from the philosophy of science to argue that a satisfactory
explanation must consist of both counterfactual instances and a causal equation
(or system of equations) that support the counterfactual instances. We will
show that counterfactual instances by themselves explain little. We will
further illustrate how explainable AI methods that provide both causal
equations and counterfactual instances can successfully explain machine
learning predictions.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2021 19:40:25 GMT"
}
] | 1,632,268,800,000 | [
[
"White",
"Adam",
""
],
[
"Garcez",
"Artur d'Avila",
""
]
] |
2109.09904 | Sarath Sreedharan | Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin
Guan | Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable
and Advisable AI Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the surprising power of many modern AI systems that often learn their
own representations, there is significant discontent about their inscrutability
and the attendant problems in their ability to interact with humans. While
alternatives such as neuro-symbolic approaches have been proposed, there is a
lack of consensus on what they are about. There are often two independent
motivations (i) symbols as a lingua franca for human-AI interaction and (ii)
symbols as system-produced abstractions used by the AI system in its internal
reasoning. The jury is still out on whether AI systems will need to use symbols
in their internal reasoning to achieve general intelligence capabilities.
Whatever the answer there is, the need for (human-understandable) symbols in
human-AI interaction seems quite compelling. Symbols, like emotions, may well
not be sine qua non for intelligence per se, but they will be crucial for AI
systems to interact with us humans -- as we can neither turn off our emotions
nor get by without our symbols. In particular, in many human-designed domains,
humans would be interested in providing explicit (symbolic) knowledge and
advice -- and expect machine explanations in kind. This alone requires AI
systems to maintain a symbolic interface for interaction with humans. In
this blue sky paper, we argue this point of view, and discuss research
directions that need to be pursued to allow for this type of human-AI
interaction.
| [
{
"version": "v1",
"created": "Tue, 21 Sep 2021 01:30:06 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Dec 2021 20:43:45 GMT"
}
] | 1,639,353,600,000 | [
[
"Kambhampati",
"Subbarao",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Verma",
"Mudit",
""
],
[
"Zha",
"Yantian",
""
],
[
"Guan",
"Lin",
""
]
] |
2109.10085 | Alex Bogun | Tim Krappel, Alex Bogun, Damian Borth | Heterogeneous Ensemble for ESG Ratings Prediction | Accepted to KDD Workshop on Machine Learning in Finance 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Over the past years, topics ranging from climate change to human rights have
seen increasing importance for investment decisions. Hence, investors (asset
managers and asset owners) who wanted to incorporate these issues started to
assess companies based on how they handle such topics. For this assessment,
investors rely on specialized rating agencies that issue ratings along the
environmental, social and governance (ESG) dimensions. Such ratings allow them
to make investment decisions in favor of sustainability. However, rating
agencies base their analysis on subjective assessment of sustainability
reports, which are not provided by every company. Furthermore, due to the human
labor involved, rating agencies currently face the challenge of scaling up
coverage in a timely manner.
In order to alleviate these challenges and contribute to the overall goal of
supporting sustainability, we propose a heterogeneous ensemble model to predict
ESG ratings using fundamental data. This model is based on feedforward neural
network, CatBoost and XGBoost ensemble members. Given the public availability
of fundamental data, the proposed method would allow cost-efficient and
scalable creation of initial ESG ratings (also for companies without
sustainability reporting). Using our approach we are able to explain 54% of the
variation in ratings R2 using fundamental data and outperform prior work in
this area.
| [
{
"version": "v1",
"created": "Tue, 21 Sep 2021 10:42:24 GMT"
}
] | 1,632,268,800,000 | [
[
"Krappel",
"Tim",
""
],
[
"Bogun",
"Alex",
""
],
[
"Borth",
"Damian",
""
]
] |
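The heterogeneous ensemble described above can be sketched as a simple average of three members, a feed-forward network, CatBoost, and XGBoost, fitted on tabular fundamental data; the synthetic features and ratings below stand in for real fundamentals, and the equal weighting is an assumption rather than the paper's exact blending scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from catboost import CatBoostRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # fundamental indicators (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 1000)    # synthetic ESG score

X_train, X_test, y_train = X[:800], X[800:], y[:800]

members = [
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    CatBoostRegressor(iterations=300, verbose=0, random_seed=0),
    XGBRegressor(n_estimators=300, random_state=0),
]
for m in members:
    m.fit(X_train, y_train)

# Heterogeneous ensemble prediction: average of the members' predictions.
ensemble_pred = np.mean([m.predict(X_test) for m in members], axis=0)
print(ensemble_pred[:5])
```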