id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2212.08704 | Izhak Shafran | Hagen Soltau, Izhak Shafran, Mingqiu Wang, Abhinav Rastogi, Jeffrey
Zhao, Ye Jia, Wei Han, Yuan Cao, Aramys Miranda | Speech Aware Dialog System Technology Challenge (DSTC11) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Most research on task-oriented dialog modeling is based on written text
input. However, users often interact with practical dialog systems using speech
as input. Typically, systems convert speech into text using an Automatic Speech
Recognition (ASR) system, introducing errors. Furthermore, these systems do not
address the differences between written and spoken language. Research on this
topic is stymied by the lack of a public corpus. Motivated by these
considerations, our goal in hosting the speech-aware dialog state tracking
challenge was to create a public corpus or task which can be used to
investigate the performance gap between the written and spoken forms of input,
develop models that could alleviate this gap, and establish whether
Text-to-Speech-based (TTS) systems are a reasonable surrogate for the more
labor-intensive human data collection. We created three spoken versions of the
popular written-domain MultiWoz task -- (a) TTS-Verbatim: written user inputs
were converted into speech waveforms using a TTS system, (b) Human-Verbatim:
humans spoke the user inputs verbatim, and (c) Human-paraphrased: humans
paraphrased the user inputs. Additionally, we provided different forms of ASR
output to encourage wider participation from teams that may not have access to
state-of-the-art ASR systems. These included ASR transcripts, word time stamps,
and latent representations of the audio (audio encoder outputs). In this paper,
we describe the corpus, report results from participating teams, provide
preliminary analyses of their results, and summarize the current
state-of-the-art in this domain.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2022 20:30:33 GMT"
}
] | 1,671,494,400,000 | [
[
"Soltau",
"Hagen",
""
],
[
"Shafran",
"Izhak",
""
],
[
"Wang",
"Mingqiu",
""
],
[
"Rastogi",
"Abhinav",
""
],
[
"Zhao",
"Jeffrey",
""
],
[
"Jia",
"Ye",
""
],
[
"Han",
"Wei",
""
],
[
"Cao",
"Yuan",
""
],
[
"Miranda",
"Aramys",
""
]
] |
2212.08817 | Jun-Gi Jang | Jun-Gi Jang, Sooyeon Shim, Vladimir Egay, Jeeyong Lee, Jongmin Park,
Suhyun Chae, U Kang | Accurate Open-set Recognition for Memory Workload | 15 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we accurately identify new memory workloads while classifying known
memory workloads? Verifying DRAM (Dynamic Random Access Memory) using various
workloads is an important task to guarantee the quality of DRAM. A crucial
component in the process is open-set recognition which aims to detect new
workloads not seen in the training phase. Despite its importance, however,
existing open-set recognition methods are unsatisfactory in terms of accuracy
since they fail to exploit the characteristics of workload sequences. In this
paper, we propose Acorn, an accurate open-set recognition method capturing the
characteristics of workload sequences. Acorn extracts two types of feature
vectors to capture sequential patterns and spatial locality patterns in memory
access. Acorn then uses the feature vectors to accurately classify a
subsequence into one of the known classes or identify it as the unknown class.
Experiments show that Acorn achieves state-of-the-art accuracy, giving up to
37 percentage points higher unknown-class detection accuracy while achieving
known-class classification accuracy comparable to that of existing methods.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2022 07:37:40 GMT"
}
] | 1,671,494,400,000 | [
[
"Jang",
"Jun-Gi",
""
],
[
"Shim",
"Sooyeon",
""
],
[
"Egay",
"Vladimir",
""
],
[
"Lee",
"Jeeyong",
""
],
[
"Park",
"Jongmin",
""
],
[
"Chae",
"Suhyun",
""
],
[
"Kang",
"U",
""
]
] |
2212.08966 | Shaopeng Wei | Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu,
Fuji Ren, Gang Kou | Graph Learning and Its Advancements on Large Language Models: A Holistic
Survey | 24 pages, 9 figures, 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph learning is a prevalent domain that endeavors to learn the intricate
relationships among nodes and the topological structure of graphs. Over the
years, graph learning has transcended from graph theory to graph data mining.
With the advent of representation learning, it has attained remarkable
performance in diverse scenarios. Owing to its extensive application prospects,
graph learning attracts considerable attention. While some researchers have
produced impressive surveys on graph learning, they failed to connect related
objectives, methods, and applications in a coherent way. As a result, they do
not cover the ample current scenarios and challenging problems that have arisen
from the rapid expansion of graph learning. In particular, large language
models have recently had a disruptive effect on human life, but they also show
relative weakness in structured scenarios. The question of how to make these
models more powerful with graph learning remains open. Our survey focuses on
the most recent advancements in integrating graph learning with pre-trained
language models, specifically emphasizing their application within the domain
of large language models. Different from previous surveys on graph learning, we
provide a holistic review that analyzes current works from the perspective of
graph structure, and discusses the latest applications, trends, and challenges
in graph learning. Specifically, we commence by proposing a taxonomy and then
summarize the methods employed in graph learning. We then provide a detailed
elucidation of mainstream applications. Finally, we propose future directions.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2022 22:05:07 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 17:00:20 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Jun 2023 18:36:37 GMT"
},
{
"version": "v4",
"created": "Sat, 18 Nov 2023 08:15:20 GMT"
}
] | 1,700,524,800,000 | [
[
"Wei",
"Shaopeng",
""
],
[
"Zhao",
"Yu",
""
],
[
"Chen",
"Xingyan",
""
],
[
"Li",
"Qing",
""
],
[
"Zhuang",
"Fuzhen",
""
],
[
"Liu",
"Ji",
""
],
[
"Ren",
"Fuji",
""
],
[
"Kou",
"Gang",
""
]
] |
2212.08967 | Johannes Schneider | Johannes Schneider | Foundation models in brief: A historical, socio-technical focus | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Foundation models can be disruptive for future AI development by scaling up
deep learning in terms of model size and training data's breadth and size.
These models achieve state-of-the-art performance (often through further
adaptation) on a variety of tasks in domains such as natural language
processing and computer vision. Foundational models exhibit a novel {emergent
behavior}: {In-context learning} enables users to provide a query and a few
examples from which a model derives an answer without being trained on such
queries. Additionally, {homogenization} of models might replace a myriad of
task-specific models with fewer very large models controlled by few
corporations leading to a shift in power and control over AI. This paper
provides a short introduction to foundation models. It contributes by crafting
a crisp distinction between foundation models and prior deep learning models,
providing a history of machine learning leading to foundation models,
elaborating more on socio-technical aspects, i.e., organizational issues and
end-user interaction, and a discussion of future research.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2022 22:11:33 GMT"
}
] | 1,671,494,400,000 | [
[
"Schneider",
"Johannes",
""
]
] |
2212.09033 | Minghuan Liu | Minghuan Liu, Zhengbang Zhu, Menghui Zhu, Yuzheng Zhuang, Weinan
Zhang, Jianye Hao | Planning Immediate Landmarks of Targets for Model-Free Skill Transfer
across Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In reinforcement learning applications like robotics, agents usually need to
deal with various input/output features when specified with different
state/action spaces by their developers or physical restrictions. This
leads to unnecessary re-training from scratch and considerable sample
inefficiency, especially when agents follow similar solution steps to achieve
tasks. In this paper, we aim to transfer similar high-level goal-transition
knowledge to alleviate the challenge. Specifically, we propose PILoT, i.e.,
Planning Immediate Landmarks of Targets. PILoT utilizes the universal decoupled
policy optimization to learn a goal-conditioned state planner; then, distills a
goal-planner to plan immediate landmarks in a model-free style that can be
shared among different agents. In our experiments, we show the power of PILoT
on various transfer challenges, including few-shot transfer across action
spaces and dynamics, from low-dimensional vector states to image inputs, and
from a simple robot to a complicated morphology; we also illustrate a zero-shot
transfer solution from a simple 2D navigation task to the harder Ant-Maze task.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2022 08:03:21 GMT"
}
] | 1,671,494,400,000 | [
[
"Liu",
"Minghuan",
""
],
[
"Zhu",
"Zhengbang",
""
],
[
"Zhu",
"Menghui",
""
],
[
"Zhuang",
"Yuzheng",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Hao",
"Jianye",
""
]
] |
2212.09077 | Johannes Oetsch | Thomas Eiter, Tobias Geibinger, Nysret Musliu, Johannes Oetsch, Peter
Skocovsky, Daria Stepanova | Answer-Set Programming for Lexicographical Makespan Optimisation in
Parallel Machine Scheduling | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We deal with a challenging scheduling problem on parallel machines with
sequence-dependent setup times and release dates from a real-world application
of semiconductor work-shop production. There, jobs can only be processed by
dedicated machines, thus few machines can determine the makespan almost
regardless of how jobs are scheduled on the remaining ones. This causes
problems when machines fail and jobs need to be rescheduled. Instead of
optimising only the makespan, we put the individual machine spans in
non-ascending order and lexicographically minimise the resulting tuples. This
ensures that all machines complete as early as possible and increases the
robustness of the schedule. We study the application of Answer-Set Programming
(ASP) to solve this problem. While ASP eases modelling, the combination of
timing constraints and the considered objective function challenges current
solving technology. The former issue is addressed by using an extension of ASP
by difference logic. For the latter, we devise different algorithms that use
multi-shot solving. To tackle industrial-sized instances, we study different
approximations and heuristics. Our experimental results show that ASP is indeed
a promising KRR paradigm for this problem and is competitive with
state-of-the-art CP and MIP solvers. Under consideration in Theory and Practice
of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2022 12:43:24 GMT"
}
] | 1,671,494,400,000 | [
[
"Eiter",
"Thomas",
""
],
[
"Geibinger",
"Tobias",
""
],
[
"Musliu",
"Nysret",
""
],
[
"Oetsch",
"Johannes",
""
],
[
"Skocovsky",
"Peter",
""
],
[
"Stepanova",
"Daria",
""
]
] |
2212.09377 | Jan Pichl | Jan Pichl, Petr Marek, Jakub Konr\'ad, Petr Lorenc, Ond\v{r}ej Kobza,
Tom\'a\v{s} Zaj\'i\v{c}ek, Jan \v{S}ediv\'y | Flowstorm: Open-Source Platform with Hybrid Dialogue Architecture | null | NAACL Demo Track (2022) 39-45 | 10.18653/v1/2022.naacl-demo.5 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a conversational AI platform called Flowstorm. Flowstorm
is an open-source SaaS project suitable for creating, running, and analyzing
conversational applications. Thanks to the fast and fully automated build
process, the dialogues created within the platform can be executed in seconds.
Furthermore, we propose a novel dialogue architecture that uses a combination
of tree structures with generative models. The tree structures are also used
for training NLU models suitable for specific dialogue scenarios. However, the
generative models are globally used across applications and extend the
functionality of the dialogue trees. Moreover, the platform functionality
benefits from out-of-the-box components, such as the one responsible for
extracting data from utterances or working with crawled data. Additionally, it
can be extended using a custom code directly in the platform. One of the
essential features of the platform is the possibility to reuse the created
assets across applications. There is a library of prepared assets where each
developer can contribute. All of the features are available through a
user-friendly visual editor.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2022 11:27:51 GMT"
}
] | 1,671,494,400,000 | [
[
"Pichl",
"Jan",
""
],
[
"Marek",
"Petr",
""
],
[
"Konrád",
"Jakub",
""
],
[
"Lorenc",
"Petr",
""
],
[
"Kobza",
"Ondřej",
""
],
[
"Zajíček",
"Tomáš",
""
],
[
"Šedivý",
"Jan",
""
]
] |
2212.09399 | Joern Ploennigs | Joern Ploennigs and Markus Berger | AI Art in Architecture | null | AI Civ. Eng. 2, 8 (2023) | 10.1007/s43503-023-00018-y | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent diffusion-based AI art platforms are able to create impressive images
from simple text descriptions. This makes them powerful tools for concept
design in any discipline that requires creativity in visual design tasks. This
is also true for early stages of architectural design with multiple stages of
ideation, sketching and modelling. In this paper, we investigate how applicable
diffusion-based models already are to these tasks. We research the
applicability of the platforms Midjourney, DALL-E 2 and StableDiffusion to a
series of common use cases in architectural design to determine which are
already solvable or might soon be. We also analyze how they are already being
used by analyzing a data set of 40 million Midjourney queries with NLP methods
to extract common usage patterns. With these insights, we derive a workflow for
interior and exterior design that combines the strengths of the individual
platforms.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2022 12:24:14 GMT"
}
] | 1,692,576,000,000 | [
[
"Ploennigs",
"Joern",
""
],
[
"Berger",
"Markus",
""
]
] |
2212.09447 | Mateus Roder | Gustavo H. de Rosa, Mateus Roder, Jo\~ao Paulo Papa and Claudio F. G.
dos Santos | Improving Pre-Trained Weights Through Meta-Heuristics Fine-Tuning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Machine Learning algorithms have been extensively researched throughout the
last decade, leading to unprecedented advances in a broad range of
applications, such as image classification and reconstruction, object
recognition, and text categorization. Nonetheless, most Machine Learning
algorithms are trained via derivative-based optimizers, such as the Stochastic
Gradient Descent, leading to possible local optimum entrapments and inhibiting
them from achieving proper performances. A bio-inspired alternative to
traditional optimization techniques, denoted as meta-heuristic, has received
significant attention due to its simplicity and ability to avoid entrapment in
local optima. In this work, we propose to use meta-heuristic techniques to
fine-tune pre-trained weights, exploring additional regions of the search
space, and improving their effectiveness. The experimental evaluation comprises
two classification tasks (image and text) and is assessed under four literature
datasets. Experimental results show nature-inspired algorithms' capacity in
exploring the neighborhood of pre-trained weights, achieving superior results
to their counterpart pre-trained architectures. Additionally, a thorough
analysis of distinct architectures, such as Multi-Layer Perceptron and
Recurrent Neural Networks, attempts to visualize and provide more precise
insights into the most critical weights to be fine-tuned in the learning
process.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2022 13:40:26 GMT"
}
] | 1,671,494,400,000 | [
[
"de Rosa",
"Gustavo H.",
""
],
[
"Roder",
"Mateus",
""
],
[
"Papa",
"João Paulo",
""
],
[
"Santos",
"Claudio F. G. dos",
""
]
] |
2212.09918 | Jinzhao Zhou | Jinzhao Zhou and Yiqun Duan and Zhihong Chen and Yu-Cheng Chang and
Chin-Teng Lin | Generalizing Multimodal Variational Methods to Sets | First Submission | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Making sense of multiple modalities can yield a more comprehensive
description of real-world phenomena. However, learning the co-representation of
diverse modalities is still a long-standing endeavor in emerging machine
learning applications and research. Previous generative approaches for
multimodal input approximate a joint-modality posterior by uni-modality
posteriors as product-of-experts (PoE) or mixture-of-experts (MoE). We argue
that these approximations lead to a defective bound for the optimization
process and loss of semantic connection among modalities. This paper presents a
novel variational method on sets called the Set Multimodal VAE (SMVAE) for
learning a multimodal latent space while handling the missing modality problem.
By modeling the joint-modality posterior distribution directly, the proposed
SMVAE learns to exchange information between multiple modalities and compensate
for the drawbacks caused by factorization. In public datasets of various
domains, the experimental results demonstrate that the proposed method is
applicable to order-agnostic cross-modal generation while achieving outstanding
performance compared to the state-of-the-art multimodal methods. The source
code for our method is available online at
https://anonymous.4open.science/r/SMVAE-9B3C/.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2022 23:50:19 GMT"
}
] | 1,671,580,800,000 | [
[
"Zhou",
"Jinzhao",
""
],
[
"Duan",
"Yiqun",
""
],
[
"Chen",
"Zhihong",
""
],
[
"Chang",
"Yu-Cheng",
""
],
[
"Lin",
"Chin-Teng",
""
]
] |
2212.10030 | Feng Qiu | Feng Qiu, Wanzeng Kong, Yu Ding | InterMulti:Multi-view Multimodal Interactions with Text-dominated
Hierarchical High-order Fusion for Emotion Analysis | 9 pages, 3 figures. arXiv admin note: text overlap with
arXiv:2212.08661 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Humans are sophisticated at reading interlocutors' emotions from multimodal
signals, such as speech contents, voice tones and facial expressions. However,
machines might struggle to understand various emotions due to the difficulty of
effectively decoding emotions from the complex interactions between multimodal
signals. In this paper, we propose a multimodal emotion analysis framework,
InterMulti, to capture complex multimodal interactions from different views and
identify emotions from multimodal signals. Our proposed framework decomposes
signals of different modalities into three kinds of multimodal interaction
representations, including a modality-full interaction representation, a
modality-shared interaction representation, and three modality-specific
interaction representations. Additionally, to balance the contribution of
different modalities and learn a more informative latent interaction
representation, we developed a novel Text-dominated Hierarchical High-order
Fusion(THHF) module. THHF module reasonably integrates the above three kinds of
representations into a comprehensive multimodal interaction representation.
Extensive experimental results on widely used datasets (i.e., MOSEI, MOSI, and
IEMOCAP) demonstrate that our method outperforms the state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2022 07:02:32 GMT"
}
] | 1,671,580,800,000 | [
[
"Qiu",
"Feng",
""
],
[
"Kong",
"Wanzeng",
""
],
[
"Ding",
"Yu",
""
]
] |
2212.10252 | Wensheng Gan | Xinhong Chen, Wensheng Gan, Shicheng Wan, and Tianlong Gu | MDL-based Compressing Sequential Rules | Preprint. 6 figures, 8 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, with the rapid development of the Internet, the era of big data has
come. The Internet generates huge amounts of data every day. However,
extracting meaningful information from massive data is like looking for a
needle in a haystack. Data mining techniques can provide various feasible
methods to solve this problem. At present, many sequential rule mining (SRM)
algorithms are presented to find sequential rules in databases with sequential
characteristics. These rules help people extract a lot of meaningful
information from massive amounts of data. How can we achieve compression of
mined results and reduce data size to save storage space and transmission time?
Until now, there has been little research on the compression of SRM. In this
paper, combined with the Minimum Description Length (MDL) principle and under
the two metrics (support and confidence), we introduce the problem of
compression of SRM and also propose a solution named ComSR for MDL-based
compressing of sequential rules based on the designed sequential rule coding
scheme. To our knowledge, we are the first to use sequential rules to encode an
entire database. A heuristic method is proposed to find a set of compact and
meaningful sequential rules as much as possible. ComSR has two trade-off
algorithms, ComSR_non and ComSR_ful, based on whether the database can be
completely compressed. Experiments done on a real dataset with different
thresholds show that a set of compact and meaningful sequential rules can be
found. This shows that the proposed method works.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2022 14:00:57 GMT"
}
] | 1,671,580,800,000 | [
[
"Chen",
"Xinhong",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Wan",
"Shicheng",
""
],
[
"Gu",
"Tianlong",
""
]
] |
2212.10276 | Shashank Srivastava | Graham Caron and Shashank Srivastava | Identifying and Manipulating the Personality Traits of Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Psychology research has long explored aspects of human personality such as
extroversion, agreeableness and emotional stability. Categorizations like the
`Big Five' personality traits are commonly used to assess and diagnose
personality types. In this work, we explore the question of whether the
perceived personality in language models is exhibited consistently in their
language generation. For example, is a language model such as GPT2 likely to
respond in a consistent way if asked to go out to a party? We also investigate
whether such personality traits can be controlled. We show that when provided
different types of contexts (such as personality descriptions, or answers to
diagnostic questions about personality traits), language models such as BERT
and GPT2 can consistently identify and reflect personality markers in those
contexts. This behavior illustrates an ability to be manipulated in a highly
predictable way, and frames them as tools for identifying personality traits
and controlling personas in applications such as dialog systems. We also
contribute a crowd-sourced data-set of personality descriptions of human
subjects paired with their `Big Five' personality assessment data, and a
data-set of personality descriptions collated from Reddit.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2022 14:24:11 GMT"
}
] | 1,671,580,800,000 | [
[
"Caron",
"Graham",
""
],
[
"Srivastava",
"Shashank",
""
]
] |
2212.10435 | Ron Fulbright | Ron Fulbright | The Expertise Level | 18 pages; 11 figures | HCII 2020: Augmented Cognition. Human Cognition and Behavior;
Lecture Notes in Computer Science book series (LNAI, volume 12197) | 10.1007/978-3-030-50439-7_4 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computers are quickly gaining on us. Artificial systems are now exceeding the
performance of human experts in several domains. However, we do not yet have a
deep definition of expertise. This paper examines the nature of expertise and
presents an abstract knowledge-level and skill-level description of expertise.
A new level lying above the Knowledge Level, called the Expertise Level, is
introduced to describe the skills of an expert without having to worry about
details of the knowledge required. The Model of Expertise is introduced
combining the knowledge-level and expertise-level descriptions. Application of
the model to the fields of cognitive architectures and human cognitive
augmentation is demonstrated and several famous intelligent systems are
analyzed with the model.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2022 20:55:11 GMT"
}
] | 1,671,580,800,000 | [
[
"Fulbright",
"Ron",
""
]
] |
2212.10446 | Muhammad Hamza Sajjad | M Hamza Sajjad | Neural Network Learner for Minesweeper | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Minesweeper is an interesting single player game based on logic, memory and
guessing. Solving Minesweeper has been shown to be an NP-hard task.
Deterministic solvers are the best known approach for solving Minesweeper. This
project proposes a neural network based learner for solving Minesweeper. To
choose the best learner, different architectures and configurations of neural
networks were trained on hundreds of thousands of games. Surprisingly, the
proposed neural network based learner has been shown to be a very good approximation
function for solving Minesweeper. The neural network learner competes well with
the CSP solvers, especially in Beginner and Intermediate modes of the game. It
was also observed that despite having high success rates, the best neural
learner was considerably slower than the best deterministic solver. This report
also discusses the overheads and limitations faced while creating highly
successful neural networks for Minesweeper.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2022 14:42:05 GMT"
}
] | 1,671,580,800,000 | [
[
"Sajjad",
"M Hamza",
""
]
] |
2212.10723 | Christoph Bergmeir | Christoph Bergmeir, Frits de Nijs, Abishek Sriramulu, Mahdi
Abolghasemi, Richard Bean, John Betts, Quang Bui, Nam Trong Dinh, Nils
Einecke, Rasul Esmaeilbeigi, Scott Ferraro, Priya Galketiya, Evgenii Genov,
Robert Glasgow, Rakshitha Godahewa, Yanfei Kang, Steffen Limmer, Luis
Magdalena, Pablo Montero-Manso, Daniel Peralta, Yogesh Pipada Sunil Kumar,
Alejandro Rosales-P\'erez, Julian Ruddick, Akylas Stratigakos, Peter Stuckey,
Guido Tack, Isaac Triguero, Rui Yuan | Comparison and Evaluation of Methods for a Predict+Optimize Problem in
Renewable Energy | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms that involve both forecasting and optimization are at the core of
solutions to many difficult real-world problems, such as in supply chains
(inventory optimization), traffic, and in the transition towards carbon-free
energy generation in battery/load/production scheduling in sustainable energy
systems. Typically, in these scenarios we want to solve an optimization problem
that depends on unknown future values, which therefore need to be forecast. As
both forecasting and optimization are difficult problems in their own right,
relatively little research has been done in this area. This paper presents the
findings of the ``IEEE-CIS Technical Challenge on Predict+Optimize for
Renewable Energy Scheduling," held in 2021. We present a comparison and
evaluation of the seven highest-ranked solutions in the competition, to provide
researchers with a benchmark problem and to establish the state of the art for
this benchmark, with the aim to foster and facilitate research in this area.
The competition used data from the Monash Microgrid, as well as weather data
and energy market data. It then focused on two main challenges: forecasting
renewable energy production and demand, and obtaining an optimal schedule for
the activities (lectures) and on-site batteries that lead to the lowest cost of
energy. The most accurate forecasts were obtained by gradient-boosted tree and
random forest models, and optimization was mostly performed using mixed integer
linear and quadratic programming. The winning method predicted different
scenarios and optimized over all scenarios jointly using a sample average
approximation method.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2022 02:34:12 GMT"
}
] | 1,671,667,200,000 | [
[
"Bergmeir",
"Christoph",
""
],
[
"de Nijs",
"Frits",
""
],
[
"Sriramulu",
"Abishek",
""
],
[
"Abolghasemi",
"Mahdi",
""
],
[
"Bean",
"Richard",
""
],
[
"Betts",
"John",
""
],
[
"Bui",
"Quang",
""
],
[
"Dinh",
"Nam Trong",
""
],
[
"Einecke",
"Nils",
""
],
[
"Esmaeilbeigi",
"Rasul",
""
],
[
"Ferraro",
"Scott",
""
],
[
"Galketiya",
"Priya",
""
],
[
"Genov",
"Evgenii",
""
],
[
"Glasgow",
"Robert",
""
],
[
"Godahewa",
"Rakshitha",
""
],
[
"Kang",
"Yanfei",
""
],
[
"Limmer",
"Steffen",
""
],
[
"Magdalena",
"Luis",
""
],
[
"Montero-Manso",
"Pablo",
""
],
[
"Peralta",
"Daniel",
""
],
[
"Kumar",
"Yogesh Pipada Sunil",
""
],
[
"Rosales-Pérez",
"Alejandro",
""
],
[
"Ruddick",
"Julian",
""
],
[
"Stratigakos",
"Akylas",
""
],
[
"Stuckey",
"Peter",
""
],
[
"Tack",
"Guido",
""
],
[
"Triguero",
"Isaac",
""
],
[
"Yuan",
"Rui",
""
]
] |
2212.10915 | Jiakang Xu | Jiakang Xu, Wolfgang Mayer, HongYu Zhang, Keqing He, Zaiwen Feng | Automatic Semantic Modeling for Structural Data Source with the Prior
Knowledge from Knowledge Base | null | Mathematics 2022, 10, 4778 | 10.3390/math10244778 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A critical step in sharing semantic content online is to map the structural
data source to a public domain ontology. This problem is denoted as the
Relational-To-Ontology Mapping Problem (Rel2Onto). A huge effort and expertise
are required for manually modeling the semantics of data. Therefore, an
automatic approach for learning the semantics of a data source is desirable.
Most of the existing work studies the semantic annotation of source attributes.
However, although critical, the research for automatically inferring the
relationships between attributes is very limited. In this paper, we propose a
novel method for semantically annotating structured data sources using machine
learning, graph matching and modified frequent subgraph mining to amend the
candidate model. In our work, a knowledge graph is used as prior knowledge. Our
evaluation shows that our approach outperforms two state-of-the-art solutions
in tricky cases where only a few semantic models are known.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2022 10:54:59 GMT"
}
] | 1,671,667,200,000 | [
[
"Xu",
"Jiakang",
""
],
[
"Mayer",
"Wolfgang",
""
],
[
"Zhang",
"HongYu",
""
],
[
"He",
"Keqing",
""
],
[
"Feng",
"Zaiwen",
""
]
] |
2212.11011 | Juliette Gamot | Juliette Gamot, Mathieu Balesdent, Arnault Tremolet, Romain Wuilbercq,
Nouredine Melab, El-Ghazali Talbi | Hidden-Variables Genetic Algorithm for Variable-Size Design Space
Optimal Layout Problems with Application to Aerospace Vehicles | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The optimal layout of a complex system such as aerospace vehicles consists in
placing a given number of components in a container in order to minimize one or
several objectives under some geometrical or functional constraints. This paper
presents an extended formulation of this problem as a variable-size design
space (VSDS) problem to take into account a large number of architectural
choices and components allocation during the design process. As a
representative example of such systems, considering the layout of a satellite
module, the VSDS aspect reflects the fact that the optimizer has to choose
between several subdivisions of the components. For instance, one large tank of
fuel might be placed as well as two smaller tanks or three even smaller tanks
for the same amount of fuel. In order to tackle this NP-hard problem, a genetic
algorithm enhanced by an adapted hidden-variables mechanism is proposed. The
latter is illustrated on a toy case and on an aerospace application case
representative of real-world complexity to demonstrate the performance of the
proposed algorithms. The results obtained using the proposed mechanism are
reported and analyzed.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2022 13:32:16 GMT"
}
] | 1,671,667,200,000 | [
[
"Gamot",
"Juliette",
""
],
[
"Balesdent",
"Mathieu",
""
],
[
"Tremolet",
"Arnault",
""
],
[
"Wuilbercq",
"Romain",
""
],
[
"Melab",
"Nouredine",
""
],
[
"Talbi",
"El-Ghazali",
""
]
] |
2212.11214 | Fabr\'icio G\'oes | Fabricio Goes, Zisen Zhou, Piotr Sawicki, Marek Grzes and Daniel G.
Brown | Crowd Score: A Method for the Evaluation of Jokes using Large Language
Model AI Voters as Judges | 11 pages, 3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents the Crowd Score, a novel method to assess the funniness
of jokes using large language models (LLMs) as AI judges. Our method relies on
inducing different personalities into the LLM and aggregating the votes of the
AI judges into a single score to rate jokes. We validate the votes using an
auditing technique that checks if the explanation for a particular vote is
reasonable using the LLM. We tested our methodology on 52 jokes in a crowd of
four AI voters with different humour types: affiliative, self-enhancing,
aggressive and self-defeating. Our results show that few-shot prompting leads
to better results than zero-shot for the voting question. Personality induction
showed that aggressive and self-defeating voters are significantly more
inclined than the affiliative and self-enhancing voters to find jokes from a
set of aggressive/self-defeating jokes funny. The Crowd Score follows the
same trend as human judges by assigning higher scores to jokes that are also
considered funnier by human judges. We believe that our methodology could be
applied to other creative domains such as story, poetry, slogans, etc. It could
help the adoption of a flexible and accurate standard approach to compare
different work in the CC community under a common metric, and, by minimizing
human participation in assessing creative artefacts, it could accelerate the
prototyping of creative artefacts and reduce the cost of hiring human
participants to rate them.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2022 17:41:16 GMT"
}
] | 1,671,667,200,000 | [
[
"Goes",
"Fabricio",
""
],
[
"Zhou",
"Zisen",
""
],
[
"Sawicki",
"Piotr",
""
],
[
"Grzes",
"Marek",
""
],
[
"Brown",
"Daniel G.",
""
]
] |
2212.11517 | Fabio Tanaka | Fabio Tanaka, Claus Aranha | Co-evolving morphology and control of soft robots using a single genome | 8 pages, accepted by 2022 IEEE Symposium Series on Computational
Intelligence | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | When simulating soft robots, both their morphology and their controllers play
important roles in task performance. This paper introduces a new method to
co-evolve these two components in the same process. We do that by using the
hyperNEAT algorithm to generate two separate neural networks in one pass, one
responsible for the design of the robot body structure and the other for the
control of the robot.
The key difference between our method and most existing approaches is that it
does not treat the development of the morphology and the controller as separate
processes. Similar to nature, our method derives both the "brain" and the
"body" of an agent from a single genome and develops them together. While our
approach is more realistic and doesn't require an arbitrary separation of
processes during evolution, it also makes the problem more complex because the
search space for this single genome becomes larger and any mutation to the
genome affects the "brain" and the "body" at the same time.
Additionally, we present a new speciation function that takes into
consideration both the genotypic distance, as is the standard for NEAT, and the
similarity between robot bodies. By using this function, agents with very
different bodies are more likely to be in different species; this allows robots
with different morphologies to have more specialized controllers, since they
won't cross over with other robots that are too different from them.
We evaluate the presented methods on four tasks and observe that, even though
the search space is larger, having a single genome makes the evolution process
converge faster compared to having separate genomes for body and control.
The agents in our population also show morphologies with a high degree of
regularity and controllers capable of coordinating the voxels to produce the
necessary movements.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 07:34:31 GMT"
}
] | 1,671,753,600,000 | [
[
"Tanaka",
"Fabio",
""
],
[
"Aranha",
"Claus",
""
]
] |
2212.11717 | Henri Prade M | Myriam Bounhas and Henri Prade and Gilles Richard | Some recent advances in reasoning based on analogical proportions | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analogical proportions compare pairs of items (a, b) and (c, d) in terms of
their differences and similarities. They play a key role in the formalization
of analogical inference. The paper first discusses how to improve analogical
inference in terms of accuracy and in terms of computational cost. Then it
indicates the potential of analogical proportions for explanation. Finally, it
highlights the close relationship between analogical proportions and
multi-valued dependencies, which reveals an unsuspected aspect of the former.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 14:10:14 GMT"
}
] | 1,671,753,600,000 | [
[
"Bounhas",
"Myriam",
""
],
[
"Prade",
"Henri",
""
],
[
"Richard",
"Gilles",
""
]
] |
2212.11738 | Arnault Pachot | Arnault Pachot, C\'eline Patissier | Towards Sustainable Artificial Intelligence: An Overview of
Environmental Protection Uses and Issues | null | Green and Low-Carbon Economy 2023 | 10.47852/bonviewGLCE3202608 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) is used to create more sustainable production
methods and model climate change, making it a valuable tool in the fight
against environmental degradation. This paper describes the paradox of an
energy-consuming technology serving the ecological challenges of tomorrow. The
study provides an overview of the sectors that use AI-based solutions for
environmental protection. It draws on numerous examples from AI for Green
players to present use cases and concrete examples. In the second part of the
study, the negative impacts of AI on the environment and the emerging
technological solutions to support Green AI are examined. It is also shown that
the research on less energy-consuming AI is motivated more by cost and energy
autonomy constraints than by environmental considerations. This leads to a
rebound effect that favors an increase in the complexity of models. Finally,
the need to integrate environmental indicators into algorithms is discussed.
The environmental dimension is part of the broader ethical problem of AI, and
addressing it is crucial for ensuring the sustainability of AI in the long
term.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 14:31:48 GMT"
}
] | 1,710,115,200,000 | [
[
"Pachot",
"Arnault",
""
],
[
"Patissier",
"Céline",
""
]
] |
2212.11854 | Johannes Jakubik | Johannes Jakubik, Michael V\"ossing, Niklas K\"uhl, Jannis Walk,
Gerhard Satzger | Data-Centric Artificial Intelligence | Accepted for publication at Business & Information Systems
Engineering | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-centric artificial intelligence (data-centric AI) represents an emerging
paradigm emphasizing that the systematic design and engineering of data is
essential for building effective and efficient AI-based systems. The objective
of this article is to introduce practitioners and researchers from the field of
Information Systems (IS) to data-centric AI. We define relevant terms, provide
key characteristics to contrast the data-centric paradigm to the model-centric
one, and introduce a framework for data-centric AI. We distinguish data-centric
AI from related concepts and discuss its longer-term implications for the IS
community.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 16:41:03 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 11:10:01 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Oct 2023 13:37:58 GMT"
},
{
"version": "v4",
"created": "Thu, 18 Jan 2024 11:52:08 GMT"
}
] | 1,705,622,400,000 | [
[
"Jakubik",
"Johannes",
""
],
[
"Vössing",
"Michael",
""
],
[
"Kühl",
"Niklas",
""
],
[
"Walk",
"Jannis",
""
],
[
"Satzger",
"Gerhard",
""
]
] |
2212.11868 | Xiaoyu Zhang | Xiaoyu Zhang, Xin Xin, Dongdong Li, Wenxuan Liu, Pengjie Ren, Zhumin
Chen, Jun Ma, Zhaochun Ren | Variational Reasoning over Incomplete Knowledge Graphs for
Conversational Recommendation | null | null | 10.1145/3539597.3570426 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversational recommender systems (CRSs) often utilize external knowledge
graphs (KGs) to introduce rich semantic information and recommend relevant
items through natural language dialogues. However, original KGs employed in
existing CRSs are often incomplete and sparse, which limits the reasoning
capability in recommendation. Moreover, only a few existing studies exploit
the dialogue context to dynamically refine knowledge from KGs for better
recommendation. To address the above issues, we propose the Variational
Reasoning over Incomplete KGs Conversational Recommender (VRICR). Our key idea
is to incorporate the large dialogue corpus naturally accompanied with CRSs to
enhance the incomplete KGs; and perform dynamic knowledge reasoning conditioned
on the dialogue context. Specifically, we denote the dialogue-specific
subgraphs of KGs as latent variables with categorical priors for adaptive
knowledge graphs refactor. We propose a variational Bayesian method to
approximate posterior distributions over dialogue-specific subgraphs, which not
only leverages the dialogue corpus for restructuring missing entity relations
but also dynamically selects knowledge based on the dialogue context. Finally,
we infuse the dialogue-specific subgraphs to decode the recommendation and
responses. We conduct experiments on two benchmark CRSs datasets. Experimental
results confirm the effectiveness of our proposed method.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:02:21 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Dec 2022 06:41:01 GMT"
}
] | 1,672,012,800,000 | [
[
"Zhang",
"Xiaoyu",
""
],
[
"Xin",
"Xin",
""
],
[
"Li",
"Dongdong",
""
],
[
"Liu",
"Wenxuan",
""
],
[
"Ren",
"Pengjie",
""
],
[
"Chen",
"Zhumin",
""
],
[
"Ma",
"Jun",
""
],
[
"Ren",
"Zhaochun",
""
]
] |
2212.11879 | Abdulaziz Ahmed | Abdulaziz Ahmed, Khalid Y.Aram, Salih Tutun | A Study of Left Before Treatment Complete Emergency Department Patients:
An Optimized Explanatory Machine Learning Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The issue of left before treatment complete (LBTC) patients is common in
emergency departments (EDs). This issue represents a medico-legal risk and may
cause a revenue loss. Thus, understanding the factors that cause patients to
leave before treatment is complete is vital to mitigate and potentially
eliminate these adverse effects. This paper proposes a framework for studying
the factors that affect LBTC outcomes in EDs. The framework integrates machine
learning, metaheuristic optimization, and model interpretation techniques.
Metaheuristic optimization is used for hyperparameter optimization--one of the
main challenges of machine learning model development. Three metaheuristic
optimization algorithms are employed for optimizing the parameters of extreme
gradient boosting (XGB), which are simulated annealing (SA), adaptive simulated
annealing (ASA), and adaptive tabu simulated annealing (ATSA). The optimized
XGB models are used to predict the LBTC outcomes for the patients under
treatment in ED. The designed algorithms are trained and tested using four data
groups resulting from the feature selection phase. The model with the best
predictive performance is interpreted using the SHapley Additive exPlanations (SHAP)
method. The findings show that ATSA-XGB outperformed the other model configurations
with an accuracy, area under the curve (AUC), sensitivity, specificity, and
F1-score of 86.61%, 87.50%, 85.71%, 87.51%, and 86.60%, respectively. The
degree and the direction of effects of each feature were determined and
explained using the SHAP method.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:14:10 GMT"
}
] | 1,671,753,600,000 | [
[
"Ahmed",
"Abdulaziz",
""
],
[
"Aram",
"Khalid Y.",
""
],
[
"Tutun",
"Salih",
""
]
] |
2212.11892 | Abdulaziz Ahmed | Abdulaziz Ahmed, Mohammed Al-Maamari, Mohammad Firouz, Dursun Delen | An Adaptive Simulated Annealing-Based Machine Learning Approach for
Developing an E-Triage Tool for Hospital Emergency Operations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patient triage at emergency departments (EDs) is necessary to prioritize care
for patients with critical and time-sensitive conditions. Different tools are
used for patient triage and one of the most common ones is the emergency
severity index (ESI), which has a scale of five levels, where level 1 is the
most urgent and level 5 is the least urgent. This paper proposes a framework
for utilizing machine learning to develop an e-triage tool that can be used at
EDs. A large retrospective dataset of ED patient visits is obtained from the
electronic health record of a healthcare provider in the Midwest of the US for
three years. However, the main challenge of using machine learning algorithms
is that most of them have many parameters and without optimizing these
parameters, developing a high-performance model is not possible. This paper
proposes an approach to optimize the hyperparameters of machine learning. The
metaheuristic optimization algorithms simulated annealing (SA) and adaptive
simulated annealing (ASA) are proposed to optimize the parameters of extreme
gradient boosting (XGB) and categorical boosting (CaB). The newly proposed
algorithms are SA-XGB, ASA-XGB, SA-CaB, ASA-CaB. Grid search (GS), which is a
traditional approach used for machine learning fine-tuning, is also used to
fine-tune the parameters of XGB and CaB, which are named GS-XGB and GS-CaB. The
six algorithms are trained and tested using eight data groups obtained from the
feature selection phase. The results show ASA-CaB outperformed all the proposed
algorithms with accuracy, precision, recall, and f1 of 83.3%, 83.2%, 83.3%,
83.2%, respectively.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:25:12 GMT"
}
] | 1,671,753,600,000 | [
[
"Ahmed",
"Abdulaziz",
""
],
[
"Al-Maamari",
"Mohammed",
""
],
[
"Firouz",
"Mohammad",
""
],
[
"Delen",
"Dursun",
""
]
] |
2212.11901 | Denis Ponomaryov | Alexander Demin and Denis Ponomaryov | Machine Learning with Probabilistic Law Discovery: A Concise
Introduction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Probabilistic Law Discovery (PLD) is a logic based Machine Learning method,
which implements a variant of probabilistic rule learning. In several aspects,
PLD is close to Decision Tree/Random Forest methods, but it differs
significantly in how relevant rules are defined. The learning procedure of PLD
solves the optimization problem related to the search for rules (called
probabilistic laws), which have a minimal length and relatively high
probability. At inference, ensembles of these rules are used for prediction.
Probabilistic laws are human-readable and PLD based models are transparent and
inherently interpretable. Applications of PLD include
classification/clustering/regression tasks, as well as time series
analysis/anomaly detection and adaptive (robotic) control. In this paper, we
outline the main principles of PLD, highlight its benefits and limitations and
provide some application guidelines.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:40:13 GMT"
}
] | 1,671,753,600,000 | [
[
"Demin",
"Alexander",
""
],
[
"Ponomaryov",
"Denis",
""
]
] |
2212.12050 | Simon Odense | Simon Odense, Artur d'Avila Garcez | A Semantic Framework for Neural-Symbolic Computing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Two approaches to AI, neural networks and symbolic systems, have been proven
very successful for an array of AI problems. However, neither has been able to
achieve the general reasoning ability required for human-like intelligence. It
has been argued that this is due to inherent weaknesses in each approach.
Luckily, these weaknesses appear to be complementary, with symbolic systems
being adept at the kinds of things neural networks have trouble with and
vice-versa. The field of neural-symbolic AI attempts to exploit this asymmetry
by combining neural networks and symbolic AI into integrated systems. Often
this has been done by encoding symbolic knowledge into neural networks.
Unfortunately, although many different methods for this have been proposed,
there is no common definition of an encoding to compare them. We seek to
rectify this problem by introducing a semantic framework for neural-symbolic
AI, which is then shown to be general enough to account for a large family of
neural-symbolic systems. We provide a number of examples and proofs of the
application of the framework to the neural encoding of various forms of
knowledge representation and neural network. These at first sight disparate
approaches are all shown to fall within the framework's formal definition of
what we call semantic encoding for neural-symbolic AI.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 22:00:58 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 18:11:24 GMT"
}
] | 1,681,862,400,000 | [
[
"Odense",
"Simon",
""
],
[
"Garcez",
"Artur d'Avila",
""
]
] |
2212.12139 | Fucai Ke | Fucai Ke, Weiqing Wang, Weicong Tan, Lan Du, Yuan Jin, Yujin Huang and
Hongzhi Yin | HiTSKT: A Hierarchical Transformer Model for Session-Aware Knowledge
Tracing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge tracing (KT) aims to leverage students' learning histories to
estimate their mastery levels on a set of pre-defined skills, based on which
the corresponding future performance can be accurately predicted. As an
important way of providing personalized experience for online education, KT has
gained increased attention in recent years. In practice, a student's learning
history comprises answers to sets of massed questions, each known as a session,
rather than merely being a sequence of independent answers. Theoretically,
within and across these sessions, students' learning dynamics can be very
different. Therefore, how to effectively model the dynamics of students'
knowledge states within and across the sessions is crucial for handling the KT
problem. Most existing KT models treat a student's learning records as a single
continuing sequence, without capturing the sessional shift of the student's
knowledge state. To address the above issue, we propose a novel hierarchical
transformer model, named HiTSKT, which comprises an interaction(-level) encoder to
capture the knowledge a student acquires within a session, and a
session(-level) encoder to summarise acquired knowledge across the past
sessions. To predict an interaction in the current session, a knowledge
retriever integrates the summarised past-session knowledge with the previous
interactions' information into proper knowledge representations. These
representations are then used to compute the student's current knowledge state.
Additionally, to model the student's long-term forgetting behaviour across the
sessions, a power-law-decay attention mechanism is designed and deployed in the
session encoder, allowing it to emphasize more on the recent sessions.
Extensive experiments on three public datasets demonstrate that HiTSKT achieves
new state-of-the-art performance on all the datasets compared with six
state-of-the-art KT models.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2022 04:22:42 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 12:52:16 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jun 2023 13:05:01 GMT"
}
] | 1,686,096,000,000 | [
[
"Ke",
"Fucai",
""
],
[
"Wang",
"Weiqing",
""
],
[
"Tan",
"Weicong",
""
],
[
"Du",
"Lan",
""
],
[
"Jin",
"Yuan",
""
],
[
"Huang",
"Yujin",
""
],
[
"Yin",
"Hongzhi",
""
]
] |
2212.12154 | Arec Jamgochian | Arec Jamgochian, Anthony Corso, Mykel J. Kochenderfer | Online Planning for Constrained POMDPs with Continuous Spaces through
Dual Ascent | Submitted to ICAPS-23 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rather than augmenting rewards with penalties for undesired behavior,
Constrained Partially Observable Markov Decision Processes (CPOMDPs) plan
safely by imposing inviolable hard constraint value budgets. Previous work
performing online planning for CPOMDPs has only been applied to discrete action
and observation spaces. In this work, we propose algorithms for online CPOMDP
planning for continuous state, action, and observation spaces by combining dual
ascent with progressive widening. We empirically compare the effectiveness of
our proposed algorithms on continuous CPOMDPs that model both toy and
real-world safety-critical problems. Additionally, we compare against the use
of online solvers for continuous unconstrained POMDPs that scalarize cost
constraints into rewards, and investigate the effect of optimistic cost
propagation.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2022 05:22:39 GMT"
}
] | 1,672,012,800,000 | [
[
"Jamgochian",
"Arec",
""
],
[
"Corso",
"Anthony",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2212.12252 | Bhavuk Kalra | Bhavuk Kalra | Generalised agent for solving higher board states of tic tac toe using
Reinforcement Learning | 29 pages, 20 figures, 2022 Seventh International Conference on
Parallel, Distributed and Grid Computing(PDGC) | null | 10.1109/PDGC56933.2022.10053317 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tic Tac Toe is amongst the most well-known games. It has already been shown
that it is a biased game, giving more chances to win for the first player
leaving only a draw or a loss as possibilities for the opponent, assuming both
the players play optimally. Thus, on average, the majority of games played result
in a draw. The majority of the latest research on how to solve a tic tac toe
board state employs strategies such as Genetic Algorithms, Neural Networks,
Co-Evolution, and Evolutionary Programming. But these approaches deal with a
trivial board state of 3X3 and very little research has been done for a
generalized algorithm to solve 4x4, 5x5, 6x6, and many higher states. Although
the Min-Max algorithm exists, it takes a long time to come up with an ideal
move due to the recursive nature of its implementation. A sample has
been created on this link \url{https://bk-tic-tac-toe.herokuapp.com/} to prove
this fact. This is the main problem this study aims to solve, i.e., providing
a generalized algorithm (approximate method, learning-based) for higher board
states of tic tac toe to make precise moves in a short period.
Also, the code changes needed to accommodate higher board states will be
nominal. The idea is to pose the tic tac toe game as a well-posed learning
problem. The study and its results are promising, giving a high win to draw
ratio with each epoch of training. This study could also be encouraging for
other researchers to apply the same algorithm to other similar board games like
Minesweeper, Chess, and GO for finding efficient strategies and comparing the
results.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2022 10:58:27 GMT"
}
] | 1,678,838,400,000 | [
[
"Kalra",
"Bhavuk",
""
]
] |
2212.12470 | Angela Lopez | \'Angela L\'opez-Cardona and Guillermo Bern\'ardez and Pere Barlet-Ros
and Albert Cabellos-Aparicio | Proximal Policy Optimization with Graph Neural Networks for Optimal
Power Flow | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Optimal Power Flow (OPF) is a very traditional research area within the power
systems field that seeks the optimal operating point of electric power
plants, and which needs to be solved every few minutes in real-world scenarios.
However, due to the nonconvexities that arise in power generation systems,
there is not yet a fast, robust solution technique for the full Alternating
Current Optimal Power Flow (ACOPF). In the last decades, power grids have
evolved into a typical dynamic, non-linear and large-scale control system,
known as the power system, so searching for better and faster ACOPF solutions
is becoming crucial. Appearance of Graph Neural Networks (GNN) has allowed the
natural use of Machine Learning (ML) algorithms on graph data, such as power
networks. On the other hand, Deep Reinforcement Learning (DRL) is known for its
powerful capability to solve complex decision-making problems. Although
solutions that use these two methods separately are beginning to appear in the
literature, none has yet combined the advantages of both. We propose a novel
architecture based on the Proximal Policy Optimization algorithm with Graph
Neural Networks to solve the Optimal Power Flow. The objective is to design an
architecture that learns how to solve the optimization problem and that is at
the same time able to generalize to unseen scenarios. We compare our solution
with the DCOPF in terms of cost after training our DRL agent on the IEEE 30-bus
system and then computing the OPF on that base network with topology
changes.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2022 17:00:00 GMT"
}
] | 1,672,012,800,000 | [
[
"López-Cardona",
"Ángela",
""
],
[
"Bernárdez",
"Guillermo",
""
],
[
"Barlet-Ros",
"Pere",
""
],
[
"Cabellos-Aparicio",
"Albert",
""
]
] |
2212.12560 | Matej Zecevic | Kieran Didi and Matej Ze\v{c}evi\'c | On How AI Needs to Change to Advance the Science of Drug Discovery | Main paper: 6 pages, References: 1.5 pages. Main paper: 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research around AI for Science has seen significant success since the rise of
deep learning models over the past decade, even with longstanding challenges
such as protein structure prediction. However, this fast development inevitably
made their flaws apparent -- especially in domains of reasoning where
understanding the cause-effect relationship is important. One such domain is
drug discovery, in which such understanding is required to make sense of data
otherwise plagued by spurious correlations. Said spuriousness only becomes
worse with the ongoing trend of ever-increasing amounts of data in the life
sciences and thereby restricts researchers in their ability to understand
disease biology and create better therapeutics. Therefore, to advance the
science of drug discovery with AI it is becoming necessary to formulate the key
problems in the language of causality, which allows the explication of
modelling assumptions needed for identifying true cause-effect relationships.
In this attention paper, we present causal drug discovery as the craft of
creating models that ground the process of drug discovery in causal reasoning.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2022 19:35:51 GMT"
}
] | 1,672,099,200,000 | [
[
"Didi",
"Kieran",
""
],
[
"Zečević",
"Matej",
""
]
] |
2212.12575 | Matej Zecevic | Matej Ze\v{c}evi\'c and Moritz Willig and Jonas Seng and Florian Peter
Busch | Continual Causal Abstractions | Main paper: 3 pages, 1 figure. References: 1 page | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short paper discusses continually updated causal abstractions as a
potential direction of future research. The key idea is to revise the existing
level of causal abstraction to a different level of detail that is both
consistent with the history of observed data and more effective in solving a
given task.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2022 20:12:53 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 22:03:08 GMT"
}
] | 1,673,308,800,000 | [
[
"Zečević",
"Matej",
""
],
[
"Willig",
"Moritz",
""
],
[
"Seng",
"Jonas",
""
],
[
"Busch",
"Florian Peter",
""
]
] |
2212.12757 | Assia Kamal-Idrissi | Abdelouadoud Kerarmi, Assia Kamal-idrissi, Amal El Fallah Seghrouchni | An optimized fuzzy logic model for proactive maintenance | 16 pages in single column format, 11 figures, 12th International
Conference on Artificial Intelligence, Soft Computing and Applications (AIAA
2022) December 22 ~ 24, 2022, Sydney, Australia | null | 10.5121/csit.2022.122303 | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Fuzzy logic has been proposed in previous studies for machine diagnosis, to
overcome different drawbacks of the traditional diagnostic approaches used.
Among these approaches Failure Mode and Effect Critical Analysis method(FMECA)
attempts to identify potential modes and treat failures before they occur based
on subjective expert judgments. Although several versions of fuzzy logic have
been used to improve or replace FMECA, which is an extremely cost-intensive
approach because it evaluates each failure mode separately, these propositions
have neither explicitly addressed the combinatorial complexity nor justified
the choice of membership functions in fuzzy logic
modeling. Within this context, we develop an optimization-based approach
referred to as the Integrated Truth Table and Fuzzy Logic Model (ITTFLM) that smartly
generates fuzzy logic rules using Truth Tables. The ITTFLM was tested on fan
data collected in real-time from a plant machine. In the experiment, three
types of membership functions (Triangular, Trapezoidal, and Gaussian) were
used. The ITTFLM can generate outputs in 5 ms. The results demonstrate that the
model based on the trapezoidal membership functions identifies the failure
states with high accuracy and can deal with large numbers of rules, thus
meeting the real-time constraints that usually impact user
experience.
| [
{
"version": "v1",
"created": "Sat, 24 Dec 2022 15:49:46 GMT"
}
] | 1,672,099,200,000 | [
[
"Kerarmi",
"Abdelouadoud",
""
],
[
"Kamal-idrissi",
"Assia",
""
],
[
"Seghrouchni",
"Amal El Fallah",
""
]
] |
2212.13537 | M.Z. Naser | M.Z. Naser | Simplifying Causality: A Brief Review of Philosophical Views and
Definitions with Examples from Economics, Education, Medicine, Policy,
Physics and Engineering | Under review | null | 10.1016/j.sheji.2024.01.002 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This short paper compiles the big ideas behind some philosophical views,
definitions, and examples of causality. This collection spans the realms of the
four commonly adopted approaches to causality: Hume's regularity,
counterfactual, manipulation, and mechanisms. This short review presents
simplified views and definitions and then supplements them with examples from
various fields, including economics, education, medicine, politics, physics,
and engineering. We hope that this short review comes
in handy for new and interested readers with little knowledge of causality and
causal inference.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2022 16:16:36 GMT"
}
] | 1,711,411,200,000 | [
[
"Naser",
"M. Z.",
""
]
] |
2212.13631 | Gege Wen | Feras A. Batarseh, Priya L. Donti, J\'an Drgo\v{n}a, Kristen Fletcher,
Pierre-Adrien Hanania, Melissa Hatton, Srinivasan Keshav, Bran Knowles,
Raphaela Kotsch, Sean McGinnis, Peetak Mitra, Alex Philp, Jim Spohrer, Frank
Stein, Meghna Tare, Svitlana Volkov, Gege Wen | Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to
Climate Challenges | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Climate change is one of the most pressing challenges of our time, requiring
rapid action across society. As artificial intelligence tools (AI) are rapidly
deployed, it is therefore crucial to understand how they will impact climate
action. On the one hand, AI can support applications in climate change
mitigation (reducing or preventing greenhouse gas emissions), adaptation
(preparing for the effects of a changing climate), and climate science. These
applications have implications in areas ranging as widely as energy,
agriculture, and finance. At the same time, AI is used in many ways that hinder
climate action (e.g., by accelerating the use of greenhouse gas-emitting fossil
fuels). In addition, AI technologies have a carbon and energy footprint
themselves. This symposium brought together participants from across academia,
industry, government, and civil society to explore these intersections of AI
with climate change, as well as how each of these sectors can contribute to
solutions.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2022 22:28:56 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jan 2023 20:38:44 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Jan 2023 03:16:30 GMT"
},
{
"version": "v4",
"created": "Fri, 6 Jan 2023 04:33:59 GMT"
},
{
"version": "v5",
"created": "Mon, 30 Jan 2023 00:07:05 GMT"
}
] | 1,675,123,200,000 | [
[
"Batarseh",
"Feras A.",
""
],
[
"Donti",
"Priya L.",
""
],
[
"Drgoňa",
"Ján",
""
],
[
"Fletcher",
"Kristen",
""
],
[
"Hanania",
"Pierre-Adrien",
""
],
[
"Hatton",
"Melissa",
""
],
[
"Keshav",
"Srinivasan",
""
],
[
"Knowles",
"Bran",
""
],
[
"Kotsch",
"Raphaela",
""
],
[
"McGinnis",
"Sean",
""
],
[
"Mitra",
"Peetak",
""
],
[
"Philp",
"Alex",
""
],
[
"Spohrer",
"Jim",
""
],
[
"Stein",
"Frank",
""
],
[
"Tare",
"Meghna",
""
],
[
"Volkov",
"Svitlana",
""
],
[
"Wen",
"Gege",
""
]
] |
2212.13725 | Qihao (Joe) Shi | Qihao Shi, Bingyang Fu, Can Wang, Jiawei Chen, Sheng Zhou, Yan Feng,
Chun Chen | Robust Sequence Networked Submodular Maximization | 12 pages, 14 figures, aaai2023 conference accepted | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the \underline{R}obust \underline{o}ptimization for
\underline{se}quence \underline{Net}worked \underline{s}ubmodular maximization
(RoseNets) problem. We interweave the robust optimization with the sequence
networked submodular maximization. The elements are connected by a directed
acyclic graph and the objective function is not submodular on the elements but
on the edges in the graph. Under such a networked submodular scenario, the impact
of removing an element from a sequence depends both on its position in the
sequence and in the network. This makes the existing robust algorithms
inapplicable. In this paper, we take the first step to study the RoseNets
problem. We design a robust greedy algorithm, which is robust against the
removal of an arbitrary subset of the selected elements. The approximation
ratio of the algorithm depends both on the number of the removed elements and
the network topology. We further conduct experiments on real applications of
recommendation and link prediction. The experimental results demonstrate the
effectiveness of the proposed algorithm.
| [
{
"version": "v1",
"created": "Wed, 28 Dec 2022 07:20:03 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 14:02:04 GMT"
}
] | 1,674,777,600,000 | [
[
"Shi",
"Qihao",
""
],
[
"Fu",
"Bingyang",
""
],
[
"Wang",
"Can",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Zhou",
"Sheng",
""
],
[
"Feng",
"Yan",
""
],
[
"Chen",
"Chun",
""
]
] |
2212.13819 | Ekaterina Nikonova | Ekaterina Nikonova, Cheng Xue, Jochen Renz | Don't do it: Safer Reinforcement Learning With Rule-based Guidance | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | During training, reinforcement learning systems interact with the world
without considering the safety of their actions. When deployed into the real
world, such systems can be dangerous and cause harm to their surroundings.
Often, dangerous situations can be mitigated by defining a set of rules that
the system should not violate under any conditions. For example, in robot
navigation, one safety rule would be to avoid colliding with surrounding
objects and people. In this work, we define safety rules in terms of the
relationships between the agent and objects and use them to prevent
reinforcement learning systems from performing potentially harmful actions. We
propose a new safe epsilon-greedy algorithm that uses safety rules to override
agents' actions if they are considered to be unsafe. In our experiments, we
show that a safe epsilon-greedy policy significantly increases the safety of
the agent during training, improves the learning efficiency resulting in much
faster convergence, and achieves better performance than the base model.
| [
{
"version": "v1",
"created": "Wed, 28 Dec 2022 13:42:56 GMT"
}
] | 1,672,272,000,000 | [
[
"Nikonova",
"Ekaterina",
""
],
[
"Xue",
"Cheng",
""
],
[
"Renz",
"Jochen",
""
]
] |
2212.14462 | Mojtaba Elahi | Mojtaba Elahi and Jussi Rintanen | Planning with Complex Data Types in PDDL | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Practically all of the planning research is limited to states represented in
terms of Boolean and numeric state variables. Many practical problems, for
example, planning inside complex software systems, require far more complex
data types, and even real-world planning in many cases requires concepts such
as sets of objects, which are not convenient to express in modeling languages
with scalar types only. In this work, we investigate a modeling language for
complex software systems, which supports complex data types such as sets,
arrays, records, and unions. We give a reduction of a broad range of complex
data types and their operations to Boolean logic, and then map this
representation further to PDDL to be used with domain-independent PDDL
planners. We evaluate the practicality of this approach, and provide solutions
to some of the issues that arise in the PDDL translation.
| [
{
"version": "v1",
"created": "Thu, 29 Dec 2022 21:19:22 GMT"
}
] | 1,672,617,600,000 | [
[
"Elahi",
"Mojtaba",
""
],
[
"Rintanen",
"Jussi",
""
]
] |
2301.01837 | Erman Acar | Erman Acar, Andrea De Domenico, Krishna Manoorkar and Mattia
Panettiere | A Meta-Learning Algorithm for Interrogative Agendas | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Explainability is a key challenge and a major research theme in AI research
for developing intelligent systems that are capable of working with humans more
effectively. An obvious choice in developing explainable intelligent systems
relies on employing knowledge representation formalisms which are inherently
tailored towards expressing human knowledge e.g., interrogative agendas. In the
scope of this work, we focus on formal concept analysis (FCA), a standard
knowledge representation formalism, to express interrogative agendas, and in
particular to categorize objects w.r.t. a given set of features. Several
FCA-based algorithms have already been in use for standard machine learning
tasks such as classification and outlier detection. These algorithms use a
single concept lattice for such a task, meaning that the set of features used
for the categorization is fixed. Different sets of features may have different
importance in that categorization; we call a set of features an agenda. In many
applications a correct or good agenda for categorization is not known
beforehand. In this paper, we propose a meta-learning algorithm to construct a
good interrogative agenda explaining the data. Such an algorithm is meant to call
existing FCA-based classification and outlier detection algorithms iteratively,
to increase their accuracy and reduce their sample complexity. The proposed
method assigns a measure of importance to the different sets of features used in the
categorization, hence making the results more explainable.
| [
{
"version": "v1",
"created": "Wed, 4 Jan 2023 22:09:36 GMT"
}
] | 1,672,963,200,000 | [
[
"Acar",
"Erman",
""
],
[
"De Domenico",
"Andrea",
""
],
[
"Manoorkar",
"Krishna",
""
],
[
"Panettiere",
"Mattia",
""
]
] |
2301.02758 | Alexis Tsoukias | Alberto Colorni and Alexis Tsouki\`as | What is a decision problem? | null | null | null | Cahier du LAMSADE 404 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a general framework about what is a decision problem. Our
motivation is related to the fact that decision analysis and operational
research are structured (as disciplines) around classes of methods, while
instead we should first characterise the decision problems our clients present
us. For this purpose we introduce a new framework, independent from any
existing method, based upon primitives provided by (or elicited from) the
client. We show that the number of archetypal decision problems is finite, and
so is the number of archetypal decision support methods.
| [
{
"version": "v1",
"created": "Sat, 7 Jan 2023 01:03:08 GMT"
}
] | 1,673,308,800,000 | [
[
"Colorni",
"Alberto",
""
],
[
"Tsoukiàs",
"Alexis",
""
]
] |
2301.02781 | Yinyu Lan | Yinyu Lan, Shizhu He, Kang Liu, Jun Zhao | Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs (KGs) play a crucial role in many applications, such as
question answering, but incompleteness is an urgent issue for their broad
application. Much research in knowledge graph completion (KGC) has been
performed to resolve this issue. The methods of KGC can be classified into two
major categories: rule-based reasoning and embedding-based reasoning. The
former has high accuracy and good interpretability, but a major challenge is to
obtain effective rules on large-scale KGs. The latter has good efficiency and
scalability, but it relies heavily on data richness and cannot fully use domain
knowledge in the form of logical rules. We propose a novel method that injects
rules and learns representations iteratively to take full advantage of rules
and embeddings. Specifically, we model the conclusions of rule groundings as
0-1 variables and use a rule confidence regularizer to remove the uncertainty
of the conclusions. The proposed approach has the following advantages: 1) It
combines the benefits of both rules and knowledge graph embeddings (KGEs) and
achieves a good balance between efficiency and scalability. 2) It uses an
iterative method to continuously improve KGEs and remove incorrect rule
conclusions. Evaluations on two public datasets show that our method
outperforms the current state-of-the-art methods, improving performance by
2.7\% and 4.3\% in mean reciprocal rank (MRR).
| [
{
"version": "v1",
"created": "Sat, 7 Jan 2023 05:24:29 GMT"
}
] | 1,673,308,800,000 | [
[
"Lan",
"Yinyu",
""
],
[
"He",
"Shizhu",
""
],
[
"Liu",
"Kang",
""
],
[
"Zhao",
"Jun",
""
]
] |
2301.02983 | Fangzhi Xu | Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling
Zhang | Mind Reasoning Manners: Enhancing Type Perception for Generalized
Zero-shot Logical Reasoning over Text | 12 pages, 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The logical reasoning task involves diverse types of complex reasoning over text,
based on the form of multiple-choice question answering. Given the context,
question and a set of options as the input, previous methods achieve superior
performances on the full-data setting. However, the current benchmark dataset
makes the idealized assumption that the reasoning type distribution on the train
split is close to the test split, which is inconsistent with many real
application scenarios. To address it, there remain two problems to be studied:
(1) How is the zero-shot capability of the models (train on seen types and test
on unseen types)? (2) How to enhance the perception of reasoning types for the
models? For problem 1, we propose a new benchmark for generalized zero-shot
logical reasoning, named ZsLR. It includes six splits based on the three type
sampling strategies. For problem 2, a type-aware model TaCo is proposed. It
utilizes both the heuristic input reconstruction and the contrastive learning
to improve the type perception in the global representation. Extensive
experiments on both the zero-shot and full-data settings prove the superiority
of TaCo over the state-of-the-art methods. Also, we experiment and verify the
generalization capability of TaCo on another logical reasoning dataset.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2023 05:24:34 GMT"
}
] | 1,673,308,800,000 | [
[
"Xu",
"Fangzhi",
""
],
[
"Liu",
"Jun",
""
],
[
"Lin",
"Qika",
""
],
[
"Zhao",
"Tianzhe",
""
],
[
"Zhang",
"Jian",
""
],
[
"Zhang",
"Lingling",
""
]
] |
2301.03013 | Ritesh Chandra | Ritesh Chandra, Sadhana Tiwari, Sonali Agarwal, Navjot Singh | Semantic rule Web-based Diagnosis and Treatment of Vector-Borne Diseases
using SWRL rules | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Vector-borne diseases (VBDs) are infections caused by parasites, bacteria, and
viruses transmitted through the bites of infected vectors such as ticks,
mosquitoes, triatomine bugs, blackflies, and
sandflies. If these diseases are not properly treated within a reasonable time
frame, the mortality rate may rise. In this work, we propose a set of
ontologies that will help in the diagnosis and treatment of vector-borne
diseases. For developing VBD's ontology, electronic health records taken from
the Indian Health Records website, text data generated from Indian government
medical mobile applications, and doctors' prescribed handwritten notes of
patients are used as input. This data is then converted into correct text using
Optical Character Recognition (OCR) and a spelling checker after
pre-processing. Natural Language Processing (NLP) is applied for entity
extraction from text data for making Resource Description Framework (RDF)
medical data with the help of the Patient Clinical Data (PCD) ontology.
Afterwards, Basic Formal Ontology (BFO), National Vector Borne Disease Control
Program (NVBDCP) guidelines, and RDF medical data are used to develop
ontologies for VBDs, and Semantic Web Rule Language (SWRL) rules are applied
for diagnosis and treatment. The developed ontology helps in the construction
of decision support systems (DSS) for the NVBDCP to control these diseases.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2023 10:32:38 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 16:36:36 GMT"
}
] | 1,675,209,600,000 | [
[
"Chandra",
"Ritesh",
""
],
[
"Tiwari",
"Sadhana",
""
],
[
"Agarwal",
"Sonali",
""
],
[
"Singh",
"Navjot",
""
]
] |
2301.03094 | Jonas Witt | Jonas Witt, Stef Rasing, Sebastijan Duman\v{c}i\'c, Tias Guns and
Claus-Christian Carbon | A Divide-Align-Conquer Strategy for Program Synthesis | 11 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major bottleneck in search-based program synthesis is the exponentially
growing search space which makes learning large programs intractable. Humans
mitigate this problem by leveraging the compositional nature of the real world:
In structured domains, a logical specification can often be decomposed into
smaller, complementary solution programs. We show that compositional
segmentation can be applied in the programming by examples setting to divide
the search for large programs across multiple smaller program synthesis
problems. For each example, we search for a decomposition into smaller units
which maximizes the reconstruction accuracy in the output under a latent task
program. A structural alignment of the constituent parts in the input and
output leads to pairwise correspondences used to guide the program synthesis
search. In order to align the input/output structures, we make use of the
Structure-Mapping Theory (SMT), a formal model of human analogical reasoning
which originated in the cognitive sciences. We show that decomposition-driven
program synthesis with structural alignment outperforms Inductive Logic
Programming (ILP) baselines on string transformation tasks even with minimal
knowledge priors. Unlike existing methods, the predictive accuracy of our agent
monotonically increases for additional examples and achieves an average time
complexity of $\mathcal{O}(m)$ in the number $m$ of partial programs for highly
structured domains such as strings. We extend this method to the complex
setting of visual reasoning in the Abstraction and Reasoning Corpus (ARC) for
which ILP methods were previously infeasible.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2023 19:10:55 GMT"
}
] | 1,673,308,800,000 | [
[
"Witt",
"Jonas",
""
],
[
"Rasing",
"Stef",
""
],
[
"Dumančić",
"Sebastijan",
""
],
[
"Guns",
"Tias",
""
],
[
"Carbon",
"Claus-Christian",
""
]
] |
2301.03283 | Zhaohong Deng | Qiongdan Lou, Zhaohong Deng, Kup-Sze Choi, Shitong Wang | A Robust Multilabel Method Integrating Rule-based Transparent Model,
Soft Label Correlation Learning and Label Noise Resistance | This paper has been accepted by IEEE Transactions on Fuzzy Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model transparency, label correlation learning and the robustness to label
noise are crucial for multilabel learning. However, few existing methods study
these three characteristics simultaneously. To address this challenge, we
propose the robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS) with
three mechanisms. First, we design a soft label learning mechanism to reduce
the effect of label noise by explicitly measuring the interactions between
labels, which is also the basis of the other two mechanisms. Second, the
rule-based TSK FS is used as the base model to efficiently model the inference
relationship between features and soft labels in a more transparent way than
many existing multilabel models. Third, to further improve the performance of
multilabel learning, we build a correlation enhancement learning mechanism
based on the soft label space and the fuzzy feature space. Extensive
experiments are conducted to demonstrate the superiority of the proposed
method.
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2023 11:54:14 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 16:58:50 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 13:58:57 GMT"
}
] | 1,695,686,400,000 | [
[
"Lou",
"Qiongdan",
""
],
[
"Deng",
"Zhaohong",
""
],
[
"Choi",
"Kup-Sze",
""
],
[
"Wang",
"Shitong",
""
]
] |
2301.03913 | Dennis Soemers | Matthew Stephenson and Dennis J.N.J. Soemers and \'Eric Piette and
Cameron Browne | Measuring Board Game Distance | Accepted at the Computers and Games 2022 conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a general approach for measuring distances between board
games within the Ludii general game system. These distances are calculated
using a previously published set of general board game concepts, each of which
represents a common game idea or shared property. Our results compare and
contrast two different measures of distance, highlighting the subjective nature
of such metrics and discussing the different ways that they can be interpreted.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2023 11:34:57 GMT"
}
] | 1,673,395,200,000 | [
[
"Stephenson",
"Matthew",
""
],
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Piette",
"Éric",
""
],
[
"Browne",
"Cameron",
""
]
] |
2301.04709 | Atticus Geiger | Atticus Geiger and Chris Potts and Thomas Icard | Causal Abstraction for Faithful Model Interpretation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A faithful and interpretable explanation of an AI model's behavior and
internal structure is a high-level explanation that is human-intelligible but
also consistent with the known, but often opaque low-level causal details of
the model. We argue that the theory of causal abstraction provides the
mathematical foundations for the desired kinds of model explanations. In causal
abstraction analysis, we use interventions on model-internal states to
rigorously assess whether an interpretable high-level causal model is a
faithful description of an AI model. Our contributions in this area are: (1) We
generalize causal abstraction to cyclic causal structures and typed high-level
variables. (2) We show how multi-source interchange interventions can be used
to conduct causal abstraction analyses. (3) We define a notion of approximate
causal abstraction that allows us to assess the degree to which a high-level
causal model is a causal abstraction of a lower-level one. (4) We prove
constructive causal abstraction can be decomposed into three operations we
refer to as marginalization, variable-merge, and value-merge. (5) We formalize
the XAI methods of LIME, causal effect estimation, causal mediation analysis,
iterated nullspace projection, and circuit-based explanations as special cases
of causal abstraction analysis.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2023 20:42:41 GMT"
}
] | 1,673,568,000,000 | [
[
"Geiger",
"Atticus",
""
],
[
"Potts",
"Chris",
""
],
[
"Icard",
"Thomas",
""
]
] |
2301.04790 | Jieyu Li | Jieyu Li, Lu Chen, Ruisheng Cao, Su Zhu, Hongshen Xu, Zhi Chen,
Hanchong Zhang, Kai Yu | On the Structural Generalization in Text-to-SQL | The experiment results of T5 and T5-Picard in Table 5 and Table 6 are
not correct because we made mistakes in the evaluation codes | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exploring the generalization of a text-to-SQL parser is essential for a
system to automatically adapt to real-world databases. Previous works provided
investigations focusing on lexical diversity, including the influence of
synonyms and perturbations in both natural language questions and databases.
However, research on the structural variety of database schema~(DS) is
deficient. Specifically, confronted with the same input question, the target
SQL is probably represented in different ways when the DS comes to a different
structure. In this work, we provide in-depth discussions about the structural
generalization of text-to-SQL tasks. We observe that current datasets are too
templated to study structural generalization. To collect eligible test data, we
propose a framework to generate novel text-to-SQL data via automatic and
synchronous (DS, SQL) pair altering. In the experiments, significant
performance reduction when evaluating well-trained text-to-SQL models on the
synthetic samples demonstrates the limitation of current research regarding
structural generalization. According to comprehensive analysis, we suggest the
practical reason is the overfitting of (NL, SQL) patterns.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2023 02:52:51 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Jan 2023 11:52:55 GMT"
}
] | 1,674,518,400,000 | [
[
"Li",
"Jieyu",
""
],
[
"Chen",
"Lu",
""
],
[
"Cao",
"Ruisheng",
""
],
[
"Zhu",
"Su",
""
],
[
"Xu",
"Hongshen",
""
],
[
"Chen",
"Zhi",
""
],
[
"Zhang",
"Hanchong",
""
],
[
"Yu",
"Kai",
""
]
] |
2301.04993 | Marija Slavkovik | Inga Str\"umke and Marija Slavkovik and Clemens Stachl | Against Algorithmic Exploitation of Human Vulnerabilities | arXiv admin note: text overlap with arXiv:2203.00317 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Decisions such as which movie to watch next, which song to listen to, or
which product to buy online, are increasingly influenced by recommender systems
and user models that incorporate information on users' past behaviours,
preferences, and digitally created content. Machine learning models that enable
recommendations and that are trained on user data may unintentionally leverage
information on human characteristics that are considered vulnerabilities, such
as depression, young age, or gambling addiction. The use of algorithmic
decisions based on latent vulnerable state representations could be considered
manipulative and could have a deteriorating impact on the condition of
vulnerable individuals. In this paper, we are concerned with the problem of
machine learning models inadvertently modelling vulnerabilities, and want to
raise awareness for this issue to be considered in legislation and AI ethics.
Hence, we define and describe common vulnerabilities, and illustrate cases
where they are likely to play a role in algorithmic decision-making. We propose
a set of requirements for methods to detect the potential for vulnerability
modelling, detect whether vulnerable groups are treated differently by a model,
and detect whether a model has created an internal representation of
vulnerability. We conclude that explainable artificial intelligence methods may
be necessary for detecting vulnerability exploitation by machine learning-based
recommendation systems.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2023 13:15:24 GMT"
}
] | 1,673,568,000,000 | [
[
"Strümke",
"Inga",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"Stachl",
"Clemens",
""
]
] |
2301.05041 | Lenaig Cornanguer | L\'ena\"ig Cornanguer (LACODAM, IRISA), Christine Largou\"et (LACODAM,
IRISA), Laurence Roz\'e (LACODAM, IRISA), Alexandre Termier (LACODAM, IRISA) | Persistence-Based Discretization for Learning Discrete Event Systems
from Time Series | null | MLmDS 2023 - AAAI Workshop When Machine Learning meets Dynamical
Systems: Theory and Applications, Feb 2023, Washington (DC), United States.
pp.1-6 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To get a good understanding of a dynamical system, it is convenient to have
an interpretable and versatile model of it. Timed discrete event systems are a
kind of model that respond to these requirements. However, such models can be
inferred from timestamped event sequences but not directly from numerical data.
To solve this problem, a discretization step must be done to identify events or
symbols in the time series. Persist is a discretization method that intends to
create persisting symbols by using a score called persistence score. This
helps to mitigate the risk of undesirable symbol changes that would lead to a
too complex model. After the study of the persistence score, we point out that
it tends to favor excessive cases making it miss interesting persisting
symbols. To correct this behavior, we replace the metric used in the
persistence score, the Kullback-Leibler divergence, with the Wasserstein
distance. Experiments show that the improved persistence score enhances
Persist's ability to capture the information of the original time series and
that it makes it better suited for discrete event systems learning.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2023 14:10:30 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 09:37:25 GMT"
}
] | 1,687,305,600,000 | [
[
"Cornanguer",
"Lénaïg",
"",
"LACODAM, IRISA"
],
[
"Largouët",
"Christine",
"",
"LACODAM,\n IRISA"
],
[
"Rozé",
"Laurence",
"",
"LACODAM, IRISA"
],
[
"Termier",
"Alexandre",
"",
"LACODAM, IRISA"
]
] |
2301.05082 | Ignacio Vellido | Ignacio Vellido, Juan Fdez-Olivares, Ra\'ul P\'erez | Discovering and Explaining Driver Behaviour under HoS Regulations | To be submitted to the Information Fusion journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Worldwide, transport authorities are imposing complex Hours of Service
regulations to drivers, which constraint the amount of working, driving and
resting time when delivering a service. As a consequence, transport companies
are responsible not only of scheduling driving plans aligned with laws that
define the legal behaviour of a driver, but also of monitoring and identifying
as soon as possible problematic patterns that can incur costs due to
sanctions. Transport experts are frequently in charge of many drivers and lack
time to analyse the vast amount of data recorded by the onboard sensors, and
companies have grown accustomed to pay sanctions rather than predict and
forestall wrongdoings. This paper exposes an application for summarising raw
driver activity logs according to these regulations and for explaining driver
behaviour in a human readable format. The system employs planning, constraint,
and clustering techniques to extract and describe what the driver has been
doing while identifying infractions and the activities that originate them.
Furthermore, it groups drivers based on similar driving patterns.
Experimentation on real-world data indicates that recurring driving patterns
can be clustered, from short basic driving sequences to whole drivers' working
days.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2023 15:30:11 GMT"
}
] | 1,673,568,000,000 | [
[
"Vellido",
"Ignacio",
""
],
[
"Fdez-Olivares",
"Juan",
""
],
[
"Pérez",
"Raúl",
""
]
] |
2301.05336 | Hongjun Wang | Hongjun Wang, Zhiwen Zhang, Zipei Fan, Jiyuan Chen, Lingyu Zhang,
Ryosuke Shibasaki, Xuan Song | Multitask Weakly Supervised Learning for Origin Destination Travel Time
Estimation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Travel time estimation from GPS trips is of great importance to order
duration, ridesharing, taxi dispatching, etc. However, the dense trajectory is
not always available due to the limitation of data privacy and acquisition,
while the origin destination (OD) type of data, such as NYC taxi data, NYC bike
data, and Capital Bikeshare data, is more accessible. To address this issue,
this paper starts to estimate the OD trips travel time combined with the road
network. Subsequently, a Multitask Weakly Supervised Learning Framework for
Travel Time Estimation (MWSL TTE) has been proposed to infer transition
probability between road segments, and the travel time on road segments and
intersections simultaneously. Technically, given an OD pair, the transition
probability intends to recover the most likely route. Then, the output travel
time is equal to the summation of all segments' and intersections' travel
times on this route. A novel route recovery function has been proposed to
iteratively maximize the current route's co-occurrence probability, and
minimize the discrepancy between routes' probability distribution and the
inverse distribution of routes' estimation loss. Moreover, the expected log
likelihood function based on a weakly supervised framework has been deployed in
optimizing the travel time from road segments and intersections concurrently.
We conduct experiments on a wide range of real-world taxi datasets in Xi'an and
Chengdu and demonstrate our method's effectiveness on route recovery and travel
time estimation.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 00:11:56 GMT"
}
] | 1,673,827,200,000 | [
[
"Wang",
"Hongjun",
""
],
[
"Zhang",
"Zhiwen",
""
],
[
"Fan",
"Zipei",
""
],
[
"Chen",
"Jiyuan",
""
],
[
"Zhang",
"Lingyu",
""
],
[
"Shibasaki",
"Ryosuke",
""
],
[
"Song",
"Xuan",
""
]
] |
2301.05376 | Chunhui Du | Chunhui Du and Hao He and Yaohui Jin | Contrast with Major Classifier Vectors for Federated Medical Relation
Extraction with Heterogeneous Label Distribution | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated medical relation extraction enables multiple clients to train a
deep network collaboratively without sharing their raw medical data. In order
to handle the heterogeneous label distribution across clients, most of the
existing works only involve enforcing regularization between local and global
models during optimization. In this paper, we fully utilize the models of all
clients and propose a novel concept of \textit{major classifier vectors}, where
a group of class vectors is obtained by an ensemble method rather than by the
weighted averaging method on the server. The major classifier vectors are then distributed
to all clients and the local training of each client is Contrasted with Major
Classifier vectors (FedCMC), so the local model is not prone to overfitting to
the local label distribution. FedCMC requires only a small amount of additional
transfer of classifier parameters without any leakage of raw data, extracted
representations, and label distributions. Our extensive experiments show that
FedCMC outperforms the other state-of-the-art FL algorithms on three medical
relation extraction datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 03:22:07 GMT"
}
] | 1,673,827,200,000 | [
[
"Du",
"Chunhui",
""
],
[
"He",
"Hao",
""
],
[
"Jin",
"Yaohui",
""
]
] |
2301.05412 | Ling Cheng | Ling Cheng, Feida Zhu, Yong Wang, Ruicheng Liang, Huiwen Liu | Evolve Path Tracer: Early Detection of Malicious Addresses in
Cryptocurrency | In Proceedings of the 29th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD23) | null | 10.1145/3580305.3599817 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the ever-increasing boom of cryptocurrency, detecting fraudulent
behaviors and associated malicious addresses has drawn significant research effort.
However, most existing studies still rely on the full history features or
full-fledged address transaction networks, thus cannot meet the requirements of
early malicious address detection, which is urgent but seldom discussed by
existing studies. To detect fraud behaviors of malicious addresses in the early
stage, we present Evolve Path Tracer, which consists of Evolve Path Encoder
LSTM, Evolve Path Graph GCN, and Hierarchical Survival Predictor. Specifically,
in addition to the general address features, we propose asset transfer paths
and corresponding path graphs to characterize early transaction patterns.
Further, since the transaction patterns are changing rapidly during the early
stage, we propose Evolve Path Encoder LSTM and Evolve Path Graph GCN to encode
asset transfer path and path graph under an evolving structure setting.
Hierarchical Survival Predictor then predicts addresses' labels with good
scalability and fast prediction speed. We investigate the effectiveness and
versatility of Evolve Path Tracer on three real-world illicit bitcoin datasets.
Our experimental results demonstrate that Evolve Path Tracer outperforms the
state-of-the-art methods. Extensive scalability experiments demonstrate the
model's adaptivity under a dynamic prediction setting.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 06:59:52 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 12:11:55 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Jun 2023 05:59:42 GMT"
}
] | 1,686,009,600,000 | [
[
"Cheng",
"Ling",
""
],
[
"Zhu",
"Feida",
""
],
[
"Wang",
"Yong",
""
],
[
"Liang",
"Ruicheng",
""
],
[
"Liu",
"Huiwen",
""
]
] |
2301.05433 | Alon Jacovi | Alon Jacovi | Trends in Explainable AI (XAI) Literature | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The XAI literature is decentralized, both in terminology and in publication
venues, but in recent years the community has converged around keywords that
make it possible to discover papers automatically with greater reliability. We
use keyword search via the Semantic Scholar API and manual curation to collect a
well-formatted and reasonably comprehensive set of 5199 XAI papers, available
at https://github.com/alonjacovi/XAI-Scholar . We use this collection to
clarify and visualize trends about the size and scope of the literature,
citation trends, cross-field trends, and collaboration trends. Overall, XAI is
becoming increasingly multidisciplinary, with relative growth in papers
belonging to increasingly diverse (non-CS) scientific fields, increasing
cross-field collaborative authorship, and increasing cross-field citation activity.
The collection can additionally be used as a paper discovery engine, by
retrieving XAI literature which is cited according to specific constraints (for
example, papers that are influential outside of their field, or influential to
non-XAI research).
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 08:36:56 GMT"
}
] | 1,673,827,200,000 | [
[
"Jacovi",
"Alon",
""
]
] |
2301.05535 | Abdul Sittar | Abdul Sittar, Dunja Mladenic | Using the profile of publishers to predict barriers across news articles | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Detection of news propagation barriers, whether economic, cultural,
political, time-zonal, or geographical, is still an open research issue. We
present an approach to barrier detection in news spreading by utilizing
Wikipedia-concepts and metadata associated with each barrier. Solving this
problem can not only convey information about the coverage of an event but
also show whether the event has been able to cross a specific barrier.
Experimental results on the IPoNews dataset (a dataset for information
spreading over news) reveal that simple classification models are able to
detect barriers with high accuracy. We believe that our approach can provide
useful insights which pave the way for the future development of a system for
predicting information spreading barriers over the news.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 13:32:42 GMT"
}
] | 1,673,827,200,000 | [
[
"Sittar",
"Abdul",
""
],
[
"Mladenic",
"Dunja",
""
]
] |
2301.05608 | Nils Wilken | Nils Wilken, Lea Cohausz, Johannes Schaum, Stefan L\"udtke and Heiner
Stuckenschmidt | Investigating the Combination of Planning-Based and Data-Driven Methods
for Goal Recognition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | An important feature of pervasive, intelligent assistance systems is the
ability to dynamically adapt to the current needs of their users. Hence, it is
critical for such systems to be able to recognize those goals and needs based
on observations of the user's actions and state of the environment. In this
work, we investigate the application of two state-of-the-art, planning-based
plan recognition approaches in a real-world setting. So far, these approaches
were only evaluated in artificial settings in combination with agents that act
perfectly rational. We show that such approaches have difficulties when used to
recognize the goals of human subjects, because human behaviour is typically not
perfectly rational. To overcome this issue, we propose an extension to the
existing approaches through a classification-based method trained on observed
behaviour data. We empirically show that the proposed extension not only
outperforms the purely planning-based and purely data-driven goal recognition
methods but is also able to recognize the correct goal more reliably,
especially when only a small number of observations were seen. This
substantially improves the usefulness of hybrid goal recognition approaches for
intelligent assistance systems, as recognizing a goal early opens much more
possibilities for supportive reactions of the system.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 15:24:02 GMT"
}
] | 1,673,827,200,000 | [
[
"Wilken",
"Nils",
""
],
[
"Cohausz",
"Lea",
""
],
[
"Schaum",
"Johannes",
""
],
[
"Lüdtke",
"Stefan",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
2301.05893 | Fabio Massimo Zennaro | Fabio Massimo Zennaro, M\'at\'e Dr\'avucz, Geanina Apachitei, W.
Dhammika Widanage, Theodoros Damoulas | Jointly Learning Consistent Causal Abstractions Over Multiple
Interventional Distributions | 12 pages, 21 pages appendix, 6 figures, CLeaR (Causal Learning and
Reasoning) 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An abstraction can be used to relate two structural causal models
representing the same system at different levels of resolution. Learning
abstractions which guarantee consistency with respect to interventional
distributions would allow one to jointly reason about evidence across multiple
levels of granularity while respecting the underlying cause-effect
relationships. In this paper, we introduce a first framework for causal
abstraction learning between SCMs based on the formalization of abstraction
recently proposed by Rischel (2020). Based on that, we propose a differentiable
programming solution that jointly solves a number of combinatorial
sub-problems, and we study its performance and benefits against independent and
sequential approaches on synthetic settings and on a challenging real-world
problem related to electric vehicle battery manufacturing.
| [
{
"version": "v1",
"created": "Sat, 14 Jan 2023 11:22:16 GMT"
},
{
"version": "v2",
"created": "Sun, 7 May 2023 19:10:47 GMT"
}
] | 1,683,590,400,000 | [
[
"Zennaro",
"Fabio Massimo",
""
],
[
"Drávucz",
"Máté",
""
],
[
"Apachitei",
"Geanina",
""
],
[
"Widanage",
"W. Dhammika",
""
],
[
"Damoulas",
"Theodoros",
""
]
] |
2301.06141 | Isma\"il Baaj | Isma\"il Baaj | Max-min Learning of Approximate Weight Matrices from Fuzzy Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this article, we study the approximate solutions set $\Lambda_b$ of an
inconsistent system of $\max-\min$ fuzzy relational equations $(S): A
\Box_{\min}^{\max}x =b$. Using the $L_\infty$ norm, we compute by an explicit
analytical formula the Chebyshev distance $\Delta~=~\inf_{c \in \mathcal{C}}
\Vert b -c \Vert$, where $\mathcal{C}$ is the set of second members of the
consistent systems defined with the same matrix $A$. We study the set
$\mathcal{C}_b$ of Chebyshev approximations of the second member $b$ i.e.,
vectors $c \in \mathcal{C}$ such that $\Vert b -c \Vert = \Delta$, which is
associated to the approximate solutions set $\Lambda_b$ in the following sense:
an element of the set $\Lambda_b$ is a solution vector $x^\ast$ of a system $A
\Box_{\min}^{\max}x =c$ where $c \in \mathcal{C}_b$. As main results, we
describe both the structure of the set $\Lambda_b$ and that of the set
$\mathcal{C}_b$. We then introduce a paradigm for $\max-\min$ learning weight
matrices that relates input and output data from training data. The learning
error is expressed in terms of the $L_\infty$ norm. We compute by an explicit
formula the minimal value of the learning error according to the training data.
We give a method to construct weight matrices whose learning error is minimal,
that we call approximate weight matrices.
Finally, as an application of our results, we show how to learn approximately
the rule parameters of a possibilistic rule-based system according to multiple
training data.
| [
{
"version": "v1",
"created": "Sun, 15 Jan 2023 16:48:30 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jan 2023 16:10:50 GMT"
}
] | 1,674,518,400,000 | [
[
"Baaj",
"Ismaïl",
""
]
] |
2301.06387 | Xingzhou Lou | Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang, Yali Du | PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI
Coordination | Accepted by AAMAS 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot human-AI coordination holds the promise of collaborating with
humans without human data. Prevailing methods try to train the ego agent with a
population of partners via self-play. However, these methods suffer from two
problems: 1) The diversity of a population with finite partners is limited,
thereby limiting the capacity of the trained ego agent to collaborate with a
novel human; 2) Current methods only provide a common best response for every
partner in the population, which may result in poor zero-shot coordination
performance with a novel partner or humans. To address these issues, we first
propose the policy ensemble method to increase the diversity of partners in the
population, and then develop a context-aware method enabling the ego agent to
analyze and identify the partner's potential policy primitives so that it can
take different actions accordingly. In this way, the ego agent is able to learn
more universal cooperative behaviors for collaborating with diverse partners.
We conduct experiments on the Overcooked environment, and evaluate the
zero-shot human-AI coordination performance of our method with both
behavior-cloned human proxies and real humans. The results demonstrate that our
method significantly increases the diversity of partners and enables ego agents
to learn more diverse behaviors than baselines, thus achieving state-of-the-art
performance in all scenarios. We also open-source a human-AI coordination study
framework on Overcooked to facilitate future studies.
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2023 12:14:58 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 15:18:47 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Feb 2023 17:04:23 GMT"
},
{
"version": "v4",
"created": "Mon, 22 May 2023 13:04:03 GMT"
}
] | 1,684,800,000,000 | [
[
"Lou",
"Xingzhou",
""
],
[
"Guo",
"Jiaxian",
""
],
[
"Zhang",
"Junge",
""
],
[
"Wang",
"Jun",
""
],
[
"Huang",
"Kaiqi",
""
],
[
"Du",
"Yali",
""
]
] |
2301.06845 | Sander Beckers | Sander Beckers, Joseph Y. Halpern, and Christopher Hitchcock | Causal Models with Constraints | Accepted at CLeaR 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Causal models have proven extremely useful in offering formal representations
of causal relationships between a set of variables. Yet in many situations,
there are non-causal relationships among variables. For example, we may want
variables $LDL$, $HDL$, and $TOT$ that represent the level of low-density
lipoprotein cholesterol, the level of high-density lipoprotein cholesterol,
and the total cholesterol level, with the relation $LDL+HDL=TOT$. This
cannot be done in standard causal models, because standard models allow us to
intervene simultaneously on all three variables, violating the constraint. The
goal of this paper is to extend
standard causal models to allow for constraints on settings of variables.
Although the extension is relatively straightforward, to make it useful we have
to define a new intervention operation that $disconnects$ a variable from a
causal equation. We give examples showing the usefulness of this extension, and
provide a sound and complete axiomatization for causal models with constraints.
| [
{
"version": "v1",
"created": "Tue, 17 Jan 2023 12:43:46 GMT"
}
] | 1,674,000,000,000 | [
[
"Beckers",
"Sander",
""
],
[
"Halpern",
"Joseph Y.",
""
],
[
"Hitchcock",
"Christopher",
""
]
] |
2301.07345 | Irfansha Shaik | Irfansha Shaik, Valentin Mayer-Eichberger, Jaco van de Pol, Abdallah
Saffidine | Implicit State and Goals in QBF Encodings for Positional Games (extended
version) | 11 pages (including appendix), 5 figures and 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We address two bottlenecks for concise QBF encodings of maker-breaker
positional games, like Hex and Tic-Tac-Toe. Our baseline is a QBF encoding with
explicit variables for board positions and an explicit representation of
winning configurations. The first improvement is inspired by lifted planning
and avoids variables for explicit board positions, introducing a universal
quantifier representing a symbolic board state. The second improvement
represents the winning configurations implicitly, exploiting their structure.
The paper evaluates the size of several encodings, depending on board size and
game depth. It also reports the performance of QBF solvers on these encodings.
We evaluate the techniques on Hex instances and also apply them to Harary's
Tic-Tac-Toe. In particular, we study scalability to 19$\times$19 boards, played
in human Hex tournaments.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2023 07:28:41 GMT"
}
] | 1,674,086,400,000 | [
[
"Shaik",
"Irfansha",
""
],
[
"Mayer-Eichberger",
"Valentin",
""
],
[
"van de Pol",
"Jaco",
""
],
[
"Saffidine",
"Abdallah",
""
]
] |
2301.07427 | Martina Cinquini | Martina Cinquini, Fosca Giannotti, Riccardo Guidotti | Boosting Synthetic Data Generation with Effective Nonlinear Causal
Discovery | null | null | 10.1109/CogMI52975.2021.00016 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Synthetic data generation has been widely adopted in software testing, data
privacy, imbalanced learning, and artificial intelligence explanation. In all
such contexts, it is crucial to generate plausible data samples. A common
assumption of approaches widely used for data generation is the independence of
the features. However, typically, the variables of a dataset depend on one
another, and these dependencies are not considered in data generation, leading
to the creation of implausible records. The main problem is that dependencies
among variables are typically unknown. In this paper, we design a synthetic
dataset generator for tabular data that can discover nonlinear causalities
among the variables and use them at generation time. State-of-the-art methods
for nonlinear causal discovery are typically inefficient. We boost them by
restricting the causal discovery among the features appearing in the frequent
patterns efficiently retrieved by a pattern mining algorithm. We design a
framework for generating synthetic datasets with known causalities to validate
our proposal. Broad experimentation on many synthetic and real datasets with
known causalities shows the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2023 10:54:06 GMT"
}
] | 1,674,086,400,000 | [
[
"Cinquini",
"Martina",
""
],
[
"Giannotti",
"Fosca",
""
],
[
"Guidotti",
"Riccardo",
""
]
] |
2301.07629 | David Cerna | David M. Cerna and Andrew Cropper | Generalisation Through Negation and Predicate Invention | Accepted at AAAI-24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to generalise from a small number of examples is a fundamental
challenge in machine learning. To tackle this challenge, we introduce an
inductive logic programming (ILP) approach that combines negation and predicate
invention. Combining these two features allows an ILP system to generalise
better by learning rules with universally quantified body-only variables. We
implement our idea in NOPI, which can learn normal logic programs with
predicate invention, including Datalog programs with stratified negation. Our
experimental results on multiple domains show that our approach can improve
predictive accuracies and learning times.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2023 16:12:27 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 07:15:48 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Dec 2023 09:21:34 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Dec 2023 10:38:51 GMT"
}
] | 1,703,808,000,000 | [
[
"Cerna",
"David M.",
""
],
[
"Cropper",
"Andrew",
""
]
] |
2301.07636 | Minrui Xu | Minrui Xu, Dusit Niyato, Hongliang Zhang, Jiawen Kang, Zehui Xiong,
Shiwen Mao, and Zhu Han | Generative AI-empowered Effective Physical-Virtual Synchronization in
the Vehicular Metaverse | arXiv admin note: text overlap with arXiv:2211.06838 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Metaverse seamlessly blends the physical world and virtual space via
ubiquitous communication and computing infrastructure. In transportation
systems, the vehicular Metaverse can provide a fully-immersive and hyperreal
traveling experience (e.g., via augmented reality head-up displays, AR-HUDs) to
drivers and users in autonomous vehicles (AVs) via roadside units (RSUs).
However, provisioning real-time and immersive services necessitates effective
physical-virtual synchronization between physical and virtual entities, i.e.,
AVs and Metaverse AR recommenders (MARs). In this paper, we propose a
generative AI-empowered physical-virtual synchronization framework for the
vehicular Metaverse. In physical-to-virtual synchronization, digital twin (DT)
tasks generated by AVs are offloaded for execution in RSU with future route
generation. In virtual-to-physical synchronization, MARs customize diverse and
personal AR recommendations via generative AI models based on user preferences.
Furthermore, we propose a multi-task enhanced auction-based mechanism to match
and price AVs and MARs for RSUs to provision real-time and effective services.
Finally, property analysis and experimental results demonstrate that the
proposed mechanism is strategy-proof and adverse-selection free while
increasing social surplus by 50%.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2023 16:25:42 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Jan 2023 04:15:41 GMT"
}
] | 1,674,172,800,000 | [
[
"Xu",
"Minrui",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Zhang",
"Hongliang",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Mao",
"Shiwen",
""
],
[
"Han",
"Zhu",
""
]
] |
2301.07835 | Paritosh Verma | Paritosh Verma, Shresth Verma, Aditya Mate, Aparna Taneja, Milind
Tambe | Decision-Focused Evaluation: Analyzing Performance of Deployed Restless
Multi-Arm Bandits | 11 pages, 3 figures, AI for Social Good Workshop (AAAI'23) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Restless multi-arm bandits (RMABs) are a popular decision-theoretic framework
that has been used to model real-world sequential decision making problems in
public health, wildlife conservation, communication systems, and beyond.
Deployed RMAB systems typically operate in two stages: the first predicts the
unknown parameters defining the RMAB instance, and the second employs an
optimization algorithm to solve the constructed RMAB instance.
In this work we provide and analyze the results from a first-of-its-kind
deployment of an RMAB system in the public health domain, aimed at improving
maternal and child health. Our analysis focuses on understanding the
relationship between prediction accuracy and the overall performance of
deployed RMAB systems. This is crucial for determining the value of investing
in improved predictive accuracy for better final system performance, and is
useful for diagnosing and monitoring deployed RMAB systems.
Using real-world data from our deployed RMAB system, we demonstrate that an
improvement in overall prediction accuracy may even be accompanied by a
degradation in the performance of RMAB system -- a broad investment of
resources to improve overall prediction accuracy may not yield expected
results. Following this, we develop decision-focused evaluation metrics to
evaluate the predictive component and show that it is better at explaining
(both empirically and theoretically) the overall performance of a deployed RMAB
system.
| [
{
"version": "v1",
"created": "Thu, 19 Jan 2023 01:04:55 GMT"
}
] | 1,674,172,800,000 | [
[
"Verma",
"Paritosh",
""
],
[
"Verma",
"Shresth",
""
],
[
"Mate",
"Aditya",
""
],
[
"Taneja",
"Aparna",
""
],
[
"Tambe",
"Milind",
""
]
] |
2301.07894 | Dong-Kyun Han | Dong-Kyun Han, Dong-Young Kim, Geun-Deok Jang | Subject-Independent Brain-Computer Interfaces with Open-Set Subject
Recognition | Submitted to 2023 11th IEEE International Winter Conference on
Brain-Computer Interface | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A brain-computer interface (BCI) can't be effectively used since
electroencephalography (EEG) varies between and within subjects. BCI systems
require calibration steps to adjust the model to subject-specific data. It is
widely acknowledged that this is a major obstacle to the development of BCIs.
To address this issue, previous studies have trained a generalized model by
removing the subjects' information. In contrast, in this work, we introduce a
style information encoder as an auxiliary task that classifies various source
domains and recognizes open-set domains. An open-set recognition method was
used as an auxiliary task to learn subject-related style information from the
source subjects, while at the same time helping the shared feature extractor
map features from an unseen target domain. This paper compares various OSR methods within an
open-set subject recognition (OSSR) framework. As a result of our experiments,
we found that the OSSR auxiliary network that encodes domain information
improves generalization performance.
| [
{
"version": "v1",
"created": "Thu, 19 Jan 2023 05:48:05 GMT"
}
] | 1,674,172,800,000 | [
[
"Han",
"Dong-Kyun",
""
],
[
"Kim",
"Dong-Young",
""
],
[
"Jang",
"Geun-Deok",
""
]
] |
2301.08025 | Wenjun Li | Wenjun Li, Pradeep Varakantham, Dexun Li | Generalization through Diversity: Improving Unsupervised Environment
Design | 9 pages | 2023; Proceedings of the Thirty-Second International Joint
Conference on Artificial Intelligence (IJCAI-23); Page 5411-5419, | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Agent decision making using Reinforcement Learning (RL) heavily relies on
either a model or simulator of the environment (e.g., moving in an 8x8 maze
with three rooms, playing Chess on an 8x8 board). Due to this dependence, small
changes in the environment (e.g., positions of obstacles in the maze, size of
the board) can severely affect the effectiveness of the policy learned by the
agent. To that end, existing work has proposed training RL agents on an
adaptive curriculum of environments (generated automatically) to improve
performance on out-of-distribution (OOD) test scenarios. Specifically, existing
research has employed the potential for the agent to learn in an environment
(captured using Generalized Advantage Estimation, GAE) as the key factor to
select the next environment(s) to train the agent. However, such a mechanism
can select similar environments (with a high potential to learn) thereby making
agent training redundant on all but one of those environments. To that end, we
provide a principled approach to adaptively identify diverse environments based
on a novel distance measure relevant to environment design. We empirically
demonstrate the versatility and effectiveness of our method in comparison to
multiple leading approaches for unsupervised environment design on three
distinct benchmark problems used in literature.
| [
{
"version": "v1",
"created": "Thu, 19 Jan 2023 11:55:47 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 03:27:44 GMT"
}
] | 1,695,168,000,000 | [
[
"Li",
"Wenjun",
""
],
[
"Varakantham",
"Pradeep",
""
],
[
"Li",
"Dexun",
""
]
] |
2301.08490 | Sven Pieper | Sven Pieper, Carl Willy Mehling, Dominik Hirsch, Tobias L\"uke and
Steffen Ihlenfeldt | causalgraph: A Python Package for Modeling, Persisting and Visualizing
Causal Graphs Embedded in Knowledge Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper describes a novel Python package, named causalgraph, for modeling
and saving causal graphs embedded in knowledge graphs. The package has been
designed to provide an interface between causal disciplines such as causal
discovery and causal inference. With this package, users can create and save
causal graphs and export the generated graphs for use in other graph-based
packages. The main advantage of the proposed package is its ability to
facilitate the linking of additional information and metadata to causal
structures. In addition, the package offers a variety of functions for graph
modeling and plotting, such as editing, adding, and deleting nodes and edges.
It is also compatible with widely used graph data science libraries such as
NetworkX and Tigramite and incorporates a specially developed causalgraph
ontology in the background. This paper provides an overview of the package's
main features, functionality, and usage examples, enabling the reader to use
the package effectively in practice.
| [
{
"version": "v1",
"created": "Fri, 20 Jan 2023 09:36:32 GMT"
}
] | 1,674,432,000,000 | [
[
"Pieper",
"Sven",
""
],
[
"Mehling",
"Carl Willy",
""
],
[
"Hirsch",
"Dominik",
""
],
[
"Lüke",
"Tobias",
""
],
[
"Ihlenfeldt",
"Steffen",
""
]
] |
2301.08509 | Hiroyuki Kido | Hiroyuki Kido | Generative Logic with Time: Beyond Logical Consistency and Statistical
Possibility | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper gives a simple theory of inference for logically reasoning about
symbolic knowledge fully from data over time. We take a Bayesian approach to model how
data causes symbolic knowledge. Probabilistic reasoning with symbolic knowledge
is modelled as a process of traversing this causality forwards and backwards. The
forward and backward processes correspond to an interpretation and an inverse
interpretation of formal logic, respectively. The theory is applied to a
localisation problem to show that a robot with broken or noisy sensors can
efficiently solve the problem in a fully data-driven fashion.
| [
{
"version": "v1",
"created": "Fri, 20 Jan 2023 10:55:49 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 13:19:37 GMT"
}
] | 1,678,924,800,000 | [
[
"Kido",
"Hiroyuki",
""
]
] |
2301.08608 | Nikolai K\"afer | Christel Baier and Clemens Dubslaff and Holger Hermanns and Nikolai
K\"afer | On the Foundations of Cycles in Bayesian Networks | Full version with an appendix containing the proofs | Principles of Systems Design. Lecture Notes in Computer Science,
vol 13660, pp 343-363, 2022 | 10.1007/978-3-031-22337-2_17 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian networks (BNs) are a probabilistic graphical model widely used for
representing expert knowledge and reasoning under uncertainty. Traditionally,
they are based on directed acyclic graphs that capture dependencies between
random variables. However, directed cycles can naturally arise when
cross-dependencies between random variables exist, e.g., for modeling feedback
loops. Existing methods to deal with such cross-dependencies usually rely on
reductions to BNs without cycles. These approaches are fragile to generalize,
since their justifications are intermingled with additional knowledge about the
application context. In this paper, we present a foundational study regarding
semantics for cyclic BNs that are generic and conservatively extend the
cycle-free setting. First, we propose constraint-based semantics that specify
requirements for full joint distributions over a BN to be consistent with the
local conditional probabilities and independencies. Second, two kinds of limit
semantics that formalize infinite unfolding approaches are introduced and shown
to be computable by a Markov chain construction.
| [
{
"version": "v1",
"created": "Fri, 20 Jan 2023 14:40:17 GMT"
}
] | 1,674,432,000,000 | [
[
"Baier",
"Christel",
""
],
[
"Dubslaff",
"Clemens",
""
],
[
"Hermanns",
"Holger",
""
],
[
"Käfer",
"Nikolai",
""
]
] |
2301.08687 | Pavel Surynek | Pavel Surynek | Counterexample Guided Abstraction Refinement with Non-Refined
Abstractions for Multi-Agent Path Finding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counterexample guided abstraction refinement (CEGAR) represents a powerful
symbolic technique for various tasks such as model checking and reachability
analysis. Recently, CEGAR combined with Boolean satisfiability (SAT) has been
applied for multi-agent path finding (MAPF), a problem where the task is to
navigate agents from their start positions to given individual goal positions
so that the agents do not collide with each other.
The recent CEGAR approach used the initial abstraction of the MAPF problem
where collisions between agents were omitted and were eliminated in subsequent
abstraction refinements. We propose in this work a novel CEGAR-style solver for
MAPF based on SAT in which some abstractions are deliberately left non-refined.
This adds the necessity to post-process the answers obtained from the
underlying SAT solver as these answers slightly differ from the correct MAPF
solutions. Non-refining however yields order-of-magnitude smaller SAT encodings
than those of the previous approach and speeds up the overall solving process
making the SAT-based solver for MAPF competitive again in relevant benchmarks.
| [
{
"version": "v1",
"created": "Fri, 20 Jan 2023 17:18:49 GMT"
}
] | 1,674,432,000,000 | [
[
"Surynek",
"Pavel",
""
]
] |
2301.09723 | Ernest Davis | Ernest Davis | Mathematics, word problems, common sense, and artificial intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The paper discusses the capacities and limitations of current artificial
intelligence (AI) technology to solve word problems that combine elementary
knowledge with commonsense reasoning. No existing AI systems can solve these
reliably. We review three approaches that have been developed, using AI natural
language technology: outputting the answer directly, outputting a computer
program that solves the problem, and outputting a formalized representation
that can be input to an automated theorem verifier. We review some benchmarks
that have been developed to evaluate these systems and some experimental
studies. We discuss the limitations of the existing technology at solving these
kinds of problems. We argue that it is not clear whether these kinds of
limitations will be important in developing AI technology for pure mathematical
research, but that they will be important in applications of mathematics, and
may well be important in developing programs capable of reading and
understanding mathematical content written by humans.
| [
{
"version": "v1",
"created": "Mon, 23 Jan 2023 21:21:39 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 01:24:25 GMT"
}
] | 1,674,691,200,000 | [
[
"Davis",
"Ernest",
""
]
] |
2301.09770 | Prasoon Goyal | Prasoon Goyal, Raymond J. Mooney, Scott Niekum | Language-guided Task Adaptation for Imitation Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel setting, wherein an agent needs to learn a task from a
demonstration of a related task with the difference between the tasks
communicated in natural language. The proposed setting allows reusing
demonstrations from other tasks, by providing low effort language descriptions,
and can also be used to provide feedback to correct agent errors, which are
both important desiderata for building intelligent agents that assist humans in
daily tasks. To enable progress in this proposed setting, we create two
benchmarks -- Room Rearrangement and Room Navigation -- that cover a diverse
set of task adaptations. Further, we propose a framework that uses a
transformer-based model to reason about the entities in the tasks and their
relationships, to learn a policy for the target task.
| [
{
"version": "v1",
"created": "Tue, 24 Jan 2023 00:56:43 GMT"
}
] | 1,674,604,800,000 | [
[
"Goyal",
"Prasoon",
""
],
[
"Mooney",
"Raymond J.",
""
],
[
"Niekum",
"Scott",
""
]
] |
2301.10034 | Shaofei Cai | Shaofei Cai, Zihao Wang, Xiaojian Ma, Anji Liu, Yitao Liang | Open-World Multi-Task Control Through Goal-Aware Representation Learning
and Adaptive Horizon Prediction | This paper is accepted by CVPR2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study the problem of learning goal-conditioned policies in Minecraft, a
popular, widely accessible yet challenging open-ended environment for
developing human-level multi-task agents. We first identify two main challenges
of learning such policies: 1) the indistinguishability of tasks from the state
distribution, due to the vast scene diversity, and 2) the non-stationary nature
of environment dynamics caused by partial observability. To tackle the first
challenge, we propose Goal-Sensitive Backbone (GSB) for the policy to encourage
the emergence of goal-relevant visual state representations. To tackle the
second challenge, the policy is further fueled by an adaptive horizon
prediction module that helps alleviate the learning uncertainty brought by the
non-stationary dynamics. Experiments on 20 Minecraft tasks show that our method
significantly outperforms the best baseline so far; in many of them, we double
the performance. Our ablation and exploratory studies then explain how our
approach beat the counterparts and also unveil the surprising bonus of
zero-shot generalization to new scenes (biomes). We hope our agent could help
shed some light on learning goal-conditioned, multi-task agents in challenging,
open-ended environments like Minecraft.
| [
{
"version": "v1",
"created": "Sat, 21 Jan 2023 08:15:38 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 14:12:52 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Oct 2023 12:59:56 GMT"
}
] | 1,697,414,400,000 | [
[
"Cai",
"Shaofei",
""
],
[
"Wang",
"Zihao",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Liu",
"Anji",
""
],
[
"Liang",
"Yitao",
""
]
] |
2301.10079 | Mauro Vallati | Diaeddin Alarnaouti and George Baryannis and Mauro Vallati | Reformulation Techniques for Automated Planning: A Systematic Review | Accepted and to appear in The Knowledge Engineering Review (KER),
2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automated planning is a prominent area of Artificial Intelligence, and an
important component for intelligent autonomous agents. A cornerstone of
domain-independent planning is the separation between planning logic, i.e. the
automated reasoning side, and the knowledge model, that encodes a formal
representation of domain knowledge needed to reason upon a given problem to
synthesise a solution plan. Such a separation enables the use of reformulation
techniques, which transform how a model is represented in order to improve the
efficiency of plan generation. Over the past decades, significant research
effort has been devoted to the design of reformulation techniques. In this
paper, we present a systematic review of the large body of work on
reformulation techniques for classical planning, aiming to provide a holistic
view of the field and to foster future research in the area. As a tangible
outcome, we provide a qualitative comparison of the existing classes of
techniques, that can help researchers gain an overview of their strengths and
weaknesses.
| [
{
"version": "v1",
"created": "Tue, 24 Jan 2023 15:33:37 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 10:04:02 GMT"
}
] | 1,675,123,200,000 | [
[
"Alarnaouti",
"Diaeddin",
""
],
[
"Baryannis",
"George",
""
],
[
"Vallati",
"Mauro",
""
]
] |
2301.10280 | Carlos N\'u\~nez Molina | Carlos N\'u\~nez-Molina, Pablo Mesejo, Juan Fern\'andez-Olivares | NeSIG: A Neuro-Symbolic Method for Learning to Generate Planning
Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of Automated Planning there is often the need for a set of
planning problems from a particular domain, e.g., to be used as training data
for Machine Learning or as benchmarks in planning competitions. In most cases,
these problems are created either by hand or by a domain-specific generator,
putting a burden on the human designers. In this paper we propose NeSIG, to the
best of our knowledge the first domain-independent method for automatically
generating planning problems that are valid, diverse and difficult to solve. We
formulate problem generation as a Markov Decision Process and train two
generative policies with Deep Reinforcement Learning to generate problems with
the desired properties. We conduct experiments on several classical domains,
comparing our method with handcrafted domain-specific generators that generate
valid and diverse problems but do not optimize difficulty. The results show
NeSIG is able to automatically generate valid problems of greater difficulty
than the competitor approaches, while maintaining good diversity.
| [
{
"version": "v1",
"created": "Tue, 24 Jan 2023 19:37:59 GMT"
}
] | 1,674,691,200,000 | [
[
"Núñez-Molina",
"Carlos",
""
],
[
"Mesejo",
"Pablo",
""
],
[
"Fernández-Olivares",
"Juan",
""
]
] |
2301.10289 | Xinghua Lou | Ken Kansky, Skanda Vaidyanath, Scott Swingle, Xinghua Lou, Miguel
Lazaro-Gredilla, Dileep George | PushWorld: A benchmark for manipulation planning with tools and movable
obstacles | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While recent advances in artificial intelligence have achieved human-level
performance in environments like Starcraft and Go, many physical reasoning
tasks remain challenging for modern algorithms. To date, few algorithms have
been evaluated on physical tasks that involve manipulating objects when movable
obstacles are present and when tools must be used to perform the manipulation.
To promote research on such tasks, we introduce PushWorld, an environment with
simplistic physics that requires manipulation planning with both movable
obstacles and tools. We provide a benchmark of more than 200 PushWorld puzzles
in PDDL and in an OpenAI Gym environment. We evaluate state-of-the-art
classical planning and reinforcement learning algorithms on this benchmark, and
we find that these baseline results are below human-level performance. We then
provide a new classical planning heuristic that solves the most puzzles among
the baselines, and although it is 40 times faster than the best baseline
planner, it remains below human-level performance.
| [
{
"version": "v1",
"created": "Tue, 24 Jan 2023 20:20:17 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 18:16:19 GMT"
}
] | 1,675,296,000,000 | [
[
"Kansky",
"Ken",
""
],
[
"Vaidyanath",
"Skanda",
""
],
[
"Swingle",
"Scott",
""
],
[
"Lou",
"Xinghua",
""
],
[
"Lazaro-Gredilla",
"Miguel",
""
],
[
"George",
"Dileep",
""
]
] |
2301.10571 | Nils Wilken | Nils Wilken, Lea Cohausz, Johannes Schaum, Stefan L\"udtke, Christian
Bartelt and Heiner Stuckenschmidt | Leveraging Planning Landmarks for Hybrid Online Goal Recognition | 9 pages. Presented at SPARK 2022
(https://icaps22.icaps-conference.org/workshops/SPARK/) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Goal recognition is an important problem in many application domains (e.g.,
pervasive computing, intrusion detection, computer games, etc.). In many
application scenarios it is important that goal recognition algorithms can
recognize goals of an observed agent as fast as possible and with minimal
domain knowledge. Hence, in this paper, we propose a hybrid method for online
goal recognition that combines a symbolic planning landmark based approach and
a data-driven goal recognition approach and evaluate it in a real-world cooking
scenario. The empirical results show that the proposed method is not only
significantly more efficient in terms of computation time than the
state-of-the-art but also improves goal recognition performance. Furthermore,
we show that the utilized planning landmark based approach, which was so far
only evaluated on artificial benchmark domains, achieves also good recognition
performance when applied to a real-world cooking scenario.
| [
{
"version": "v1",
"created": "Wed, 25 Jan 2023 13:21:30 GMT"
}
] | 1,674,691,200,000 | [
[
"Wilken",
"Nils",
""
],
[
"Cohausz",
"Lea",
""
],
[
"Schaum",
"Johannes",
""
],
[
"Lüdtke",
"Stefan",
""
],
[
"Bartelt",
"Christian",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
2301.10823 | Stefan Sarkadi | Peter R. Lewis and Stefan Sarkadi | Reflective Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) is about making computers that do the sorts of
things that minds can do, and as we progress towards this goal, we tend to
increasingly delegate human tasks to machines. However, AI systems usually do
these tasks with an unusual imbalance of insight and understanding: new, deeper
insights are present, yet many important qualities that a human mind would have
previously brought to the activity are utterly absent. Therefore, it is crucial
to ask which features of minds we have replicated, which are missing, and if
that matters. One core feature that humans bring to tasks, when dealing with
the ambiguity, emergent knowledge, and social context presented by the world,
is reflection. Yet this capability is utterly missing from current mainstream
AI. In this paper we ask what reflective AI might look like. Then, drawing on
notions of reflection in complex systems, cognitive science, and agents, we
sketch an architecture for reflective AI agents, and highlight ways forward.
| [
{
"version": "v1",
"created": "Wed, 25 Jan 2023 20:50:26 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 10:15:15 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Apr 2023 08:51:09 GMT"
}
] | 1,682,640,000,000 | [
[
"Lewis",
"Peter R.",
""
],
[
"Sarkadi",
"Stefan",
""
]
] |
2301.10927 | Asjad Khan | Asjad Khan, Arsal Huda, Aditya Ghose, Hoa Khanh Dam | Towards Knowledge-Centric Process Mining | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Process analytic approaches play a critical role in supporting the practice
of business process management and continuous process improvement by leveraging
process-related data to identify performance bottlenecks, extracting insights
about reducing costs and optimizing the utilization of available resources.
Process analytic techniques often have to contend with real-world settings
where available logs are noisy or incomplete. In this paper we present an
approach that permits process analytics techniques to deliver value in the face
of noisy/incomplete event logs. Our approach leverages knowledge graphs to
mitigate the effects of noise in event logs while supporting process analysts
in understanding variability associated with event logs.
| [
{
"version": "v1",
"created": "Thu, 26 Jan 2023 04:23:04 GMT"
}
] | 1,674,777,600,000 | [
[
"Khan",
"Asjad",
""
],
[
"Huda",
"Arsal",
""
],
[
"Ghose",
"Aditya",
""
],
[
"Dam",
"Hoa Khanh",
""
]
] |
2301.11047 | June Sallou | Roberto Verdecchia and June Sallou and Lu\'is Cruz | A Systematic Review of Green AI | Journal WIREs Data Mining and Knowledge Discovery. 16 pages, 12
figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the ever-growing adoption of AI-based systems, the carbon footprint of
AI is no longer negligible. AI researchers and practitioners are therefore
urged to hold themselves accountable for the carbon emissions of the AI models
they design and use. This has led in recent years to the emergence of research
tackling AI environmental sustainability, a field referred to as Green AI.
Despite the rapid growth of interest in the topic, a comprehensive overview of
Green AI research is to date still missing. To address this gap, in this paper,
we present a systematic review of the Green AI literature. From the analysis of
98 primary studies, different patterns emerge. The topic experienced a
considerable growth from 2020 onward. Most studies consider monitoring AI model
footprint, tuning hyperparameters to improve model sustainability, or
benchmarking models. A mix of position papers, observational studies, and
solution papers are present. Most papers focus on the training phase, are
algorithm-agnostic or study neural networks, and use image data. Laboratory
experiments are the most common research strategy. Reported Green AI energy
savings go up to 115%, with savings over 50% being rather common. Industrial
parties are involved in Green AI studies, albeit most target academic readers.
Green AI tool provisioning is scarce. As a conclusion, the Green AI research
field appears to have reached a considerable level of maturity. This review
therefore suggests that the time is right to adopt other Green AI research
strategies, and to port the numerous promising academic results to industrial
practice.
| [
{
"version": "v1",
"created": "Thu, 26 Jan 2023 11:41:46 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 12:47:06 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 07:49:02 GMT"
}
] | 1,683,504,000,000 | [
[
"Verdecchia",
"Roberto",
""
],
[
"Sallou",
"June",
""
],
[
"Cruz",
"Luís",
""
]
] |
2301.11087 | Javier Segovia Aguas | Javier Segovia-Aguas, Sergio Jim\'enez, Anders Jonsson | Generalized Planning as Heuristic Search: A new planning search-space
that leverages pointers over objects | Under review in the Artificial Intelligence Journal (AIJ) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning as heuristic search is one of the most successful approaches to
classical planning but unfortunately, it does not extend trivially to
Generalized Planning (GP). GP aims to compute algorithmic solutions that are
valid for a set of classical planning instances from a given domain, even if
these instances differ in the number of objects, the number of state variables,
their domain size, or their initial and goal configuration. The generalization
requirements of GP make it impractical to perform the state-space search that
is usually implemented by heuristic planners. This paper adapts the planning as
heuristic search paradigm to the generalization requirements of GP, and
presents the first native heuristic search approach to GP. First, the paper
introduces a new pointer-based solution space for GP that is independent of the
number of classical planning instances in a GP problem and the size of those
instances (i.e. the number of objects, state variables and their domain sizes).
Second, the paper defines a set of evaluation and heuristic functions for
guiding a combinatorial search in our new GP solution space. The computation of
these evaluation and heuristic functions does not require grounding states or
actions in advance. Therefore our GP as heuristic search approach can handle
large sets of state variables with large numerical domains, e.g.~integers.
Lastly, the paper defines an upgraded version of our novel algorithm for GP
called Best-First Generalized Planning (BFGP), that implements a best-first
search in our pointer-based solution space, and that is guided by our
evaluation/heuristic functions for GP.
| [
{
"version": "v1",
"created": "Thu, 26 Jan 2023 13:25:39 GMT"
}
] | 1,674,777,600,000 | [
[
"Segovia-Aguas",
"Javier",
""
],
[
"Jiménez",
"Sergio",
""
],
[
"Jonsson",
"Anders",
""
]
] |
2301.11891 | Stephen Goss | Stephen A. Goss, Robert J. Steininger, Dhruv Narayanan, Daniel V.
Oliven\c{c}a, Yutong Sun, Peng Qiu, Jim Amato, Eberhard O. Voit, Walter E.
Voit, Eric J. Kildebeck | Polycraft World AI Lab (PAL): An Extensible Platform for Evaluating
Artificial Intelligence Agents | 27 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As artificial intelligence research advances, the platforms used to evaluate
AI agents need to adapt and grow to continue to challenge them. We present the
Polycraft World AI Lab (PAL), a task simulator with an API based on the
Minecraft mod Polycraft World. Our platform is built to allow AI agents with
different architectures to easily interact with the Minecraft world, train and
be evaluated in multiple tasks. PAL enables the creation of tasks in a flexible
manner, as well as the capability to manipulate any aspect of the task
during an evaluation. All actions taken by AI agents and external actors
(non-player-characters, NPCs) in the open-world environment are logged to
streamline evaluation. Here we present two custom tasks on the PAL platform,
one focused on multi-step planning and one focused on navigation, and
evaluations of agents solving them. In summary, we report a versatile and
extensible AI evaluation platform with a low barrier to entry for AI
researchers to utilize.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2023 18:08:04 GMT"
}
] | 1,675,036,800,000 | [
[
"Goss",
"Stephen A.",
""
],
[
"Steininger",
"Robert J.",
""
],
[
"Narayanan",
"Dhruv",
""
],
[
"Olivença",
"Daniel V.",
""
],
[
"Sun",
"Yutong",
""
],
[
"Qiu",
"Peng",
""
],
[
"Amato",
"Jim",
""
],
[
"Voit",
"Eberhard O.",
""
],
[
"Voit",
"Walter E.",
""
],
[
"Kildebeck",
"Eric J.",
""
]
] |
2301.11970 | Mark Keane | Saugat Aryal and Mark T Keane | Even if Explanations: Prior Work, Desiderata & Benchmarks for
Semi-Factual XAI | 14 pages, 4 Figures | 32nd International Joint Conference on Artificial Intelligence
(IJCAI-23), China, Macao, 2023 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, eXplainable AI (XAI) research has focused on counterfactual
explanations as post-hoc justifications for AI-system decisions (e.g. a
customer refused a loan might be told: If you had asked for a loan with a shorter
term, it would have been approved). Counterfactuals explain what changes to the
input-features of an AI system change the output-decision. However, there is a
sub-type of counterfactual, semi-factuals, that have received less attention in
AI (though the Cognitive Sciences have studied them extensively). This paper
surveys these literatures to summarise historical and recent breakthroughs in
this area. It defines key desiderata for semi-factual XAI and reports benchmark
tests of historical algorithms (along with a novel, naive method) to provide a
solid basis for future algorithmic developments.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2023 19:58:12 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 18:06:32 GMT"
}
] | 1,683,676,800,000 | [
[
"Aryal",
"Saugat",
""
],
[
"Keane",
"Mark T",
""
]
] |
2301.12031 | Zhengliang Liu | Zhengliang Liu, Xinyu He, Lei Liu, Tianming Liu, Xiaoming Zhai | Context Matters: A Strategy to Pre-train Language Model for Science
Education | null | Artificial Intelligence in Education. AIED 2023. Communications in
Computer and Information Science, vol 1831. Springer | 10.1007/978-3-031-36336-8_103 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study aims at improving the performance of scoring student responses in
science education automatically. BERT-based language models have shown
significant superiority over traditional NLP models in various language-related
tasks. However, science writing of students, including argumentation and
explanation, is domain-specific. In addition, the language used by students is
different from the language in journals and Wikipedia, which are training
sources of BERT and its existing variants. All these suggest that a
domain-specific model pre-trained using science education data may improve
model performance. However, the ideal type of data to contextualize a pre-trained
language model and improve the performance in automatically scoring student
written responses remains unclear. Therefore, we employ different data in this
study to contextualize both BERT and SciBERT models and compare their
performance on automatic scoring of assessment tasks for scientific
argumentation. We use three datasets to pre-train the model: 1) journal
articles in science education, 2) a large dataset of students' written
responses (sample size over 50,000), and 3) a small dataset of students'
written responses of scientific argumentation tasks. Our experimental results
show that in-domain training corpora constructed from science questions and
responses improve language model performance on a wide variety of downstream
tasks. Our study confirms the effectiveness of continual pre-training on
domain-specific data in the education domain and demonstrates a generalizable
strategy for automating science education tasks with high accuracy. We plan to
release our data and SciEdBERT models for public use and community engagement.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2023 23:50:16 GMT"
}
] | 1,700,524,800,000 | [
[
"Liu",
"Zhengliang",
""
],
[
"He",
"Xinyu",
""
],
[
"Liu",
"Lei",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhai",
"Xiaoming",
""
]
] |
2301.12063 | Chengyu Sun | Chengyu Sun | HAT-GAE: Self-Supervised Graph Auto-encoders with Hierarchical Adaptive
Masking and Trainable Corruption | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Self-supervised auto-encoders have emerged as a successful framework for
representation learning in computer vision and natural language processing in
recent years. However, their application to graph data has been met with
limited performance due to the non-Euclidean and complex structure of graphs in
comparison to images or text, as well as the limitations of conventional
auto-encoder architectures. In this paper, we investigate factors impacting the
performance of auto-encoders on graph data and propose a novel auto-encoder
model for graph representation learning. Our model incorporates a hierarchical
adaptive masking mechanism to incrementally increase the difficulty of training
in order to mimic the process of human cognitive learning, and a trainable
corruption scheme to enhance the robustness of learned representations. Through
extensive experimentation on ten benchmark datasets, we demonstrate the
superiority of our proposed method over state-of-the-art graph representation
learning models.
| [
{
"version": "v1",
"created": "Sat, 28 Jan 2023 02:43:54 GMT"
}
] | 1,675,123,200,000 | [
[
"Sun",
"Chengyu",
""
]
] |
2301.12158 | Debayan Banerjee | Debayan Banerjee, Mathis Poser, Christina Wiethof, Varun Shankar
Subramanian, Richard Paucar, Eva A. C. Bittner, Chris Biemann | A System for Human-AI collaboration for Online Customer Support | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI enabled chat bots have recently been put to use to answer customer service
queries; however, users commonly report that bots lack a personal
touch and are often unable to understand the real intent of the user's
question. To this end, it is desirable to have human involvement in the
customer servicing process. In this work, we present a system where a human
support agent collaborates in real-time with an AI agent to satisfactorily
answer customer queries. We describe the user interaction elements of the
solution, along with the machine learning techniques involved in the AI agent.
| [
{
"version": "v1",
"created": "Sat, 28 Jan 2023 11:07:23 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 09:31:26 GMT"
}
] | 1,675,814,400,000 | [
[
"Banerjee",
"Debayan",
""
],
[
"Poser",
"Mathis",
""
],
[
"Wiethof",
"Christina",
""
],
[
"Subramanian",
"Varun Shankar",
""
],
[
"Paucar",
"Richard",
""
],
[
"Bittner",
"Eva A. C.",
""
],
[
"Biemann",
"Chris",
""
]
] |
2301.12178 | Yuzhen Qin | Yuzhen Qin, Li Sun, Hui Chen, Wei-qiang Zhang, Wenming Yang, Jintao
Fei, Guijin Wang | MVKT-ECG: Efficient Single-lead ECG Classification on Multi-Label
Arrhythmia by Multi-View Knowledge Transferring | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread emergence of smart devices for ECG has sparked demand for
intelligent single-lead ECG-based diagnostic systems. However, it is
challenging to develop a single-lead-based ECG interpretation model for
multiple-disease diagnosis due to the lack of some key disease information. In
this work, we propose inter-lead Multi-View Knowledge Transferring of ECG
(MVKT-ECG) to boost single-lead ECG's ability for multi-label disease
diagnosis. This training strategy can transfer superior disease knowledge from
multiple different views of ECG (e.g. 12-lead ECG) to single-lead-based ECG
interpretation model to mine details in single-lead ECG signals that are easily
overlooked by neural networks. MVKT-ECG allows this lead variety as a
supervision signal within a teacher-student paradigm, where a teacher that
observes multi-lead ECG educates a student that observes only single-lead ECG.
Since the mutual disease information between the single-lead ECG and multi-lead
ECG plays a key role in knowledge transferring, we present a new disease-aware
Contrastive Lead-information Transferring (CLT) to improve the mutual disease
information between the single-lead ECG and multi-lead ECG. Moreover, we modify
traditional Knowledge Distillation to multi-label disease Knowledge
Distillation (MKD) to make it applicable for multi-label disease diagnosis. The
comprehensive experiments verify that MVKT-ECG has an excellent performance in
improving the diagnostic effect of single-lead ECG.
| [
{
"version": "v1",
"created": "Sat, 28 Jan 2023 12:28:39 GMT"
}
] | 1,675,123,200,000 | [
[
"Qin",
"Yuzhen",
""
],
[
"Sun",
"Li",
""
],
[
"Chen",
"Hui",
""
],
[
"Zhang",
"Wei-qiang",
""
],
[
"Yang",
"Wenming",
""
],
[
"Fei",
"Jintao",
""
],
[
"Wang",
"Guijin",
""
]
] |
2301.12225 | Liming Wang | Liming Wang, Hong Xie, Ye Li, Jian Tan and John C.S. Lui | Interactive Log Parsing via Light-weight User Feedback | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Template mining is one of the foundational tasks to support log analysis,
which supports the diagnosis and troubleshooting of large scale Web
applications. This paper develops a human-in-the-loop template mining framework
to support interactive log analysis, which is highly desirable in real-world
diagnosis or troubleshooting of Web applications but which previous template
mining algorithms fail to support. We formulate three types of light-weight
user feedback, and based on them we design three atomic human-in-the-loop
template mining algorithms. We derive mild conditions under which the outputs
of our proposed algorithms are provably correct. We also derive upper bounds on
the computational complexity and query complexity of each algorithm. We
demonstrate the versatility of our proposed algorithms by combining them to
improve the template mining accuracy of five representative algorithms over
sixteen widely used benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 28 Jan 2023 15:19:43 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 17:14:48 GMT"
}
] | 1,677,542,400,000 | [
[
"Wang",
"Liming",
""
],
[
"Xie",
"Hong",
""
],
[
"Li",
"Ye",
""
],
[
"Tan",
"Jian",
""
],
[
"Lui",
"John C. S.",
""
]
] |
2301.12289 | Zhaoyang Chen | Zhaoyang Chen, Lina Siltala-Li, Mikko Lassila, Pekka Malo, Eeva
Vilkkumaa, Tarja Saaresranta, Arho Veli Virkki | Predicting Visit Cost of Obstructive Sleep Apnea using Electronic
Healthcare Records with Transformer | 12 pages, 7 figures, 2 tables, to be submitted to IEEE Journal of
Translational Engineering in Health and Medicine | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Obstructive sleep apnea (OSA) is growing increasingly prevalent
in many countries as obesity rises. Sufficient, effective treatment of OSA
entails high social and financial costs for healthcare. Objective: For
treatment purposes, predicting OSA patients' visit expenses for the coming year
is crucial. Reliable estimates enable healthcare decision-makers to perform
careful fiscal management and budget well for effective distribution of
resources to hospitals. The challenges created by scarcity of high-quality
patient data are exacerbated by the fact that just a third of those data from
OSA patients can be used to train analytics models: only OSA patients with more
than 365 days of follow-up are relevant for predicting a year's expenditures.
Methods and procedures: The authors propose a method applying two Transformer
models, one for augmenting the input via data from shorter visit histories and
the other predicting the costs by considering both the material thus enriched
and cases with more than a year's follow-up. Results: The two-model solution
permits putting the limited body of OSA patient data to productive use.
Relative to a single-Transformer solution using only a third of the
high-quality patient data, the solution with two models improved the prediction
performance's $R^{2}$ from 88.8% to 97.5%. Even using baseline models with the
model-augmented data improved the $R^{2}$ considerably, from 61.6% to 81.9%.
Conclusion: The proposed method makes the most of the available high-quality
data by carefully exploiting details that are not directly relevant for
answering the question of the next year's likely expenditure.
| [
{
"version": "v1",
"created": "Sat, 28 Jan 2023 20:08:00 GMT"
}
] | 1,675,123,200,000 | [
[
"Chen",
"Zhaoyang",
""
],
[
"Siltala-Li",
"Lina",
""
],
[
"Lassila",
"Mikko",
""
],
[
"Malo",
"Pekka",
""
],
[
"Vilkkumaa",
"Eeva",
""
],
[
"Saaresranta",
"Tarja",
""
],
[
"Virkki",
"Arho Veli",
""
]
] |
2301.12382 | Maolin Yang | Maolin Yang, Pingyu Jiang, Tianshuo Zang, Yuhao Liu | Data-driven intelligent computational design for products: Method,
techniques, and applications | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven intelligent computational design (DICD) is a research hotspot
that has emerged in the context of fast-developing artificial intelligence. It
emphasizes utilizing deep learning algorithms to extract and represent the
design features hidden in historical or fabricated design process data, and
then learn the combination and mapping patterns of these design features for
the purposes of design solution retrieval, generation, optimization,
evaluation, etc. Due to its capability of automatically and efficiently
generating design solutions and thus supporting human-in-the-loop intelligent
and innovative design activities, DICD has drawn attention from both
academic and industrial fields. However, as an emerging research subject, there
are still many unexplored issues that limit the development and application of
DICD, such as specific dataset building, engineering design related feature
engineering, systematic methods and techniques for DICD implementation in the
entire product design process, etc. In this regard, a systematic and operable
road map for DICD implementation from full-process perspective is established,
including a general workflow for DICD project planning, an overall framework
for DICD project implementation, the computing mechanisms for DICD
implementation, key enabling technologies for detailed DICD implementation, and
three application scenarios of DICD. The road map reveals the common mechanisms
and calculation principles of existing DICD research, and thus it can provide
systematic guidance for possible DICD applications that have not yet been
explored.
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2023 07:17:46 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 07:19:31 GMT"
}
] | 1,681,257,600,000 | [
[
"Yang",
"Maolin",
""
],
[
"Jiang",
"Pingyu",
""
],
[
"Zang",
"Tianshuo",
""
],
[
"Liu",
"Yuhao",
""
]
] |
2301.12400 | Bolin Zhang | Bolin Zhang and Yunzhe Xu and Zhiying Tu and Dianhui Chu | HeroNet: A Hybrid Retrieval-Generation Network for Conversational Bots | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using natural language, conversational bots offer unprecedented ways to
address many challenges in areas such as information searching, item
recommendation, and question answering. Existing bots are usually developed
through retrieval-based or generative-based approaches, yet each has its own
advantages and disadvantages. To combine these two approaches, we propose a
hybrid retrieval-generation network (HeroNet) built on three ideas: 1). To
produce high-quality sentence representations, HeroNet performs multi-task
learning on two subtasks: Similar Queries Discovery and Query-Response
Matching. Specifically, the retrieval performance is improved while the model
size is reduced by training two lightweight, task-specific adapter modules that
share only one underlying T5-Encoder model. 2). By introducing adversarial
training, HeroNet is able to solve both retrieval\&generation tasks
simultaneously while maximizing performance of each other. 3). The retrieval
results are used as prior knowledge to improve the generation performance while
the generative result are scored by the discriminator and their scores are
integrated into the generator's cross-entropy loss function. The experimental
results on a open dataset demonstrate the effectiveness of the HeroNet and our
code is available at https://github.com/TempHero/HeroNet.git
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2023 09:36:44 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 06:36:45 GMT"
}
] | 1,675,900,800,000 | [
[
"Zhang",
"Bolin",
""
],
[
"Xu",
"Yunzhe",
""
],
[
"Tu",
"Zhiying",
""
],
[
"Chu",
"Dianhui",
""
]
] |
2301.12500 | Sanda-Maria Avram Dr. | Sanda-Maria Avram | BERT-based Authorship Attribution on the Romanian Dataset called ROST | arXiv admin note: text overlap with arXiv:2211.05180 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Having been around for decades, the problem of Authorship Attribution is still very
much in focus. Some of the more recent instruments used are the
pre-trained language models, the most prevalent being BERT. Here we used such a
model to detect the authorship of texts written in the Romanian language. The
dataset used is highly unbalanced, i.e., significant differences in the number
of texts per author, the sources from which the texts were collected, the time
period in which the authors lived and wrote these texts, the medium intended to
be read (i.e., paper or online), and the type of writing (i.e., stories, short
stories, fairy tales, novels, literary articles, and sketches). The results are
better than expected, sometimes exceeding 87\% macro-accuracy.
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2023 17:37:29 GMT"
}
] | 1,675,123,200,000 | [
[
"Avram",
"Sanda-Maria",
""
]
] |
2301.12507 | Theodore Sumers | Theodore Sumers, Kenneth Marino, Arun Ahuja, Rob Fergus, Ishita
Dasgupta | Distilling Internet-Scale Vision-Language Models into Embodied Agents | 9 pages, 7 figures. Presented at ICML 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Instruction-following agents must ground language into their observation and
action spaces. Learning to ground language is challenging, typically requiring
domain-specific engineering or large quantities of human interaction data. To
address this challenge, we propose using pretrained vision-language models
(VLMs) to supervise embodied agents. We combine ideas from model distillation
and hindsight experience replay (HER), using a VLM to retroactively generate
language describing the agent's behavior. Simple prompting allows us to control
the supervision signal, teaching an agent to interact with novel objects based
on their names (e.g., planes) or their features (e.g., colors) in a 3D rendered
environment. Few-shot prompting lets us teach abstract category membership,
including pre-existing categories (food vs toys) and ad-hoc ones (arbitrary
preferences over objects). Our work outlines a new and effective way to use
internet-scale VLMs, repurposing the generic language grounding acquired by
such models to teach task-relevant groundings to embodied agents.
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2023 18:21:05 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 14:04:50 GMT"
}
] | 1,686,873,600,000 | [
[
"Sumers",
"Theodore",
""
],
[
"Marino",
"Kenneth",
""
],
[
"Ahuja",
"Arun",
""
],
[
"Fergus",
"Rob",
""
],
[
"Dasgupta",
"Ishita",
""
]
] |
2301.12510 | Bushra Amjad | Bushra Amjad, Muhammad Zeeshan and Mirza Omer Beg | EMP-EVAL: A Framework for Measuring Empathy in Open Domain Dialogues | 7 pages, 5 figures, 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Measuring empathy in conversation can be challenging, as empathy is a complex
and multifaceted psychological construct that involves both cognitive and
emotional components. Human evaluations can be subjective, leading to
inconsistent results. Therefore, there is a need for an automatic method for
measuring empathy that reduces the need for human evaluations. In this paper,
we propose EMP-EVAL, a simple yet effective automatic empathy evaluation
method. The proposed technique takes into account the influence of emotion as
well as cognitive and emotional empathy. To the best of our knowledge, our
work is the first to systematically measure empathy without human-annotated
scores.
Experimental results demonstrate that our metrics can correlate with human
preference, achieving comparable results with human judgments.
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2023 18:42:19 GMT"
}
] | 1,675,123,200,000 | [
[
"Amjad",
"Bushra",
""
],
[
"Zeeshan",
"Muhammad",
""
],
[
"Beg",
"Mirza Omer",
""
]
] |
2301.12569 | Zahra Zahedi | Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati | A Mental Model Based Theory of Trust | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Handling trust is one of the core requirements for facilitating effective
interaction between the human and the AI agent. Thus, any decision-making
framework designed to work with humans must possess the ability to estimate and
leverage human trust. In this paper, we propose a mental model based theory of
trust that not only can be used to infer trust, thus providing an alternative
to psychological or behavioral trust inference methods, but also can be used as
a foundation for any trust-aware decision-making frameworks. First, we
introduce what trust means according to our theory and then use the theory to
define trust evolution, human reliance and decision making, and a formalization
of the appropriate level of trust in the agent. Using human subject studies, we
compare our theory against one of the most common trust scales (Muir scale) to
evaluate 1) whether the observations from the human studies match our proposed
theory and 2) what aspects of trust are more aligned with our proposed theory.
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2023 22:36:37 GMT"
}
] | 1,675,123,200,000 | [
[
"Zahedi",
"Zahra",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2301.12820 | Denis Steckelmacher | H\'el\`ene Plisnier, Denis Steckelmacher, Jeroen Willems, Bruno
Depraetere, Ann Now\'e | Transferring Multiple Policies to Hotstart Reinforcement Learning in an
Air Compressor Management Problem | Preliminary version, experimental details still to be made more
precise | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Many instances of similar or almost-identical industrial machines or tools
are often deployed at once, or in quick succession. For instance, a particular
model of air compressor may be installed at hundreds of customers. Because
these tools perform distinct but highly similar tasks, it is interesting to be
able to quickly produce a high-quality controller for machine $N+1$ given the
controllers already produced for machines $1..N$. This is even more important
when the controllers are learned through Reinforcement Learning, as training
takes time, energy and other resources. In this paper, we apply Policy
Intersection, a Policy Shaping method, to help a Reinforcement Learning agent
learn to solve a new variant of a compressor control problem faster, by
transferring knowledge from several previously learned controllers. We show
that our approach outperforms loading an old controller, and significantly
improves performance in the long run.
| [
{
"version": "v1",
"created": "Mon, 30 Jan 2023 12:18:36 GMT"
}
] | 1,675,123,200,000 | [
[
"Plisnier",
"Hélène",
""
],
[
"Steckelmacher",
"Denis",
""
],
[
"Willems",
"Jeroen",
""
],
[
"Depraetere",
"Bruno",
""
],
[
"Nowé",
"Ann",
""
]
] |