id (string, 9-10 chars) | submitter (string, 5-47 chars, nullable) | authors (string, 5-1.72k chars) | title (string, 11-234 chars) | comments (string, 1-491 chars, nullable) | journal-ref (string, 4-396 chars, nullable) | doi (string, 13-97 chars, nullable) | report-no (string, 4-138 chars, nullable) | categories (string, 1 class) | license (string, 9 classes) | abstract (string, 29-3.66k chars) | versions (list, 1-21 items) | update_date (int64, 1,180B-1,718B) | authors_parsed (sequence, 1-98 items)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.03517 | Tsz Ho Chan | Tsz Ho Chan, Wenyi Xiao, Junhua Huang, Huiling Zhen, Guangji Tian and
Mingxuan Yuan | IB-Net: Initial Branch Network for Variable Decision in Boolean
Satisfiability | 7 pages, 12 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boolean Satisfiability problems are vital components in Electronic Design
Automation, particularly within the Logic Equivalence Checking process.
Currently, SAT solvers are employed for these problems, and neural networks have
been tried as assistance to the solvers. However, as SAT problems in the LEC
context are distinctive due to their predominantly unsatisfiable nature and a
substantial proportion of UNSAT-core variables, existing neural network
assistance has proven unsuccessful in this specialized domain. To tackle this
challenge, we propose IB-Net, an innovative framework utilizing graph neural
networks and novel graph encoding techniques to model unsatisfiable problems
and interact with state-of-the-art solvers. Extensive evaluations across
solvers and datasets demonstrate IB-Net's acceleration, achieving an average
runtime speedup of 5.0% on industrial data and 8.3% on SAT competition data
empirically. This breakthrough advances efficient solving in LEC workflows.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 07:54:40 GMT"
}
] | 1,709,769,600,000 | [
[
"Chan",
"Tsz Ho",
""
],
[
"Xiao",
"Wenyi",
""
],
[
"Huang",
"Junhua",
""
],
[
"Zhen",
"Huiling",
""
],
[
"Tian",
"Guangji",
""
],
[
"Yuan",
"Mingxuan",
""
]
] |
2403.03594 | Yoshia Abe | Yoshia Abe, Tatsuya Daikoku, Yasuo Kuniyoshi | Assessing the Aesthetic Evaluation Capabilities of GPT-4 with Vision:
Insights from Group and Individual Assessments | 8 pages, 6 figures, submitted to The 38th Annual Conference of the
Japanese Society for Artificial Intelligence, 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, it has been recognized that large language models demonstrate high
performance on various intellectual tasks. However, few studies have
investigated alignment with humans in behaviors that involve sensibility, such
as aesthetic evaluation. This study investigates the performance of GPT-4 with
Vision, a state-of-the-art language model that can handle image input, on the
task of aesthetic evaluation of images. We employ two tasks: predicting the
average evaluation values of a group and the evaluation values of an individual. We
investigate the performance of GPT-4 with Vision by exploring prompts and
analyzing prediction behaviors. Experimental results reveal GPT-4 with Vision's
superior performance in predicting aesthetic evaluations and the nature of
different responses to beauty and ugliness. Finally, we discuss developing an
AI system for aesthetic evaluation based on scientific knowledge of the human
perception of beauty, employing agent technologies that integrate traditional
deep learning models with large language models.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 10:27:09 GMT"
}
] | 1,709,769,600,000 | [
[
"Abe",
"Yoshia",
""
],
[
"Daikoku",
"Tatsuya",
""
],
[
"Kuniyoshi",
"Yasuo",
""
]
] |
2403.03600 | Li Wang | Li Wang, Lei Sang, Quangui Zhang, Qiang Wu, Min Xu | A Privacy-Preserving Framework with Multi-Modal Data for Cross-Domain
Recommendation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cross-domain recommendation (CDR) aims to enhance recommendation accuracy in
a target domain with sparse data by leveraging rich information in a source
domain, thereby addressing the data-sparsity problem. Some existing CDR methods
highlight the advantages of extracting domain-common and domain-specific
features to learn comprehensive user and item representations. However, these
methods can't effectively disentangle these components as they often rely on
simple user-item historical interaction information (such as ratings, clicks,
and browsing), neglecting the rich multi-modal features. Additionally, they
don't protect user-sensitive data from potential leakage during knowledge
transfer between domains. To address these challenges, we propose a
Privacy-Preserving Framework with Multi-Modal Data for Cross-Domain
Recommendation, called P2M2-CDR. Specifically, we first design a multi-modal
disentangled encoder that utilizes multi-modal information to disentangle more
informative domain-common and domain-specific embeddings. Furthermore, we
introduce a privacy-preserving decoder to mitigate user privacy leakage during
knowledge transfer. Local differential privacy (LDP) is utilized to obfuscate
the disentangled embeddings before inter-domain exchange, thereby enhancing
privacy protection. To ensure both consistency and differentiation among these
obfuscated disentangled embeddings, we incorporate contrastive learning-based
domain-inter and domain-intra losses. Extensive experiments conducted on four
real-world datasets demonstrate that P2M2-CDR outperforms other
state-of-the-art single-domain and cross-domain baselines.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 10:40:08 GMT"
}
] | 1,709,769,600,000 | [
[
"Wang",
"Li",
""
],
[
"Sang",
"Lei",
""
],
[
"Zhang",
"Quangui",
""
],
[
"Wu",
"Qiang",
""
],
[
"Xu",
"Min",
""
]
] |
2403.03607 | Johannes Hirth | Johannes Hirth, Tom Hanika | The Geometric Structure of Topic Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic models are a popular tool for clustering and analyzing textual data.
They allow texts to be classified on the basis of their affiliation to the
previously calculated topics. Despite their widespread use in research and
application, an in-depth analysis of topic models is still an open research
topic. State-of-the-art methods for interpreting topic models are based on
simple visualizations, such as similarity matrices, top-term lists or
embeddings, which are limited to a maximum of three dimensions. In this paper,
we propose an incidence-geometric method for deriving an ordinal structure from
flat topic models, such as non-negative matrix factorization. These enable the
analysis of the topic model in a higher (order) dimension and the possibility
of extracting conceptual relationships between several topics at once. Due to
the use of conceptual scaling, our approach does not introduce any artificial
topical relationships, such as artifacts of feature compression. Based on our
findings, we present a new visualization paradigm for concept hierarchies based
on ordinal motifs. These allow for a top-down view on topic spaces. We
introduce and demonstrate the applicability of our approach based on a topic
model derived from a corpus of scientific papers taken from 32 top machine
learning venues.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 10:53:51 GMT"
}
] | 1,709,769,600,000 | [
[
"Hirth",
"Johannes",
""
],
[
"Hanika",
"Tom",
""
]
] |
2403.03645 | Yucheng Wang | Yucheng Wang, Ruibing Jin, Min Wu, Xiaoli Li, Lihua Xie, Zhenghua Chen | K-Link: Knowledge-Link Graph from LLMs for Enhanced Representation
Learning in Multivariate Time-Series Data | 12 pages,7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Sourced from various sensors and organized chronologically, Multivariate
Time-Series (MTS) data involves crucial spatial-temporal dependencies, e.g.,
correlations among sensors. To capture these dependencies, Graph Neural
Networks (GNNs) have emerged as powerful tools, yet their effectiveness is
restricted by the quality of graph construction from MTS data. Typically,
existing approaches construct graphs solely from MTS signals, which may
introduce bias due to a small training dataset and may not accurately represent
underlying dependencies. To address this challenge, we propose a novel
framework named K-Link, leveraging Large Language Models (LLMs) to encode
extensive general knowledge and thereby providing effective solutions to reduce
the bias. Leveraging the knowledge embedded in LLMs, such as physical
principles, we extract a \textit{Knowledge-Link graph}, capturing vast semantic
knowledge of sensors and the linkage of the sensor-level knowledge. To harness
the potential of the knowledge-link graph in enhancing the graph derived from
MTS data, we propose a graph alignment module, facilitating the transfer of
semantic knowledge within the knowledge-link graph into the MTS-derived graph.
By doing so, we can improve the graph quality, ensuring effective
representation learning with GNNs for MTS data. Extensive experiments
demonstrate the efficacy of our approach for superior performance across
various MTS-related downstream tasks.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 12:08:14 GMT"
}
] | 1,709,769,600,000 | [
[
"Wang",
"Yucheng",
""
],
[
"Jin",
"Ruibing",
""
],
[
"Wu",
"Min",
""
],
[
"Li",
"Xiaoli",
""
],
[
"Xie",
"Lihua",
""
],
[
"Chen",
"Zhenghua",
""
]
] |
2403.03744 | Tessa Han | Tessa Han, Aounon Kumar, Chirag Agarwal, Himabindu Lakkaraju | Towards Safe Large Language Models for Medicine | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As large language models (LLMs) develop ever-improving capabilities and are
applied in real-world settings, it is important to understand their safety.
While initial steps have been taken to evaluate the safety of general-knowledge
LLMs, exposing some weaknesses, the safety of medical LLMs has not been
sufficiently evaluated despite their high risks to personal health and safety,
public health and safety, patient rights, and human rights. To address this
gap, we conduct, to our knowledge, the first study of its kind to evaluate and
improve the safety of medical LLMs. We find that 1) current medical LLMs do not
meet standards of general or medical safety, as they readily comply with
harmful requests and that 2) fine-tuning medical LLMs on safety demonstrations
significantly improves their safety, reducing their tendency to comply with
harmful requests. In addition, we present a definition of medical safety for
LLMs and develop a benchmark dataset to evaluate and train for medical safety
in LLMs. Poised at the intersection of research on machine learning safety and
medical machine learning, this work casts light on the status quo of the safety
of medical LLMs and motivates future work in this area, mitigating the risks of
harm of LLMs in medicine.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 14:34:07 GMT"
},
{
"version": "v2",
"created": "Wed, 1 May 2024 12:24:04 GMT"
},
{
"version": "v3",
"created": "Tue, 14 May 2024 00:30:54 GMT"
}
] | 1,715,731,200,000 | [
[
"Han",
"Tessa",
""
],
[
"Kumar",
"Aounon",
""
],
[
"Agarwal",
"Chirag",
""
],
[
"Lakkaraju",
"Himabindu",
""
]
] |
2403.03828 | Rushit Dave | Rushit Dave, Marcho Handoko, Ali Rashid, Cole Schoenbauer | From Clicks to Security: Investigating Continuous Authentication via
Mouse Dynamics | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the realm of computer security, the importance of efficient and reliable
user authentication methods has become increasingly critical. This paper
examines the potential of mouse movement dynamics as a consistent metric for
continuous authentication. By analyzing user mouse movement patterns in two
contrasting gaming scenarios, "Team Fortress" and "Poly Bridge", we investigate
the distinctive behavioral patterns inherent in high-intensity and
low-intensity UI interactions. The study extends beyond conventional
methodologies by employing a range of machine learning models. These models are
carefully selected to assess their effectiveness in capturing and interpreting
the subtleties of user behavior as reflected in their mouse movements. This
multifaceted approach allows for a more nuanced and comprehensive understanding
of user interaction patterns. Our findings reveal that mouse movement dynamics
can serve as a reliable indicator for continuous user authentication. The
diverse machine learning models employed in this study demonstrate competent
performance in user verification, marking an improvement over previous methods
used in this field. This research contributes to the ongoing efforts to enhance
computer security and highlights the potential of leveraging user behavior,
specifically mouse dynamics, in developing robust authentication systems.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 16:18:02 GMT"
}
] | 1,709,769,600,000 | [
[
"Dave",
"Rushit",
""
],
[
"Handoko",
"Marcho",
""
],
[
"Rashid",
"Ali",
""
],
[
"Schoenbauer",
"Cole",
""
]
] |
2403.03832 | Rushit Dave | Pedro Gomes do Nascimento, Pidge Witiak, Tucker MacCallum, Zachary
Winterfeldt, Rushit Dave | Your device may know you better than you know yourself -- continuous
authentication on novel dataset using machine learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This research aims to further understanding in the field of continuous
authentication using behavioral biometrics. We are contributing a novel dataset
that encompasses the gesture data of 15 users playing Minecraft with a Samsung
Tablet, each for a duration of 15 minutes. Utilizing this dataset, we employed
machine learning (ML) binary classifiers, being Random Forest (RF), K-Nearest
Neighbors (KNN), and Support Vector Classifier (SVC), to determine the
authenticity of specific user actions. Our most robust model was SVC, which
achieved an average accuracy of approximately 90%, demonstrating that touch
dynamics can effectively distinguish users. However, further studies are needed
to make it a viable option for authentication systems.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 16:22:49 GMT"
}
] | 1,709,769,600,000 | [
[
"Nascimento",
"Pedro Gomes do",
""
],
[
"Witiak",
"Pidge",
""
],
[
"MacCallum",
"Tucker",
""
],
[
"Winterfeldt",
"Zachary",
""
],
[
"Dave",
"Rushit",
""
]
] |
2403.03996 | Kai Yin | Zhewei Liu, Kai Yin, Ali Mostafavi | Rethinking Urban Flood Risk Assessment By Adapting Health Domain
Perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by ideas from health risk assessment, this paper presents a new
perspective for flood risk assessment. The proposed perspective focuses on
three pillars for examining flood risk: (1) inherent susceptibility, (2)
mitigation strategies, and (3) external stressors. These pillars collectively
encompass the physical and environmental characteristics of urban areas, the
effectiveness of human-intervention measures, and the influence of
uncontrollable external factors, offering a fresh point of view for decoding
flood risks. For each pillar, we delineate its individual contributions to
flood risk and illustrate their interactive and overall impact. The
three-pillars model embodies a shift in focus from the quest to precisely model
and quantify flood risk to evaluating pathways to high flood risk. The shift in
perspective is intended to alleviate the quest for quantifying and predicting
flood risk at fine resolutions as a panacea for enhanced flood risk management.
The decomposition of flood risk pathways into the three intertwined pillars
(i.e., inherent factors, mitigation factors, and external factors) enables
evaluation of how changes in factors within each pillar enhance or exacerbate
flood risk, creating a platform from which to inform plans, decisions, and
actions. Building on this foundation, we argue that a flood risk pathway
analysis approach, which examines the individual and collective impacts of
inherent factors, mitigation strategies, and external stressors, is essential
for a nuanced evaluation of flood risk. Accordingly, the proposed perspective
could complement the existing frameworks and approaches for flood risk
assessment.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 19:12:41 GMT"
}
] | 1,709,856,000,000 | [
[
"Liu",
"Zhewei",
""
],
[
"Yin",
"Kai",
""
],
[
"Mostafavi",
"Ali",
""
]
] |
2403.03997 | Yixuan Li | Yixuan Li, Julian Parsert, Elizabeth Polgreen | Guiding Enumerative Program Synthesis with Large Language Models | Accepted at CAV 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Pre-trained Large Language Models (LLMs) are beginning to dominate the
discourse around automatic code generation with natural language
specifications. In contrast, the best-performing synthesizers in the domain of
formal synthesis with precise logical specifications are still based on
enumerative algorithms. In this paper, we evaluate the abilities of LLMs to
solve formal synthesis benchmarks by carefully crafting a library of prompts
for the domain. When one-shot synthesis fails, we propose a novel enumerative
synthesis algorithm, which integrates calls to an LLM into a weighted
probabilistic search. This allows the synthesizer to provide the LLM with
information about the progress of the enumerator, and the LLM to provide the
enumerator with syntactic guidance in an iterative loop. We evaluate our
techniques on benchmarks from the Syntax-Guided Synthesis (SyGuS) competition.
We find that GPT-3.5 as a stand-alone tool for formal synthesis is easily
outperformed by state-of-the-art formal synthesis algorithms, but our approach
integrating the LLM into an enumerative synthesis algorithm shows significant
performance gains over both the LLM and the enumerative synthesizer alone and
the winning SyGuS competition tool.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 19:13:53 GMT"
},
{
"version": "v2",
"created": "Mon, 27 May 2024 12:18:40 GMT"
}
] | 1,716,854,400,000 | [
[
"Li",
"Yixuan",
""
],
[
"Parsert",
"Julian",
""
],
[
"Polgreen",
"Elizabeth",
""
]
] |
2403.04087 | Nik Bear Brown | Nik Bear Brown | The Cognitive Type Project -- Mapping Typography to Cognition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Cognitive Type Project is focused on developing computational tools to
enable the design of typefaces with varying cognitive properties. This
initiative aims to empower typographers to craft fonts that enhance
click-through rates for online ads, improve reading levels in children's books,
enable dyslexics to create personalized type, or provide insights into customer
reactions to textual content in media. A significant challenge in research
related to mapping typography to cognition is the creation of thousands of
typefaces with minor variations, a process that is both labor-intensive and
requires the expertise of skilled typographers. Cognitive science research
highlights that the design and form of letters, along with the text's overall
layout, are crucial in determining the ease of reading and other cognitive
properties of type such as perceived beauty and memorability. These factors
affect not only the legibility and clarity of information presentation but also
the likability of a typeface.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 22:32:49 GMT"
}
] | 1,709,856,000,000 | [
[
"Brown",
"Nik Bear",
""
]
] |
2403.04105 | Lekang Jiang | Lekang Jiang, Stephan Goetz | Artificial Intelligence Exploring the Patent Field | 53 pages, 14 figures, 5 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced language-processing and machine-learning techniques promise massive
efficiency improvements in the previously widely manual field of patent and
technical knowledge management. This field presents large-scale and complex
data with very precise contents and language representation of those contents.
Particularly, patent texts can differ from mundane texts in various aspects,
which entails significant opportunities and challenges. This paper presents a
systematic overview of patent-related tasks and popular methodologies with a
special focus on evolving and promising techniques. Language processing and
particularly large language models as well as the recent boost of general
generative methods promise to become game changers in the patent field. The
patent literature and the fact-based argumentative procedures around patents
appear almost as an ideal use case. However, patents entail a number of
difficulties with which existing models struggle. The paper introduces
fundamental aspects of patents and patent-related data that affect technology
that wants to explore or manage them. It further reviews existing methods and
approaches and points out how important reliable and unbiased evaluation
metrics become. Although research has made substantial progress on certain
tasks, the performance across many others remains suboptimal, sometimes because
of either the special nature of patents and their language or inconsistencies
between legal terms and the everyday meaning of terms. Moreover, few methods
have yet demonstrated the ability to produce satisfactory text for specific
sections of patents. By pointing out key developments, opportunities, and gaps,
we aim to encourage further research and accelerate the advancement of this
field.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 23:17:16 GMT"
}
] | 1,709,856,000,000 | [
[
"Jiang",
"Lekang",
""
],
[
"Goetz",
"Stephan",
""
]
] |
2403.04106 | Matthew Greenig | Elsa Lawrence, Adham El-Shazly, Srijit Seal, Chaitanya K Joshi, Pietro
Li\`o, Shantanu Singh, Andreas Bender, Pietro Sormanni, Matthew Greenig | Understanding Biology in the Age of Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modern life sciences research is increasingly relying on artificial
intelligence approaches to model biological systems, primarily centered around
the use of machine learning (ML) models. Although ML is undeniably useful for
identifying patterns in large, complex data sets, its widespread application in
biological sciences represents a significant deviation from traditional methods
of scientific inquiry. As such, the interplay between these models and
scientific understanding in biology is a topic with important implications for
the future of scientific research, yet it is a subject that has received little
attention. Here, we draw from an epistemological toolkit to contextualize
recent applications of ML in biological sciences under modern philosophical
theories of understanding, identifying general principles that can guide the
design and application of ML systems to model biological phenomena and advance
scientific knowledge. We propose that conceptions of scientific understanding
as information compression, qualitative intelligibility, and dependency
relation modelling provide a useful framework for interpreting ML-mediated
understanding of biological systems. Through a detailed analysis of two key
application areas of ML in modern biological research - protein structure
prediction and single cell RNA-sequencing - we explore how these features have
thus far enabled ML systems to advance scientific understanding of their target
phenomena, how they may guide the development of future ML models, and the key
obstacles that remain in preventing ML from achieving its potential as a tool
for biological discovery. Consideration of the epistemological features of ML
applications in biology will improve the prospects of these methods to solve
important problems and advance scientific understanding of living systems.
| [
{
"version": "v1",
"created": "Wed, 6 Mar 2024 23:20:34 GMT"
}
] | 1,709,856,000,000 | [
[
"Lawrence",
"Elsa",
""
],
[
"El-Shazly",
"Adham",
""
],
[
"Seal",
"Srijit",
""
],
[
"Joshi",
"Chaitanya K",
""
],
[
"Liò",
"Pietro",
""
],
[
"Singh",
"Shantanu",
""
],
[
"Bender",
"Andreas",
""
],
[
"Sormanni",
"Pietro",
""
],
[
"Greenig",
"Matthew",
""
]
] |
2403.04124 | Longchao Da | Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou,
Tianlong Chen, Hua Wei | Privacy-preserving Fine-tuning of Large Language Models through Flatness | Accepted to ICLR 2024 SeT LLM Workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The privacy concerns associated with the use of Large Language Models (LLMs)
have grown recently with the development of LLMs such as ChatGPT. Differential
Privacy (DP) techniques are explored in existing work to mitigate their privacy
risks at the cost of generalization degradation. Our paper reveals that the
flatness of DP-trained models' loss landscape plays an essential role in the
trade-off between their privacy and generalization. We further propose a
holistic framework to enforce appropriate weight flatness, which substantially
improves model generalization with competitive privacy preservation. It
innovates at three coarse-to-fine levels, including perturbation-aware
min-max optimization on model weights within a layer, flatness-guided sparse
prefix-tuning on weights across layers, and weight knowledge distillation
between DP \& non-DP weight copies. Comprehensive experiments in both
black-box and white-box scenarios are conducted to demonstrate the
effectiveness of our proposal in enhancing generalization and maintaining DP
characteristics. For instance, on text classification dataset QNLI, DP-Flat
achieves similar performance with non-private full fine-tuning but with DP
guarantee under privacy budget $\epsilon=3$, and even better performance given
higher privacy budgets. Codes are provided in the supplement.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 00:44:11 GMT"
}
] | 1,709,856,000,000 | [
[
"Chen",
"Tiejin",
""
],
[
"Da",
"Longchao",
""
],
[
"Zhou",
"Huixue",
""
],
[
"Li",
"Pingzhi",
""
],
[
"Zhou",
"Kaixiong",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Wei",
"Hua",
""
]
] |
2403.04135 | Yui Uehara | Yui Uehara | Unsupervised Learning of Harmonic Analysis Based on Neural HSMM with
Code Quality Templates | 20 pages, 5 figures, the original edition of this paper will be
published in the ICNMC2024 Proceedings and this arXiv publication is a copy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a method of unsupervised learning of harmonic analysis
based on a hidden semi-Markov model (HSMM). We introduce the chord quality
templates, which specify the probability of pitch class emissions given a root
note and a chord quality. Other probability distributions that comprise the
HSMM are automatically learned via unsupervised learning, which has been a
challenge in existing research. The results of the harmonic analysis of the
proposed model were evaluated using existing labeled data. While our proposed
method has yet to perform as well as existing models that used supervised
learning and complex rule design, it has the advantage of not requiring
expensive labeled data or rule elaboration. Furthermore, we also show how to
recognize the tonic without prior knowledge, based on the transition
probabilities of the Markov model.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 01:29:48 GMT"
}
] | 1,709,856,000,000 | [
[
"Uehara",
"Yui",
""
]
] |
2403.04140 | Biqing Qi | Biqing Qi, Junqi Gao, Xingquan Chen, Dong Li, Jianxing Liu, Ligang Wu
and Bowen Zhou | Contrastive Augmented Graph2Graph Memory Interaction for Few Shot
Continual Learning | 12 Pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-Shot Class-Incremental Learning (FSCIL) has gained considerable attention
in recent years for its pivotal role in addressing continuously arriving
classes. However, it encounters additional challenges. The scarcity of samples
in new sessions intensifies overfitting, causing incompatibility between the
output features of new and old classes, thereby escalating catastrophic
forgetting. A prevalent strategy involves mitigating catastrophic forgetting
through the Explicit Memory (EM), which comprises class prototypes. However,
current EM-based methods retrieve memory globally by performing
Vector-to-Vector (V2V) interaction between features corresponding to the input
and prototypes stored in EM, neglecting the geometric structure of local
features. This hinders the accurate modeling of their positional relationships.
To incorporate information of local geometric structure, we extend the V2V
interaction to Graph-to-Graph (G2G) interaction. For enhancing local structures
for better G2G alignment and the prevention of local feature collapse, we
propose the Local Graph Preservation (LGP) mechanism. Additionally, to address
sample scarcity in classes from new sessions, the Contrast-Augmented G2G
(CAG2G) is introduced to promote the aggregation of same-class features, thus
helping few-shot learning. Extensive comparisons on CIFAR100, CUB200, and the
challenging ImageNet-R dataset demonstrate the superiority of our method over
existing methods.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 01:41:12 GMT"
}
] | 1,709,856,000,000 | [
[
"Qi",
"Biqing",
""
],
[
"Gao",
"Junqi",
""
],
[
"Chen",
"Xingquan",
""
],
[
"Li",
"Dong",
""
],
[
"Liu",
"Jianxing",
""
],
[
"Wu",
"Ligang",
""
],
[
"Zhou",
"Bowen",
""
]
] |
2403.04264 | Hoang Giang Pham | Hoang Giang Pham, Tien Thanh Dam, Ngan Ha Duong, Tien Mai and Minh
Hoang Ha | Competitive Facility Location under Random Utilities and Routing
Constraints | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study a facility location problem within a competitive
market context, where customer demand is predicted by a random utility choice
model. Unlike prior research, which primarily focuses on simple constraints
such as a cardinality constraint on the number of selected locations, we
introduce routing constraints that necessitate the selection of locations in a
manner that guarantees the existence of a tour visiting all chosen locations
while adhering to a specified tour length upper bound. Such routing constraints
find crucial applications in various real-world scenarios. The problem at hand
features a non-linear objective function, resulting from the utilization of
random utilities, together with complex routing constraints, making it
computationally challenging. To tackle this problem, we explore three types of
valid cuts, namely, outer-approximation and submodular cuts to handle the
nonlinear objective function, as well as sub-tour elimination cuts to address
the complex routing constraints. These lead to the development of two exact
solution methods: a nested cutting plane and nested branch-and-cut algorithms,
where these valid cuts are iteratively added to a master problem through two
nested loops. We also prove that our nested cutting plane method always
converges to optimality after a finite number of iterations. Furthermore, we
develop a local search-based metaheuristic tailored for solving large-scale
instances and show its pros and cons compared to exact methods. Extensive
experiments are conducted on problem instances of varying sizes, demonstrating
that our approach excels in terms of solution quality and computation time when
compared to other baseline approaches.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 06:56:24 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Mar 2024 20:17:25 GMT"
}
] | 1,710,201,600,000 | [
[
"Pham",
"Hoang Giang",
""
],
[
"Dam",
"Tien Thanh",
""
],
[
"Duong",
"Ngan Ha",
""
],
[
"Mai",
"Tien",
""
],
[
"Ha",
"Minh Hoang",
""
]
] |
2403.04292 | Knud Thomsen | Knud Thomsen | A challenge in A(G)I, cybernetics revived in the Ouroboros Model as one
algorithm for all thinking | 26 pages, 11 figures | Artificial Intelligence and Autonomous Systems Volume 1 Issue 1,
2024 | 10.55092/aias20240001 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A topical challenge for algorithms in general and for automatic image
categorization and generation in particular is presented in the form of a
drawing for AI to understand. In a second vein, AI is challenged to produce
something similar from verbal description. The aim of the paper is to highlight
strengths and deficiencies of current Artificial Intelligence approaches while
coarsely sketching a way forward. A general lack of encompassing
symbol-embedding and (not only) -grounding in some bodily basis is made
responsible for current deficiencies. A concomitant dearth of hierarchical
organization of concepts follows suit. As a remedy for these shortcomings, it
is proposed to take a wide step back and to newly incorporate aspects of
cybernetics and analog control processes. It is claimed that a promising
overarching perspective is provided by the Ouroboros Model with a valid and
versatile algorithmic backbone for general cognition at all accessible levels
of abstraction and capabilities. Reality, rules, truth, and Free Will are all
useful abstractions according to the Ouroboros Model. Logic deduction as well
as intuitive guesses are claimed as produced on the basis of one
compartmentalized memory for schemata and a pattern-matching, i.e., monitoring
process termed consumption analysis. The latter directs attention on short
(attention proper) and also on long times scales (emotional biases). In this
cybernetic approach, discrepancies between expectations and actual activations
(e.g., sensory percepts) drive the general process of cognition and at the same
time steer the storage of new and adapted memory entries. Dedicated structures
in the human brain work in concert according to this scheme.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 07:39:54 GMT"
}
] | 1,709,856,000,000 | [
[
"Thomsen",
"Knud",
""
]
] |
2403.04343 | Yanqi Dai | Yanqi Dai, Dong Jing, Nanyi Fei, Zhiwu Lu | CoTBal: Comprehensive Task Balancing for Multi-Task Visual Instruction
Tuning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual instruction tuning is a key training stage of large multimodal models
(LMMs). Nevertheless, the common practice of indiscriminately mixing
instruction-following data from various tasks may result in suboptimal overall
performance due to different instruction formats and knowledge domains across
tasks. To mitigate this issue, we propose a novel Comprehensive Task Balancing
(CoTBal) algorithm for multi-task visual instruction tuning of LMMs. To our
knowledge, this is the first work that explores multi-task optimization in
visual instruction tuning. Specifically, we consider two key dimensions for
task balancing: (1) Inter-Task Contribution, the phenomenon where learning one
task potentially enhances the performance in other tasks, attributable to the
overlapping knowledge domains, and (2) Intra-Task Difficulty, which refers to
the learning difficulty within a single task. By quantifying these two
dimensions with performance-based metrics, task balancing is thus enabled by
assigning more weights to tasks that offer substantial contributions to others,
receive minimal contributions from others, and also have great intra-task
difficulties. Experiments show that our CoTBal leads to superior overall
performance in multi-task visual instruction tuning.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 09:11:16 GMT"
}
] | 1,709,856,000,000 | [
[
"Dai",
"Yanqi",
""
],
[
"Jing",
"Dong",
""
],
[
"Fei",
"Nanyi",
""
],
[
"Lu",
"Zhiwu",
""
]
] |
2403.04366 | Ang Li | Ang Li, Yiquan Wu, Yifei Liu, Fei Wu, Ming Cai, Kun Kuang | Enhancing Court View Generation with Knowledge Injection and Guidance | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Court View Generation (CVG) is a challenging task in the field of Legal
Artificial Intelligence (LegalAI), which aims to generate court views based on
the plaintiff claims and the fact descriptions. While Pretrained Language
Models (PLMs) have showcased their prowess in natural language generation,
their application to the complex, knowledge-intensive domain of CVG often
reveals inherent limitations. In this paper, we present a novel approach, named
Knowledge Injection and Guidance (KIG), designed to bolster CVG using PLMs. To
efficiently incorporate domain knowledge during the training stage, we
introduce a knowledge-injected prompt encoder for prompt tuning, thereby
reducing computational overhead. Moreover, to further enhance the model's
ability to utilize domain knowledge, we employ a generating navigator, which
dynamically guides the text generation process in the inference stage without
altering the model's architecture, making it readily transferable.
Comprehensive experiments on real-world data demonstrate the effectiveness of
our approach compared to several established baselines, especially in the
responsivity of claims, where it outperforms the best baseline by 11.87%.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 09:51:11 GMT"
}
] | 1,709,856,000,000 | [
[
"Li",
"Ang",
""
],
[
"Wu",
"Yiquan",
""
],
[
"Liu",
"Yifei",
""
],
[
"Wu",
"Fei",
""
],
[
"Cai",
"Ming",
""
],
[
"Kuang",
"Kun",
""
]
] |
2403.04449 | Natalie Kiesler | Imen Azaiz, Natalie Kiesler, Sven Strickroth | Feedback-Generation for Programming Exercises With GPT-4 | accepted at ITiCSE 2024, Milan, Italy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ever since Large Language Models (LLMs) and related applications have become
broadly available, several studies investigated their potential for assisting
educators and supporting students in higher education. LLMs such as Codex,
GPT-3.5, and GPT-4 have shown promising results in the context of large
programming courses, where students can benefit from feedback and hints if
provided timely and at scale. This paper explores the quality of GPT-4 Turbo's
generated output for prompts containing both the programming task specification
and a student's submission as input. Two assignments from an introductory
programming course were selected, and GPT-4 was asked to generate feedback for
55 randomly chosen, authentic student programming submissions. The output was
qualitatively analyzed regarding correctness, personalization, fault
localization, and other features identified in the material. Compared to prior
work and analyses of GPT-3.5, GPT-4 Turbo shows notable improvements. For
example, the output is more structured and consistent. GPT-4 Turbo can also
accurately identify invalid casing in student programs' output. In some cases,
the feedback also includes the output of the student program. At the same time,
inconsistent feedback was noted such as stating that the submission is correct
but an error needs to be fixed. The present work increases our understanding of
LLMs' potential, limitations, and how to integrate them into e-assessment
systems, pedagogical scenarios, and instructing students who are using
applications based on GPT-4.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 12:37:52 GMT"
}
] | 1,709,856,000,000 | [
[
"Azaiz",
"Imen",
""
],
[
"Kiesler",
"Natalie",
""
],
[
"Strickroth",
"Sven",
""
]
] |
2403.04471 | Elliott Thornley | Elliott Thornley | The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | I explain the shutdown problem: the problem of designing artificial agents
that (1) shut down when a shutdown button is pressed, (2) don't try to prevent
or cause the pressing of the shutdown button, and (3) otherwise pursue goals
competently. I prove three theorems that make the difficulty precise. These
theorems show that agents satisfying some innocuous-seeming conditions will
often try to prevent or cause the pressing of the shutdown button, even in
cases where it's costly to do so. And patience trades off against
shutdownability: the more patient an agent, the greater the costs that agent is
willing to incur to manipulate the shutdown button. I end by noting that these
theorems can guide our search for solutions.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 13:16:07 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Apr 2024 15:09:35 GMT"
}
] | 1,712,707,200,000 | [
[
"Thornley",
"Elliott",
""
]
] |
2403.04504 | Jaehyun Lee | Jaehyun Lee, SeongKu Kang, Hwanjo Yu | Improving Matrix Completion by Exploiting Rating Ordinality in Graph
Neural Networks | 4 pages, 2 figures, 3 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matrix completion is an important area of research in recommender systems.
Recent methods view a rating matrix as a user-item bi-partite graph with
labeled edges denoting observed ratings and predict the edges between the user
and item nodes by using the graph neural network (GNN). Despite their
effectiveness, they treat each rating type as an independent relation type and
thus cannot sufficiently consider the ordinal nature of the ratings. In this
paper, we explore a new approach to exploit rating ordinality for GNN, which
has not been studied well in the literature. We introduce a new method, called
ROGMC, to leverage Rating Ordinality in GNN-based Matrix Completion. It uses
cumulative preference propagation to directly incorporate rating ordinality in
GNN's message passing, allowing for users' stronger preferences to be more
emphasized based on inherent orders of rating types. This process is
complemented by interest regularization which facilitates preference learning
using the underlying interest information. Our extensive experiments show that
ROGMC consistently outperforms the existing strategies of using rating types
for GNN. We expect that our attempt to explore the feasibility of utilizing
rating ordinality for GNN may stimulate further research in this direction.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 14:04:33 GMT"
}
] | 1,710,115,200,000 | [
[
"Lee",
"Jaehyun",
""
],
[
"Kang",
"SeongKu",
""
],
[
"Yu",
"Hwanjo",
""
]
] |
2403.04511 | Nicholas Sukiennik | Nicholas Sukiennik, Chen Gao, Nian Li | Uncovering the Deep Filter Bubble: Narrow Exposure in Short-Video
Recommendation | accepted to WWW 2024 | null | 10.1145/3589334.3648159 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Filter bubbles have been studied extensively within the context of online
content platforms due to their potential to cause undesirable outcomes such as
user dissatisfaction or polarization. With the rise of short-video platforms,
the filter bubble has been given extra attention because these platforms rely
on an unprecedented use of the recommender system to provide relevant content.
In our work, we investigate the deep filter bubble, which refers to the user
being exposed to narrow content within their broad interests. We accomplish
this using one-year interaction data from a top short-video platform in China,
which includes hierarchical data with three levels of categories for each
video. We formalize our definition of a "deep" filter bubble within this
context, and then explore various correlations within the data: first
understanding the evolution of the deep filter bubble over time, and later
revealing some of the factors that give rise to this phenomenon, such as
specific categories, user demographics, and feedback type. We observe that
while the overall proportion of users in a filter bubble remains largely
constant over time, the depth composition of their filter bubble changes. In
addition, we find that some demographic groups have a higher likelihood of
seeing narrower content, and that implicit feedback signals can lead to less
bubble formation. Finally, we propose some ways in which recommender systems can be
designed to reduce the risk of a user getting caught in a bubble.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 14:14:40 GMT"
}
] | 1,709,856,000,000 | [
[
"Sukiennik",
"Nicholas",
""
],
[
"Gao",
"Chen",
""
],
[
"Li",
"Nian",
""
]
] |
2403.04541 | Irfan Kareem | Manuel Borroto, Irfan Kareem, Francesco Ricca | Towards Automatic Composition of ASP Programs from Natural Language
Specifications | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper takes the first step towards automating the composition of Answer
Set Programming (ASP) specifications. In particular, the following
contributions are provided: (i) A dataset focused on graph-related problem
specifications, designed to develop and assess tools for ASP automatic coding;
(ii) A two-step architecture, implemented in the NL2ASP tool, for generating
ASP programs from natural language specifications. NL2ASP uses neural machine
translation to transform natural language into Controlled Natural Language
(CNL) statements. Subsequently, CNL statements are converted into ASP code
using the CNL2ASP tool. An experiment confirms the viability of the approach.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 14:36:52 GMT"
}
] | 1,709,856,000,000 | [
[
"Borroto",
"Manuel",
""
],
[
"Kareem",
"Irfan",
""
],
[
"Ricca",
"Francesco",
""
]
] |
2403.04571 | Nikolay Malkin | Yoshua Bengio, Nikolay Malkin | Machine learning and information theory concepts towards an AI
Mathematician | To appear in the Bulletin of the AMS, 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The current state-of-the-art in artificial intelligence is impressive,
especially in terms of mastery of language, but not so much in terms of
mathematical reasoning. What could be missing? Can we learn something useful
about that gap from how the brains of mathematicians go about their craft? This
essay builds on the idea that current deep learning mostly succeeds at system 1
abilities -- which correspond to our intuition and habitual behaviors -- but
still lacks something important regarding system 2 abilities -- which include
reasoning and robust uncertainty estimation. It takes an
information-theoretical posture to ask questions about what constitutes an
interesting mathematical statement, which could guide future work in crafting
an AI mathematician. The focus is not on proving a given theorem but on
discovering new and interesting conjectures. The central hypothesis is that a
desirable body of theorems better summarizes the set of all provable
statements, for example by having a small description length while at the same
time being close (in terms of number of derivation steps) to many provable
statements.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 15:12:06 GMT"
}
] | 1,709,856,000,000 | [
[
"Bengio",
"Yoshua",
""
],
[
"Malkin",
"Nikolay",
""
]
] |
2403.04588 | L\'eopold Mayti\'e | L\'eopold Mayti\'e, Benjamin Devillers, Alexandre Arnold, Rufin
VanRullen | Zero-shot cross-modal transfer of Reinforcement Learning policies
through a Global Workspace | Under review in a conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Humans perceive the world through multiple senses, enabling them to create a
comprehensive representation of their surroundings and to generalize
information across domains. For instance, when a textual description of a scene
is given, humans can mentally visualize it. In fields like robotics and
Reinforcement Learning (RL), agents can also access information about the
environment through multiple sensors; yet redundancy and complementarity
between sensors is difficult to exploit as a source of robustness (e.g. against
sensor failure) or generalization (e.g. transfer across domains). Prior
research demonstrated that a robust and flexible multimodal representation can
be efficiently constructed based on the cognitive science notion of a 'Global
Workspace': a unique representation trained to combine information across
modalities, and to broadcast its signal back to each modality. Here, we explore
whether such a brain-inspired multimodal representation could be advantageous
for RL agents. First, we train a 'Global Workspace' to exploit information
collected about the environment via two input modalities (a visual input, or an
attribute vector representing the state of the agent and/or its environment).
Then, we train an RL agent policy using this frozen Global Workspace. In two
distinct environments and tasks, our results reveal the model's ability to
perform zero-shot cross-modal transfer between input modalities, i.e. to apply
to image inputs a policy previously trained on attribute vectors (and
vice-versa), without additional training or fine-tuning. Variants and ablations
of the full Global Workspace (including a CLIP-like multimodal representation
trained via contrastive learning) did not display the same generalization
abilities.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 15:35:29 GMT"
}
] | 1,709,856,000,000 | [
[
"Maytié",
"Léopold",
""
],
[
"Devillers",
"Benjamin",
""
],
[
"Arnold",
"Alexandre",
""
],
[
"VanRullen",
"Rufin",
""
]
] |
2403.04859 | Akansh Maurya | Akansh Maurya, Hewan Shrestha, Mohammad Munem Shahriar | Self-Supervision in Time for Satellite Images(S3-TSS): A novel method of
SSL technique in Satellite images | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the limited availability of labeled data with various atmospheric
conditions in remote sensing images, it seems useful to work with
self-supervised algorithms. A few pretext-based algorithms, such as those based
on rotation, spatial context, and jigsaw puzzles, are not appropriate for satellite
images. Often, satellite images have a higher temporal frequency. So, the
temporal dimension of remote sensing data provides natural augmentation without
requiring us to create artificial augmentation of images. Here, we propose
S3-TSS, a novel method of self-supervised learning technique that leverages
natural augmentation occurring in temporal dimension. We compare our results
with current state-of-the-art methods and also perform various experiments. We
observed that our method was able to perform better than baseline SeCo in four
downstream datasets. Code for our work can be found here:
https://github.com/hewanshrestha/Why-Self-Supervision-in-Time
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 19:16:17 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Mar 2024 09:32:20 GMT"
}
] | 1,710,201,600,000 | [
[
"Maurya",
"Akansh",
""
],
[
"Shrestha",
"Hewan",
""
],
[
"Shahriar",
"Mohammad Munem",
""
]
] |
2403.04866 | Marco D'Alessandro | Marco D Alessandro, Enrique Calabr\'es, Mikel Elkano | A Modular End-to-End Multimodal Learning Method for Structured and
Unstructured Data | 8 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodal learning is a rapidly growing research field that has
revolutionized multitasking and generative modeling in AI. While much of the
research has focused on dealing with unstructured data (e.g., language, images,
audio, or video), structured data (e.g., tabular data, time series, or signals)
has received less attention. However, many industry-relevant use cases involve
or can benefit from both types of data. In this work, we propose a
modular, end-to-end multimodal learning method called MAGNUM, which can
natively handle both structured and unstructured data. MAGNUM is flexible
enough to employ any specialized unimodal module to extract, compress, and fuse
information from all available modalities.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 19:29:36 GMT"
}
] | 1,710,115,200,000 | [
[
"Alessandro",
"Marco D",
""
],
[
"Calabrés",
"Enrique",
""
],
[
"Elkano",
"Mikel",
""
]
] |
2403.04893 | Shayne Longpre | Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi
Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng-Xin
Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander
Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Sandy Pentland,
Arvind Narayanan, Percy Liang, Peter Henderson | A Safe Harbor for AI Evaluation and Red Teaming | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Independent evaluation and red teaming are critical for identifying the risks
posed by generative AI systems. However, the terms of service and enforcement
strategies used by prominent AI companies to deter model misuse create
disincentives for good faith safety evaluations. This causes some researchers to
fear that conducting such research or releasing their findings will result in
account suspensions or legal reprisal. Although some companies offer researcher
access programs, they are an inadequate substitute for independent research
access, as they have limited community representation, receive inadequate
funding, and lack independence from corporate incentives. We propose that major
AI developers commit to providing a legal and technical safe harbor,
indemnifying public interest safety research and protecting it from the threat
of account suspensions or legal reprisal. These proposals emerged from our
collective experience conducting safety, privacy, and trustworthiness research
on generative AI systems, where norms and incentives could be better aligned
with public interests, without exacerbating model misuse. We believe these
commitments are a necessary step towards more inclusive and unimpeded community
efforts to tackle the risks of generative AI.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 20:55:08 GMT"
}
] | 1,710,115,200,000 | [
[
"Longpre",
"Shayne",
""
],
[
"Kapoor",
"Sayash",
""
],
[
"Klyman",
"Kevin",
""
],
[
"Ramaswami",
"Ashwin",
""
],
[
"Bommasani",
"Rishi",
""
],
[
"Blili-Hamelin",
"Borhane",
""
],
[
"Huang",
"Yangsibo",
""
],
[
"Skowron",
"Aviya",
""
],
[
"Yong",
"Zheng-Xin",
""
],
[
"Kotha",
"Suhas",
""
],
[
"Zeng",
"Yi",
""
],
[
"Shi",
"Weiyan",
""
],
[
"Yang",
"Xianjun",
""
],
[
"Southen",
"Reid",
""
],
[
"Robey",
"Alexander",
""
],
[
"Chao",
"Patrick",
""
],
[
"Yang",
"Diyi",
""
],
[
"Jia",
"Ruoxi",
""
],
[
"Kang",
"Daniel",
""
],
[
"Pentland",
"Sandy",
""
],
[
"Narayanan",
"Arvind",
""
],
[
"Liang",
"Percy",
""
],
[
"Henderson",
"Peter",
""
]
] |
2403.04957 | Xiaogeng Liu | Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao | Automatic and Universal Prompt Injection Attacks against Large Language
Models | Pre-print, code is available at
https://github.com/SheltonLiu-N/Universal-Prompt-Injection | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) excel in processing and generating human
language, powered by their ability to interpret and follow instructions.
However, their capabilities can be exploited through prompt injection attacks.
These attacks manipulate LLM-integrated applications into producing responses
aligned with the attacker's injected content, deviating from the user's actual
requests. The substantial risks posed by these attacks underscore the need for
a thorough understanding of the threats. Yet, research in this area faces
challenges due to the lack of a unified goal for such attacks and their
reliance on manually crafted prompts, complicating comprehensive assessments of
prompt injection robustness. We introduce a unified framework for understanding
the objectives of prompt injection attacks and present an automated
gradient-based method for generating highly effective and universal prompt
injection data, even in the face of defensive measures. With only five training
samples (0.3% relative to the test data), our attack can achieve superior
performance compared with baselines. Our findings emphasize the importance of
gradient-based testing, which can avoid overestimation of robustness,
especially for defense mechanisms.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 23:46:20 GMT"
}
] | 1,710,115,200,000 | [
[
"Liu",
"Xiaogeng",
""
],
[
"Yu",
"Zhiyuan",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Zhang",
"Ning",
""
],
[
"Xiao",
"Chaowei",
""
]
] |
2403.05000 | Pengcheng Li | Jianzong Wang, Pengcheng Li, Xulong Zhang, Ning Cheng, Jing Xiao | Medical Speech Symptoms Classification via Disentangled Representation | Accepted by the 27th International Conference on Computer Supported
Cooperative Work in Design (CSCWD 2024) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intent is defined for understanding spoken language in existing works. Both
textual features and acoustic features involved in medical speech contain
intent, which is important for symptomatic diagnosis. In this paper, we propose
a medical speech classification model named DRSC that automatically learns to
disentangle intent and content representations from textual-acoustic data for
classification. The intent representations of the text domain and the
Mel-spectrogram domain are extracted via intent encoders, and then the
reconstructed text feature and the Mel-spectrogram feature are obtained through
two exchanges. After combining the intent from two domains into a joint
representation, the integrated intent representation is fed into a decision
layer for classification. Experimental results show that our model obtains an
average accuracy rate of 95% in detecting 25 different medical symptoms.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 02:42:34 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Mar 2024 01:51:37 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Apr 2024 01:47:37 GMT"
}
] | 1,714,521,600,000 | [
[
"Wang",
"Jianzong",
""
],
[
"Li",
"Pengcheng",
""
],
[
"Zhang",
"Xulong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
]
] |
2403.05025 | Dingkang Yang | Dingkang Yang, Dongling Xiao, Ke Li, Yuzheng Wang, Zhaoyu Chen, Jinjie
Wei, Lihua Zhang | Towards Multimodal Human Intention Understanding Debiasing via
Subject-Deconfounding | 14 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal intention understanding (MIU) is an indispensable component of
human expression analysis (e.g., sentiment or humor) from heterogeneous
modalities, including visual postures, linguistic contents, and acoustic
behaviors. Existing works invariably focus on designing sophisticated
structures or fusion strategies to achieve impressive improvements.
Unfortunately, they all suffer from the subject variation problem due to data
distribution discrepancies among subjects. Concretely, MIU models are easily
misled by distinct subjects with different expression customs and
characteristics in the training data to learn subject-specific spurious
correlations, significantly limiting performance and generalizability across
uninitiated subjects. Motivated by this observation, we introduce a
recapitulative causal graph to formulate the MIU procedure and analyze the
confounding effect of subjects. Then, we propose SuCI, a simple yet effective
causal intervention module to disentangle the impact of subjects acting as
unobserved confounders and achieve model training via true causal effects. As a
plug-and-play component, SuCI can be widely applied to most methods that seek
unbiased predictions. Comprehensive experiments on several MIU benchmarks
clearly demonstrate the effectiveness of the proposed module.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 04:03:54 GMT"
}
] | 1,710,115,200,000 | [
[
"Yang",
"Dingkang",
""
],
[
"Xiao",
"Dongling",
""
],
[
"Li",
"Ke",
""
],
[
"Wang",
"Yuzheng",
""
],
[
"Chen",
"Zhaoyu",
""
],
[
"Wei",
"Jinjie",
""
],
[
"Zhang",
"Lihua",
""
]
] |
2403.05029 | Chengyang Zhang | Chengyang Zhang, Yong Zhang, Qitan Shao, Jiangtao Feng, Bo Li, Yisheng
Lv, Xinglin Piao, Baocai Yin | BjTT: A Large-scale Multimodal Dataset for Traffic Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic prediction is one of the most significant foundations in Intelligent
Transportation Systems (ITS). Traditional traffic prediction methods rely only
on historical traffic data to predict traffic trends and face two main
challenges: 1) insensitivity to unusual events, and 2) limited performance in
long-term prediction. In this work, we explore how generative models combined
with text describing the traffic system can be applied for traffic generation,
and name the task Text-to-Traffic Generation (TTG). The key challenge of the
TTG task is how to associate text with the spatial structure of the road
network and traffic data for generating traffic situations. To this end, we
propose ChatTraffic, the first diffusion model for text-to-traffic generation.
To guarantee the consistency between synthetic and real data, we augment a
diffusion model with the Graph Convolutional Network (GCN) to extract spatial
correlations of traffic data. In addition, we construct a large dataset
containing text-traffic pairs for the TTG task. We benchmarked our model
qualitatively and quantitatively on the released dataset. The experimental
results indicate that ChatTraffic can generate realistic traffic situations
from the text. Our code and dataset are available at
https://github.com/ChyaZhang/ChatTraffic.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 04:19:56 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Mar 2024 08:10:47 GMT"
}
] | 1,710,460,800,000 | [
[
"Zhang",
"Chengyang",
""
],
[
"Zhang",
"Yong",
""
],
[
"Shao",
"Qitan",
""
],
[
"Feng",
"Jiangtao",
""
],
[
"Li",
"Bo",
""
],
[
"Lv",
"Yisheng",
""
],
[
"Piao",
"Xinglin",
""
],
[
"Yin",
"Baocai",
""
]
] |
2403.05112 | Tanvi Verma | Tanvi Verma, Linh Le Dinh, Nicholas Tan, Xinxing Xu, Chingyu Cheng,
Yong Liu | RLPeri: Accelerating Visual Perimetry Test with Reinforcement Learning
and Convolutional Feature Extraction | Published at AAAI-24 | The 38th Annual AAAI Conference on Artificial Intelligence, 2024 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Visual perimetry is an important eye examination that helps detect vision
problems caused by ocular or neurological conditions. During the test, a
patient's gaze is fixed at a specific location while light stimuli of varying
intensities are presented in central and peripheral vision. Based on the
patient's responses to the stimuli, the visual field mapping and sensitivity
are determined. However, maintaining high levels of concentration throughout
the test can be challenging for patients, leading to increased examination
times and decreased accuracy.
In this work, we present RLPeri, a reinforcement learning-based approach to
optimize visual perimetry testing. By determining the optimal sequence of
locations and initial stimulus values, we aim to reduce the examination time
without compromising accuracy. Additionally, we incorporate reward shaping
techniques to further improve the testing performance. To monitor the patient's
responses over time during testing, we represent the test's state as a pair of
3D matrices. We apply two different convolutional kernels to extract spatial
features across locations as well as features across different stimulus values
for each location. Through experiments, we demonstrate that our approach
results in a 10-20% reduction in examination time while maintaining the
accuracy as compared to state-of-the-art methods. With the presented approach,
we aim to make visual perimetry testing more efficient and patient-friendly,
while still providing accurate results.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 07:19:43 GMT"
}
] | 1,710,115,200,000 | [
[
"Verma",
"Tanvi",
""
],
[
"Dinh",
"Linh Le",
""
],
[
"Tan",
"Nicholas",
""
],
[
"Xu",
"Xinxing",
""
],
[
"Cheng",
"Chingyu",
""
],
[
"Liu",
"Yong",
""
]
] |
2403.05130 | Wangtao Sun | Wangtao Sun, Shizhu He, Jun Zhao, Kang Liu | From Chain to Tree: Refining Chain-like Rules into Tree-like Rules on
Knowledge Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With good explanatory power and controllability, rule-based methods play an
important role in many tasks such as knowledge reasoning and decision support.
However, existing studies primarily focused on learning chain-like rules, which
limit their semantic expressions and accurate prediction abilities. As a
result, chain-like rules usually fire on the incorrect grounding values,
producing inaccurate or even erroneous reasoning results. In this paper, we
propose the concept of tree-like rules on knowledge graphs to expand the
application scope and improve the reasoning ability of rule-based methods.
Meanwhile, we propose an effective framework for refining chain-like rules into
tree-like rules. Experimental comparisons on four public datasets show that the
proposed framework can easily adapt to other chain-like rule induction methods
and the refined tree-like rules consistently achieve better performances than
chain-like rules on link prediction. The data and code of this paper are
available at https://anonymous.4open.science/r/tree-rule-E3CD/.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 07:55:42 GMT"
}
] | 1,710,115,200,000 | [
[
"Sun",
"Wangtao",
""
],
[
"He",
"Shizhu",
""
],
[
"Zhao",
"Jun",
""
],
[
"Liu",
"Kang",
""
]
] |
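To make the chain-versus-tree distinction in the record above concrete, the following minimal Python sketch grounds a chain-like rule and a tree-like rule over a toy knowledge graph; the entities, relations, and function names are invented for this illustration and do not come from the paper.

from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

KG: Set[Triple] = {
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "berlin"),
    ("alice", "speaks", "german"),
    ("bob", "works_at", "acme"),
}

def chain_rule(kg: Set[Triple], x: str) -> Set[str]:
    """Chain-like rule: works_at(x, c) AND located_in(c, z) => lives_in(x, z)."""
    return {z
            for (s, r, c) in kg if s == x and r == "works_at"
            for (c2, r2, z) in kg if c2 == c and r2 == "located_in"}

def tree_rule(kg: Set[Triple], x: str) -> Set[str]:
    """Tree-like rule: the same chain refined with a branch atom speaks(x, german),
    which prunes groundings on which the chain alone would fire incorrectly."""
    if (x, "speaks", "german") not in kg:
        return set()
    return chain_rule(kg, x)

print(chain_rule(KG, "bob"))    # {'berlin'}, the chain also fires for bob
print(tree_rule(KG, "bob"))     # set(), the branch condition blocks it
print(tree_rule(KG, "alice"))   # {'berlin'}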
2403.05229 | Nan Liu | Siqi Li, Yuqing Shang, Ziwen Wang, Qiming Wu, Chuan Hong, Yilin Ning,
Di Miao, Marcus Eng Hock Ong, Bibhas Chakraborty, Nan Liu | Developing Federated Time-to-Event Scores Using Heterogeneous Real-World
Survival Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Survival analysis serves as a fundamental component in numerous healthcare
applications, where the determination of the time to specific events (such as
the onset of a certain disease or death) for patients is crucial for clinical
decision-making. Scoring systems are widely used for swift and efficient risk
prediction. However, existing methods for constructing survival scores presume
that data originates from a single source, posing privacy challenges in
collaborations with multiple data owners. We propose a novel framework for
building federated scoring systems for multi-site survival outcomes, ensuring
both privacy and communication efficiency. We applied our approach to sites
with heterogeneous survival data originating from emergency departments in
Singapore and the United States. Additionally, we independently developed local
scores at each site. In testing datasets from each participant site, our
proposed federated scoring system consistently outperformed all local models,
evidenced by higher integrated area under the receiver operating characteristic
curve (iAUC) values, with a maximum improvement of 11.6%. Additionally, the
federated score's time-dependent AUC(t) values showed advantages over local
scores, exhibiting narrower confidence intervals (CIs) across most time points.
The model developed through our proposed method exhibits effective performance
on each local site, signifying noteworthy implications for healthcare research.
Sites participating in our proposed federated scoring model training gained
benefits by acquiring survival models with enhanced prediction accuracy and
efficiency. This study demonstrates the effectiveness of our privacy-preserving
federated survival score generation framework and its applicability to
real-world heterogeneous survival data.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 11:32:00 GMT"
}
] | 1,710,115,200,000 | [
[
"Li",
"Siqi",
""
],
[
"Shang",
"Yuqing",
""
],
[
"Wang",
"Ziwen",
""
],
[
"Wu",
"Qiming",
""
],
[
"Hong",
"Chuan",
""
],
[
"Ning",
"Yilin",
""
],
[
"Miao",
"Di",
""
],
[
"Ong",
"Marcus Eng Hock",
""
],
[
"Chakraborty",
"Bibhas",
""
],
[
"Liu",
"Nan",
""
]
] |
2403.05260 | Hui Liu | Wei Duan, Hui Liu | Predicting Single-cell Drug Sensitivity by Adaptive Weighted Feature for
Adversarial Multi-source Domain Adaptation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The development of single-cell sequencing technology has promoted the
generation of a large amount of single-cell transcriptional profiles, providing
valuable opportunities to explore drug-resistant cell subpopulations in a
tumor. However, drug sensitivity data at the single-cell level are still scarce
to date, posing an urgent and highly challenging task for the computational
prediction of drug sensitivity for individual cells. This paper proposed
scAdaDrug, a multi-source adaptive weighting model to predict single-cell drug
sensitivity. We used an autoencoder to extract domain-invariant features
related to drug sensitivity from multiple source domains by exploiting
adversarial domain adaptation. Especially, we introduced an adaptive weight
generator to produce importance-aware and mutual independent weights, which
could adaptively modulate the embedding of each sample in dimension-level for
both source and target domains. Extensive experimental results showed that our
model achieved state-of-the-art performance in predicting drug sensitivity on
single-cell datasets, as well as on cell line and patient datasets.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 12:31:03 GMT"
}
] | 1,710,115,200,000 | [
[
"Duan",
"Wei",
""
],
[
"Liu",
"Hui",
""
]
] |
2403.05265 | Zinan Zeng | Zinan Zeng, Sen Ye, Zijian Cai, Heng Wang, Yuhan Liu, Haokai Zhang,
Minnan Luo | MMoE: Robust Spoiler Detection with Multi-modal Information and
Domain-aware Mixture-of-Experts | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online movie review websites are valuable for information and discussion
about movies. However, the massive spoiler reviews detract from the
movie-watching experience, making spoiler detection an important task. Previous
methods simply focus on reviews' text content, ignoring the heterogeneity of
information in the platform. For instance, the metadata and the corresponding
user's information of a review could be helpful. Besides, the spoiler language
of movie reviews tends to be genre-specific, thus posing a domain
generalization challenge for existing methods. To this end, we propose MMoE, a
multi-modal network that utilizes information from multiple modalities to
facilitate robust spoiler detection and adopts Mixture-of-Experts to enhance
domain generalization. MMoE first extracts graph, text, and meta features from
the user-movie network, the review's textual content, and the review's metadata
respectively. To handle genre-specific spoilers, we then adopt
Mixture-of-Experts architecture to process information in three modalities to
promote robustness. Finally, we use an expert fusion layer to integrate the
features from different perspectives and make predictions based on the fused
embedding. Experiments demonstrate that MMoE achieves state-of-the-art
performance on two widely-used spoiler detection datasets, surpassing previous
SOTA methods by 2.56% and 8.41% in terms of accuracy and F1-score. Further
experiments also demonstrate MMoE's superiority in robustness and
generalization.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 12:42:04 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Mar 2024 03:43:54 GMT"
}
] | 1,710,460,800,000 | [
[
"Zeng",
"Zinan",
""
],
[
"Ye",
"Sen",
""
],
[
"Cai",
"Zijian",
""
],
[
"Wang",
"Heng",
""
],
[
"Liu",
"Yuhan",
""
],
[
"Zhang",
"Haokai",
""
],
[
"Luo",
"Minnan",
""
]
] |
2403.05307 | Jinyang Li | Jinyang Li, Nan Huo, Yan Gao, Jiayi Shi, Yingxiu Zhao, Ge Qu, Yurong
Wu, Chenhao Ma, Jian-Guang Lou, Reynold Cheng | Tapilot-Crossing: Benchmarking and Evolving LLMs Towards Interactive
Data Analysis Agents | 30 pages, 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Interactive Data Analysis, the collaboration between humans and LLM agents,
enables real-time data exploration for informed decision-making. The challenges
and costs of collecting realistic interactive logs for data analysis hinder the
quantitative evaluation of Large Language Model (LLM) agents in this task. To
mitigate this issue, we introduce Tapilot-Crossing, a new benchmark to evaluate
LLM agents on interactive data analysis. Tapilot-Crossing contains 1024
interactions, covering 4 practical scenarios: Normal, Action, Private, and
Private Action. Notably, Tapilot-Crossing is constructed by an economical
multi-agent environment, Decision Company, with few human efforts. We evaluate
popular and advanced LLM agents in Tapilot-Crossing, which underscores the
challenges of interactive data analysis. Furthermore, we propose Adaptive
Interaction Reflection (AIR), a self-generated reflection strategy that guides
LLM agents to learn from successful history. Experiments demonstrate that AIR
can evolve LLMs into effective interactive data analysis agents, achieving a
relative performance improvement of up to 44.5%.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 13:34:20 GMT"
}
] | 1,710,115,200,000 | [
[
"Li",
"Jinyang",
""
],
[
"Huo",
"Nan",
""
],
[
"Gao",
"Yan",
""
],
[
"Shi",
"Jiayi",
""
],
[
"Zhao",
"Yingxiu",
""
],
[
"Qu",
"Ge",
""
],
[
"Wu",
"Yurong",
""
],
[
"Ma",
"Chenhao",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Cheng",
"Reynold",
""
]
] |
2403.05407 | Abdolmahdi Bagheri | Abdolmahdi Bagheri, Mahdi Dehshiri, Babak Nadjar Araabi, Alireza
Akhondi Asl | Algorithmic Identification of Essential Exogenous Nodes for Causal
Sufficiency in Brain Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the investigation of any causal mechanisms, such as the brain's causal
networks, the assumption of causal sufficiency plays a critical role. Notably,
neglecting this assumption can result in significant errors, a fact that is
often disregarded in the causal analysis of brain networks. In this study, we
propose an algorithmic identification approach for determining essential
exogenous nodes that satisfy the critical need for causal sufficiency to adhere
to it in such inquiries. Our approach consists of three main steps: First, by
capturing the essence of the Peter-Clark (PC) algorithm, we conduct
independence tests for pairs of regions within a network, as well as for the
same pairs conditioned on nodes from other networks. Next, we distinguish
candidate confounders by analyzing the differences between the conditional and
unconditional results, using the Kolmogorov-Smirnov test. Subsequently, we
utilize Non-Factorized identifiable Variational Autoencoders (NF-iVAE) along
with the Correlation Coefficient index (CCI) metric to identify the confounding
variables within these candidate nodes. Applying our method to the Human
Connectome Project (HCP) movie-watching task data, we demonstrate that while
interactions exist between dorsal and ventral regions, only dorsal regions
serve as confounders for the visual networks, and vice versa. These findings
align consistently with those resulting from the neuroscientific perspective.
Finally, we show the reliability of our results by testing 30 independent runs
for NF-iVAE initialization.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 16:05:47 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Mar 2024 14:35:35 GMT"
}
] | 1,710,720,000,000 | [
[
"Bagheri",
"Abdolmahdi",
""
],
[
"Dehshiri",
"Mahdi",
""
],
[
"Araabi",
"Babak Nadjar",
""
],
[
"Asl",
"Alireza Akhondi",
""
]
] |
2403.05525 | Haoyu Lu | Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu,
Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi
Deng, Hanwei Xu, Zhenda Xie, Chong Ruan | DeepSeek-VL: Towards Real-World Vision-Language Understanding | https://github.com/deepseek-ai/DeepSeek-VL | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed
for real-world vision and language understanding applications. Our approach is
structured around three key dimensions:
We strive to ensure our data is diverse, scalable, and extensively covers
real-world scenarios including web screenshots, PDFs, OCR, charts, and
knowledge-based content, aiming for a comprehensive representation of practical
contexts. Further, we create a use case taxonomy from real user scenarios and
construct an instruction tuning dataset accordingly. The fine-tuning with this
dataset substantially improves the model's user experience in practical
applications. Considering efficiency and the demands of most real-world
scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently
processes high-resolution images (1024 x 1024), while maintaining a relatively
low computational overhead. This design choice ensures the model's ability to
capture critical semantic and detailed information across various visual tasks.
We posit that a proficient Vision-Language Model should, foremost, possess
strong language abilities. To ensure the preservation of LLM capabilities
during pretraining, we investigate an effective VL pretraining strategy by
integrating LLM training from the beginning and carefully managing the
competitive dynamics observed between vision and language modalities.
The DeepSeek-VL family (both 1.3B and 7B models) showcases superior user
experiences as a vision-language chatbot in real-world applications, achieving
state-of-the-art or competitive performance across a wide range of
visual-language benchmarks at the same model size while maintaining robust
performance on language-centric benchmarks. We have made both 1.3B and 7B
models publicly accessible to foster innovations based on this foundation
model.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 18:46:00 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Mar 2024 16:47:41 GMT"
}
] | 1,710,201,600,000 | [
[
"Lu",
"Haoyu",
""
],
[
"Liu",
"Wen",
""
],
[
"Zhang",
"Bo",
""
],
[
"Wang",
"Bingxuan",
""
],
[
"Dong",
"Kai",
""
],
[
"Liu",
"Bo",
""
],
[
"Sun",
"Jingxiang",
""
],
[
"Ren",
"Tongzheng",
""
],
[
"Li",
"Zhuoshu",
""
],
[
"Yang",
"Hao",
""
],
[
"Sun",
"Yaofeng",
""
],
[
"Deng",
"Chengqi",
""
],
[
"Xu",
"Hanwei",
""
],
[
"Xie",
"Zhenda",
""
],
[
"Ruan",
"Chong",
""
]
] |
2403.05632 | Hongyi Guo | Hongyi Guo, Zhihan Liu, Yufeng Zhang, Zhaoran Wang | Can Large Language Models Play Games? A Case Study of A Self-Play
Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) harness extensive data from the Internet,
storing a broad spectrum of prior knowledge. While LLMs have proven beneficial
as decision-making aids, their reliability is hampered by limitations in
reasoning, hallucination phenomenon, and so on. On the other hand, Monte-Carlo
Tree Search (MCTS) is a heuristic search algorithm that provides reliable
decision-making solutions, achieved through recursive rollouts and self-play.
However, the effectiveness of MCTS relies heavily on heuristic pruning and
external value functions, particularly in complex decision scenarios. This work
introduces an innovative approach that bolsters LLMs with MCTS self-play to
efficiently resolve deterministic turn-based zero-sum games (DTZG), such as
chess and go, without the need for additional training. Specifically, we
utilize LLMs as both action pruners and proxies for value functions without the
need for additional training. We theoretically prove that the suboptimality of
the estimated value in our proposed method scales with $\tilde{\mathcal
O}\Bigl(\frac{|\tilde {\mathcal A}|}{\sqrt{N}} + \epsilon_\mathrm{pruner} +
\epsilon_\mathrm{critic}\Bigr)$, where \(N\) is the number of simulations,
$|\tilde {\mathcal A}|$ is the cardinality of the pruned action space by LLM,
and $\epsilon_\mathrm{pruner}$ and $\epsilon_\mathrm{critic}$ quantify the
errors incurred by adopting LLMs as action space pruner and value function
proxy, respectively. Our experiments in chess and go demonstrate the capability
of our method to address challenges beyond the scope of MCTS and improve the
performance of the direct application of LLMs.
| [
{
"version": "v1",
"created": "Fri, 8 Mar 2024 19:16:29 GMT"
}
] | 1,710,201,600,000 | [
[
"Guo",
"Hongyi",
""
],
[
"Liu",
"Zhihan",
""
],
[
"Zhang",
"Yufeng",
""
],
[
"Wang",
"Zhaoran",
""
]
] |
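The division of labour described in the record above, an external model that prunes the action space and another that replaces rollouts as a value proxy, can be sketched with a generic UCT loop. This is a simplified illustration, not the authors' implementation: the prune_actions and estimate_value callables stand in for LLM queries and all names are assumptions of this example.

import math
import random
from typing import Callable, Hashable

class Node:
    def __init__(self, state: Hashable, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> child Node
        self.visits = 0
        self.value_sum = 0.0

    def mean(self) -> float:
        return self.value_sum / self.visits if self.visits else 0.0

def uct_search(root_state,
               legal_actions: Callable,    # state -> list of actions
               transition: Callable,       # (state, action) -> next state
               is_terminal: Callable,      # state -> bool
               terminal_value: Callable,   # state -> value for the player to move
               prune_actions: Callable,    # (state, actions) -> reduced list (LLM stand-in)
               estimate_value: Callable,   # state -> value for the player to move (LLM stand-in)
               n_simulations: int = 200,
               c_uct: float = 1.4):
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        # Selection: walk down already-expanded nodes with a UCB-style rule.
        while node.children and not is_terminal(node.state):
            node = max(node.children.values(),
                       key=lambda ch: ch.mean()
                       + c_uct * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1)))
        # Expansion: only over the pruned action set.
        if not is_terminal(node.state):
            for a in prune_actions(node.state, legal_actions(node.state)):
                if a not in node.children:
                    node.children[a] = Node(transition(node.state, a), parent=node)
            if node.children:
                node = random.choice(list(node.children.values()))
        # Evaluation: the critic proxy replaces a random rollout.
        value = terminal_value(node.state) if is_terminal(node.state) else estimate_value(node.state)
        # Backpropagation: each node's statistics are kept from the perspective of
        # the player who chooses it at its parent, so the sign flips at every level.
        while node is not None:
            node.visits += 1
            value = -value
            node.value_sum += value
            node = node.parent
    # Return the most visited root action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]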
2403.05801 | Haotian Zheng | Chen Li, Haotian Zheng, Yiping Sun, Cangqing Wang, Liqiang Yu, Che
Chang, Xinyu Tian, Bo Liu | Enhancing Multi-Hop Knowledge Graph Reasoning through Reward Shaping
Techniques | This paper has been accepted by the 2024 5th International Seminar on
Artificial Intelligence, Networking and Information Technology (AINIT 2024) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the realm of computational knowledge representation, Knowledge Graph
Reasoning (KG-R) stands at the forefront of facilitating sophisticated
inferential capabilities across multifarious domains. The quintessence of this
research elucidates the employment of reinforcement learning (RL) strategies,
notably the REINFORCE algorithm, to navigate the intricacies inherent in
multi-hop KG-R. This investigation critically addresses the prevalent
challenges introduced by the inherent incompleteness of Knowledge Graphs (KGs),
which frequently results in erroneous inferential outcomes, manifesting as both
false negatives and misleading positives. By partitioning the Unified Medical
Language System (UMLS) benchmark dataset into rich and sparse subsets, we
investigate the efficacy of pre-trained BERT embeddings and Prompt Learning
methodologies to refine the reward shaping process. This approach not only
enhances the precision of multi-hop KG-R but also sets a new precedent for
future research in the field, aiming to improve the robustness and accuracy of
knowledge inference within complex KG frameworks. Our work contributes a novel
perspective to the discourse on KG reasoning, offering a methodological
advancement that aligns with the academic rigor and scholarly aspirations of
the Natural journal, promising to invigorate further advancements in the realm
of computational knowledge representation.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2024 05:34:07 GMT"
}
] | 1,710,201,600,000 | [
[
"Li",
"Chen",
""
],
[
"Zheng",
"Haotian",
""
],
[
"Sun",
"Yiping",
""
],
[
"Wang",
"Cangqing",
""
],
[
"Yu",
"Liqiang",
""
],
[
"Chang",
"Che",
""
],
[
"Tian",
"Xinyu",
""
],
[
"Liu",
"Bo",
""
]
] |
2403.05921 | Bohui Zhang | Bohui Zhang and Valentina Anita Carriero and Katrin Schreiberhuber and
Stefani Tsaneva and Luc\'ia S\'anchez Gonz\'alez and Jongmo Kim and Jacopo de
Berardinis | OntoChat: a Framework for Conversational Ontology Engineering using
Language Models | ESWC 2024 Special Track on Large Language Models for Knowledge
Engineering | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Ontology engineering (OE) in large projects poses a number of challenges
arising from the heterogeneous backgrounds of the various stakeholders, domain
experts, and their complex interactions with ontology designers. This
multi-party interaction often creates systematic ambiguities and biases from
the elicitation of ontology requirements, which directly affect the design,
evaluation and may jeopardise the target reuse. Meanwhile, current OE
methodologies strongly rely on manual activities (e.g., interviews, discussion
pages). After collecting evidence on the most crucial OE activities, we
introduce \textbf{OntoChat}, a framework for conversational ontology
engineering that supports requirement elicitation, analysis, and testing. By
interacting with a conversational agent, users can steer the creation of user
stories and the extraction of competency questions, while receiving
computational support to analyse the overall requirements and test early
versions of the resulting ontologies. We evaluate OntoChat by replicating the
engineering of the Music Meta Ontology, and collecting preliminary metrics on
the effectiveness of each component from users. We release all code at
https://github.com/King-s-Knowledge-Graph-Lab/OntoChat.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2024 14:04:06 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Apr 2024 10:13:24 GMT"
}
] | 1,714,348,800,000 | [
[
"Zhang",
"Bohui",
""
],
[
"Carriero",
"Valentina Anita",
""
],
[
"Schreiberhuber",
"Katrin",
""
],
[
"Tsaneva",
"Stefani",
""
],
[
"González",
"Lucía Sánchez",
""
],
[
"Kim",
"Jongmo",
""
],
[
"de Berardinis",
"Jacopo",
""
]
] |
2403.06568 | Furong Ye | Furong Ye, Chuan Luo, Shaowei Cai | Better Understandings and Configurations in MaxSAT Local Search Solvers
via Anytime Performance Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though numerous solvers have been proposed for the MaxSAT problem, and the
benchmark environment such as MaxSAT Evaluations provides a platform for the
comparison of the state-of-the-art solvers, existing assessments were usually
evaluated based on the quality, e.g., fitness, of the best-found solutions
obtained within a given running time budget. However, considering solely the
final solutions obtained under specific time budgets may restrict us from
comprehending the behavior of the solvers along the convergence process. This
paper demonstrates that Empirical Cumulative Distribution Functions can be used
to compare MaxSAT local search solvers' anytime performance across multiple
problem instances and various time budgets. The assessment reveals distinctions
in solvers' performance and displays that the (dis)advantages of solvers adjust
along different running times. This work also exhibits that the quantitative
and high variance assessment of anytime performance can guide machines, i.e.,
automatic configurators, to search for better parameter settings. Our
experimental results show that the hyperparameter optimization tool, i.e.,
SMAC, generally achieves better parameter settings of local search when using
the anytime performance as the cost function, compared to using the fitness of
the best-found solutions.
| [
{
"version": "v1",
"created": "Mon, 11 Mar 2024 10:10:35 GMT"
}
] | 1,710,201,600,000 | [
[
"Ye",
"Furong",
""
],
[
"Luo",
"Chuan",
""
],
[
"Cai",
"Shaowei",
""
]
] |
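The ECDF view of anytime performance in the record above can be illustrated in a few lines: for a grid of runtime budgets, count the fraction of (run, target) pairs a solver has reached by each budget. The data layout and function names below are assumptions of this sketch, not the paper's evaluation code.

from bisect import bisect_right
from typing import Dict, List, Sequence, Tuple

Trace = List[Tuple[float, float]]  # (time, best cost so far), sorted by time, cost minimised

def best_cost_at(trace: Trace, budget: float) -> float:
    """Best cost the run had reached by the given time budget (inf if nothing yet)."""
    idx = bisect_right([t for t, _ in trace], budget)
    return trace[idx - 1][1] if idx > 0 else float("inf")

def anytime_ecdf(runs: Sequence[Trace],
                 targets: Sequence[float],
                 budgets: Sequence[float]) -> Dict[float, float]:
    """Fraction of (run, target) pairs hit at each budget: the anytime ECDF."""
    pairs = len(runs) * len(targets)
    return {b: sum(1 for run in runs for tgt in targets if best_cost_at(run, b) <= tgt) / pairs
            for b in budgets}

# Two toy local-search runs recorded as (seconds, number of unsatisfied soft clauses).
runs = [[(0.1, 50.0), (1.0, 12.0), (5.0, 3.0)],
        [(0.2, 40.0), (2.0, 20.0), (8.0, 2.0)]]
print(anytime_ecdf(runs, targets=[25.0, 10.0, 5.0], budgets=[1.0, 5.0, 10.0]))
# {1.0: 0.1666..., 5.0: 0.6666..., 10.0: 1.0}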
2403.06995 | Lucas Maziero | Lucas Porto Maziero, F\'abio Luiz Usberti, Celso Cavellucci | Exact algorithms and heuristics for capacitated covering salesman
problems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces the Capacitated Covering Salesman Problem (CCSP),
approaching the notion of service by coverage in capacitated vehicle routing
problems. In CCSP, locations where vehicles can transit are provided, some of
which have customers with demands. The objective is to service customers
through a fleet of vehicles based in a depot, minimizing the total distance
traversed by the vehicles. CCSP is unique in the sense that customers, to be
serviced, do not need to be visited by a vehicle. Instead, they can be serviced
if they are within a coverage area of the vehicle. This assumption is motivated
by applications in which some customers are unreachable (e.g., forbidden access
to vehicles) or visiting every customer is impractical. In this work,
optimization methodologies are proposed for the CCSP based on ILP (Integer
Linear Programming) and BRKGA (Biased Random-Key Genetic Algorithm)
metaheuristic. Computational experiments conducted on a benchmark of instances
for the CCSP evaluate the performance of the methodologies with respect to
primal bounds. Furthermore, our ILP formulation is extended in order to create
a novel MILP (Mixed Integer Linear Programming) for the Multi-Depot Covering
Tour Vehicle Routing Problem (MDCTVRP). Computational experiments show that the
extended MILP formulation outperformed the previous state-of-the-art exact
approach with respect to optimality gaps. In particular, optimal solutions were
obtained for several previously unsolved instances.
| [
{
"version": "v1",
"created": "Sun, 3 Mar 2024 07:50:29 GMT"
}
] | 1,710,288,000,000 | [
[
"Maziero",
"Lucas Porto",
""
],
[
"Usberti",
"Fábio Luiz",
""
],
[
"Cavellucci",
"Celso",
""
]
] |
2403.06996 | Solve S{\ae}b{\o} | Solve S{\ae}b{\o} and Helge Brovold | On the stochastics of human and artificial creativity | 40 pages, 1 figure with 2 sub-figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | What constitutes human creativity, and is it possible for computers to
exhibit genuine creativity? We argue that achieving human-level intelligence in
computers, or so-called Artificial General Intelligence, necessitates attaining
also human-level creativity. We contribute to this discussion by developing a
statistical representation of human creativity, incorporating prior insights
from stochastic theory, psychology, philosophy, neuroscience, and chaos theory.
This highlights the stochastic nature of the human creative process, which
includes both a bias-guided, random proposal step and an evaluation step
depending on a flexible or transformable bias structure. The acquired
representation of human creativity is subsequently used to assess the
creativity levels of various contemporary AI systems. Our analysis includes
modern AI algorithms such as reinforcement learning, diffusion models, and
large language models, addressing to what extent they measure up to human level
creativity. We conclude that these technologies currently lack the capability
for autonomous creative action at a human level.
| [
{
"version": "v1",
"created": "Sun, 3 Mar 2024 10:38:57 GMT"
}
] | 1,710,288,000,000 | [
[
"Sæbø",
"Solve",
""
],
[
"Brovold",
"Helge",
""
]
] |
2403.07010 | Miin-Shen Yang | Miin-Shen Yang, Yasir Akhtar, Mehboob Ali | On Globular T-Spherical Fuzzy (G-TSF) Sets with Application to G-TSF
Multi-Criteria Group Decision-Making | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we give the concept of Globular T-Spherical Fuzzy (G-TSF) Sets
(G-TSFSs) as an innovative extension of T-Spherical Fuzzy Sets (TSFSs) and
Circular Spherical Fuzzy Sets (C-SFSs). G-TSFSs represent membership,
indeterminacy, and non-membership degrees using a globular/sphere bound that
can offer a more accurate portrayal of vague, ambiguous, and imprecise
information. By employing a structured representation of data points on a
sphere with a specific center and radius, this model enhances decision-making
processes by enabling a more comprehensive evaluation of objects within a
flexible region. Following the newly defined G-TSFSs, we establish some basic
set operations and introduce fundamental algebraic operations for G-TSF Values
(G-TSFVs). These operations expand the evaluative capabilities of
decision-makers, facilitating more sensitive decision-making processes in a
broader region. To quantify a similarity measure (SM) between G-TSFVs, the SM is
defined based on the radius of G-TSFSs. Additionally, Hamming distance and
Euclidean distance are introduced for G-TSFSs. We also present theorems and
examples to elucidate computational mechanisms. Furthermore, we give the G-TSF
Weighted Average (G-TSFWA) and G-TSF Weighted Geometric (G-TSFWG) operators.
Leveraging our proposed SM, a Multi-Criteria Group Decision-Making (MCGDM)
scheme for G-TSFSs, named G-TSF MCGDM (G-TSFMCGDM), is developed to address
group decision-making problems. The applicability and effectiveness of the
proposed G-TSFMCGDM method are demonstrated by applying it to solve the
selection problem of the best venue for professional development training
sessions in a firm. The analysis results affirm the suitability and utility of
the proposed method for resolving MCGDM problems, establishing its
effectiveness in practical decision-making scenarios.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2024 04:19:50 GMT"
}
] | 1,710,288,000,000 | [
[
"Yang",
"Miin-Shen",
""
],
[
"Akhtar",
"Yasir",
""
],
[
"Ali",
"Mehboob",
""
]
] |
2403.07363 | Yingtao Ren | Yingtao Ren, Xiaomin Zhu, Kaiyuan Bai, Runtong Zhang | A New Random Forest Ensemble of Intuitionistic Fuzzy Decision Trees | null | IEEE Transactions on Fuzzy Systems 31.5 (2023): 1729-1741 | 10.1109/TFUZZ.2022.3215725 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification is essential to the applications in the field of data mining,
artificial intelligence, and fault detection. There exists a strong need in
developing accurate, suitable, and efficient classification methods and
algorithms with broad applicability. Random forest is a general algorithm that
is often used for classification under complex conditions. Although it has been
widely adopted, its combination with diverse fuzzy theory is still worth
exploring. In this paper, we propose the intuitionistic fuzzy random forest
(IFRF), a new random forest ensemble of intuitionistic fuzzy decision trees
(IFDT). Such trees in the forest use intuitionistic fuzzy information gain to
select features and consider hesitation in information transmission. The
proposed method enjoys the power of the randomness from bootstrapped sampling
and feature selection, the flexibility of fuzzy logic and fuzzy sets, and the
robustness of multiple classifier systems. Extensive experiments demonstrate
that the IFRF has competitive and superior performance compared to other
state-of-the-art fuzzy and ensemble algorithms. IFDT is more suitable for
ensemble learning with outstanding classification accuracy. This study is the
first to propose a random forest ensemble based on the intuitionistic fuzzy
theory.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 06:52:24 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Mar 2024 11:08:15 GMT"
}
] | 1,710,806,400,000 | [
[
"Ren",
"Yingtao",
""
],
[
"Zhu",
"Xiaomin",
""
],
[
"Bai",
"Kaiyuan",
""
],
[
"Zhang",
"Runtong",
""
]
] |
2403.07510 | Oliver Kim | Oliver Kim and Mohan Sridharan | Relevance Score: A Landmark-Like Heuristic for Planning | 12 Pages, 3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Landmarks are facts or actions that appear in all valid solutions of a
planning problem. They have been used successfully to calculate heuristics that
guide the search for a plan. We investigate an extension to this concept by
defining a novel "relevance score" that helps identify facts or actions that
appear in most but not all plans to achieve any given goal. We describe an
approach to compute this relevance score and use it as a heuristic in the
search for a plan. We experimentally compare the performance of our approach
with that of a state of the art landmark-based heuristic planning approach
using benchmark planning problems. While the original landmark-based heuristic
leads to better performance on problems with well-defined landmarks, our
approach substantially improves performance on problems that lack non-trivial
landmarks.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 10:45:45 GMT"
}
] | 1,710,288,000,000 | [
[
"Kim",
"Oliver",
""
],
[
"Sridharan",
"Mohan",
""
]
] |
2403.07566 | Weiwei Gu | Weiwei Gu and Senquan Wang | An Improved Strategy for Blood Glucose Control Using Multi-Step Deep
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blood Glucose (BG) control, which involves keeping an individual's BG within a
healthy range through extracorporeal insulin injections, is an important task
for people with type 1 diabetes. However, traditional patient self-management is
cumbersome and risky. Recent research has been devoted to exploring
individualized and automated BG control approaches, among which Deep
Reinforcement Learning (DRL) shows potential as an emerging approach. In this
paper, we use an exponential decay model of drug concentration to convert the
formalization of the BG control problem, which takes into account the delay and
prolongedness of drug effects, from a PAE-POMDP (Prolonged Action
Effect-Partially Observable Markov Decision Process) to an MDP, and we propose a
novel multi-step DRL-based algorithm to solve the problem. The Prioritized
Experience Replay (PER) sampling method is also used in it. Compared to
single-step bootstrapped updates, multi-step learning is more efficient and
reduces the influence from biasing targets. Our proposed method converges
faster and achieves higher cumulative rewards compared to the benchmark in the
same training environment, and improves the time-in-range (TIR), the percentage
of time the patient's BG is within the target range, in the evaluation phase.
Our work validates the effectiveness of multi-step reinforcement learning in BG
control, which may help to explore the optimal glycemic control measure and
improve the survival of diabetic patients.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 11:53:00 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Mar 2024 09:48:34 GMT"
}
] | 1,710,720,000,000 | [
[
"Gu",
"Weiwei",
""
],
[
"Wang",
"Senquan",
""
]
] |
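A minimal sketch of the multi-step bootstrapped target that distinguishes the setting in the record above from single-step updates; it only illustrates the n-step return computation (the network, replay buffer, and drug-concentration state conversion are omitted) and is not the authors' code.

from typing import List

def n_step_target(rewards: List[float], bootstrap_value: float,
                  done: bool, gamma: float = 0.99) -> float:
    """n-step return: r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1}
    plus gamma^n * V(s_{t+n}), with the bootstrap skipped if the episode
    terminated inside the window."""
    g = 0.0
    for k, r in enumerate(rewards):        # rewards r_t .. r_{t+n-1}
        g += (gamma ** k) * r
    if not done:
        g += (gamma ** len(rewards)) * bootstrap_value
    return g

# A 3-step target with a bootstrap estimate of 10.0 and gamma = 0.9:
print(n_step_target([1.0, 0.5, 0.0], bootstrap_value=10.0, done=False, gamma=0.9))
# 1.0 + 0.9*0.5 + 0.81*0.0 + 0.729*10.0 = 8.74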
2403.07964 | Maqsood Shah | Maqsood Hussain Shah, Yue Ding, Shaoshu Zhu, Yingqi Gu and Mingming
Liu | Optimal Design and Implementation of an Open-source Emulation Platform
for User-Centric Shared E-mobility Services | 7 pages, 3 figures, 2 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In response to the escalating global challenge of increasing emissions and
pollution in transportation, shared electric mobility services, encompassing
e-cars, e-bikes, and e-scooters, have emerged as a popular strategy. However,
existing shared electric mobility services exhibit critical design deficiencies,
including insufficient service integration, imprecise energy consumption
forecasting, limited scalability and geographical coverage, and a notable
absence of a user-centric perspective, particularly in the context of
multi-modal transportation. More importantly, there is no consolidated
open-source framework which could benefit the e-mobility research community.
This paper aims to bridge this gap by providing a pioneering open-source
framework for shared e-mobility. The proposed framework, with an
agent-in-the-loop approach and modular architecture, is tailored to diverse
user preferences and offers enhanced customization. We demonstrate the
viability of this framework by solving an integrated multi-modal
route-optimization problem using the modified Ant Colony Optimization (ACO)
algorithm. The primary contribution of this work is to provide a collaborative
and transparent framework to tackle the dynamic challenges in the field of
e-mobility research using a consolidated approach.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 11:51:30 GMT"
}
] | 1,710,374,400,000 | [
[
"Shah",
"Maqsood Hussain",
""
],
[
"Ding",
"Yue",
""
],
[
"Zhu",
"Shaoshu",
""
],
[
"Gu",
"Yingqi",
""
],
[
"Liu",
"Mingming",
""
]
] |
2403.08425 | Pedro Henrique Luz de Araujo | Benjamin Roth, Pedro Henrique Luz de Araujo, Yuxi Xia, Saskia
Kaltenbrunner and Christoph Korab | Specification Overfitting in Artificial Intelligence | 40 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning (ML) and artificial intelligence (AI) approaches are often
criticized for their inherent bias and for their lack of control,
accountability, and transparency. Consequently, regulatory bodies struggle with
containing this technology's potential negative side effects. High-level
requirements such as fairness and robustness need to be formalized into
concrete specification metrics, imperfect proxies that capture isolated aspects
of the underlying requirements. Given possible trade-offs between different
metrics and their vulnerability to over-optimization, integrating specification
metrics in system development processes is not trivial. This paper defines
specification overfitting, a scenario where systems focus excessively on
specified metrics to the detriment of high-level requirements and task
performance. We present an extensive literature survey to categorize how
researchers propose, measure, and optimize specification metrics in several AI
fields (e.g., natural language processing, computer vision, reinforcement
learning). Using a keyword-based search on papers from major AI conferences and
journals between 2018 and mid-2023, we identify and analyze 74 papers that
propose or optimize specification metrics. We find that although most papers
implicitly address specification overfitting (e.g., by reporting more than one
specification metric), they rarely discuss which role specification metrics
should play in system development or explicitly define the scope and
assumptions behind metric formulations.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 11:20:34 GMT"
}
] | 1,710,374,400,000 | [
[
"Roth",
"Benjamin",
""
],
[
"de Araujo",
"Pedro Henrique Luz",
""
],
[
"Xia",
"Yuxi",
""
],
[
"Kaltenbrunner",
"Saskia",
""
],
[
"Korab",
"Christoph",
""
]
] |
2403.08843 | Thi Kim Nhung Dang | Thi Kim Nhung Dang, Milan Lopuha\"a-Zwakenberg, Mari\"elle Stoelinga | Fuzzy Fault Trees Formalized | 14 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Fault tree analysis is a vital method of assessing safety risks. It helps to
identify potential causes of accidents, assess their likelihood and severity,
and suggest preventive measures. Quantitative analysis of fault trees is often
done via the dependability metrics that compute the system's failure behaviour
over time. However, the lack of precise data is a major obstacle to
quantitative analysis, and so to reliability analysis. Fuzzy logic is a popular
framework for dealing with ambiguous values and has applications in many
domains. A number of fuzzy approaches have been proposed to fault tree
analysis, but -- to the best of our knowledge -- none of them provide rigorous
definitions or algorithms for computing fuzzy unreliability values. In this
paper, we define a rigorous framework for fuzzy unreliability values. In
addition, we provide a bottom-up algorithm to efficiently calculate fuzzy
reliability for a system. The algorithm incorporates the $\alpha$-cuts method:
it performs binary algebraic operations on
intervals on horizontally discretised $\alpha$-cut representations of fuzzy
numbers. The method preserves the nonlinearity of fuzzy unreliability. Finally,
we illustrate the results obtained from two case studies.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 14:45:54 GMT"
}
] | 1,710,460,800,000 | [
[
"Dang",
"Thi Kim Nhung",
""
],
[
"Lopuhaä-Zwakenberg",
"Milan",
""
],
[
"Stoelinga",
"Mariëlle",
""
]
] |
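The alpha-cut computation described in the record above can be illustrated as follows: a triangular fuzzy failure probability is discretised into horizontal alpha-cut intervals, and the AND/OR gate formulas are applied interval-wise; since both gate functions are non-decreasing in each argument on [0, 1], evaluating the interval endpoints suffices. The triangular shape and the function names are assumptions of this illustration, not the paper's formal framework.

from typing import List, Tuple

Interval = Tuple[float, float]

def triangular_alpha_cuts(a: float, b: float, c: float, levels: int = 11) -> List[Interval]:
    """Discretise a triangular fuzzy number (a, b, c) into alpha-cut intervals
    [a + alpha*(b - a), c - alpha*(c - b)] for alpha = 0, 1/(levels-1), ..., 1."""
    cuts = []
    for i in range(levels):
        alpha = i / (levels - 1)
        cuts.append((a + alpha * (b - a), c - alpha * (c - b)))
    return cuts

def gate_and(p: List[Interval], q: List[Interval]) -> List[Interval]:
    """AND gate: both basic events fail, so probabilities multiply. The product is
    non-decreasing in each argument on [0, 1], so matching endpoints are combined."""
    return [(pl * ql, pu * qu) for (pl, pu), (ql, qu) in zip(p, q)]

def gate_or(p: List[Interval], q: List[Interval]) -> List[Interval]:
    """OR gate: at least one basic event fails, giving 1 - (1-p)(1-q), which is
    also non-decreasing in each argument on [0, 1]."""
    return [(1 - (1 - pl) * (1 - ql), 1 - (1 - pu) * (1 - qu))
            for (pl, pu), (ql, qu) in zip(p, q)]

# Two imprecise basic-event unreliabilities given as triangular fuzzy numbers.
pump = triangular_alpha_cuts(0.01, 0.02, 0.04)
valve = triangular_alpha_cuts(0.02, 0.03, 0.05)
# Top event fails if the pump AND the valve fail.
for i, (lo, hi) in enumerate(gate_and(pump, valve)):
    print(f"alpha={i / 10:.1f}: [{lo:.6f}, {hi:.6f}]")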
2403.08910 | \'Angel Aso-Mollar | \'Angel Aso-Mollar, Eva Onaindia | Meta-operators for Enabling Parallel Planning Using Deep Reinforcement
Learning | 9 pages. Submitted to PRL workshop at ICAPS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There is a growing interest in the application of Reinforcement Learning (RL)
techniques to AI planning with the aim to come up with general policies.
Typically, the mapping of the transition model of AI planning to the state
transition system of a Markov Decision Process is established by assuming a
one-to-one correspondence of the respective action spaces. In this paper, we
introduce the concept of meta-operator as the result of simultaneously applying
multiple planning operators, and we show that including meta-operators in the
RL action space enables new planning perspectives to be addressed using RL,
such as parallel planning. Our research aims to analyze the performance and
complexity of including meta-operators in the RL process, concretely in domains
where satisfactory outcomes have not been previously achieved using usual
generalized planning models. The main objective of this article is thus to pave
the way towards a redefinition of the RL action space in a manner that is more
closely aligned with the planning perspective.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 19:00:36 GMT"
}
] | 1,710,460,800,000 | [
[
"Aso-Mollar",
"Ángel",
""
],
[
"Onaindia",
"Eva",
""
]
] |
2403.09232 | Alexander Stevens | Alexander Stevens, Chun Ouyang, Johannes De Smedt, Catarina Moreira | Generating Feasible and Plausible Counterfactual Explanations for
Outcome Prediction of Business Processes | Journal Submission | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, various machine and deep learning architectures have been
successfully introduced to the field of predictive process analytics.
Nevertheless, the inherent opacity of these algorithms poses a significant
challenge for human decision-makers, hindering their ability to understand the
reasoning behind the predictions. This growing concern has sparked the
introduction of counterfactual explanations, designed as human-understandable
what if scenarios, to provide clearer insights into the decision-making process
behind undesirable predictions. The generation of counterfactual explanations,
however, encounters specific challenges when dealing with the sequential nature
of the (business) process cases typically used in predictive process analytics.
Our paper tackles this challenge by introducing a data-driven approach,
REVISEDplus, to generate more feasible and plausible counterfactual
explanations. First, we restrict the counterfactual algorithm to generate
counterfactuals that lie within a high-density region of the process data,
ensuring that the proposed counterfactuals are realistic and feasible within
the observed process data distribution. Additionally, we ensure plausibility by
learning sequential patterns between the activities in the process cases,
utilising Declare language templates. Finally, we evaluate the properties that
define the validity of counterfactuals.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 09:56:35 GMT"
}
] | 1,710,460,800,000 | [
[
"Stevens",
"Alexander",
""
],
[
"Ouyang",
"Chun",
""
],
[
"De Smedt",
"Johannes",
""
],
[
"Moreira",
"Catarina",
""
]
] |
2403.09249 | Imanol Echeverria | Imanol Echeverria, Maialen Murua, Roberto Santana | Leveraging Constraint Programming in a Deep Learning Approach for
Dynamically Solving the Flexible Job-Shop Scheduling Problem | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in the flexible job-shop scheduling problem (FJSSP) are
primarily based on deep reinforcement learning (DRL) due to its ability to
generate high-quality, real-time solutions. However, DRL approaches often fail
to fully harness the strengths of existing techniques such as exact methods or
constraint programming (CP), which can excel at finding optimal or near-optimal
solutions for smaller instances. This paper aims to integrate CP within a deep
learning (DL) based methodology, leveraging the benefits of both. In this
paper, we introduce a method that involves training a DL model using optimal
solutions generated by CP, ensuring the model learns from high-quality data,
thereby eliminating the need for the extensive exploration typical in DRL and
enhancing overall performance. Further, we integrate CP into our DL framework
to jointly construct solutions, utilizing DL for the initial complex stages and
transitioning to CP for optimal resolution as the problem is simplified. Our
hybrid approach has been extensively tested on three public FJSSP benchmarks,
demonstrating superior performance over five state-of-the-art DRL approaches
and a widely-used CP solver. Additionally, with the objective of exploring the
application to other combinatorial optimization problems, promising preliminary
results are presented on applying our hybrid approach to the traveling salesman
problem, combining an exact method with a well-known DRL method.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 10:16:57 GMT"
}
] | 1,710,460,800,000 | [
[
"Echeverria",
"Imanol",
""
],
[
"Murua",
"Maialen",
""
],
[
"Santana",
"Roberto",
""
]
] |
2403.09289 | Anirban Mukherjee | Anirban Mukherjee, Hannah Hanwen Chang | Silico-centric Theory of Mind | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Theory of Mind (ToM) refers to the ability to attribute mental states, such
as beliefs, desires, intentions, and knowledge, to oneself and others, and to
understand that these mental states can differ from one's own and from reality.
We investigate ToM in environments with multiple, distinct, independent AI
agents, each possessing unique internal states, information, and objectives.
Inspired by human false-belief experiments, we present an AI ('focal AI') with
a scenario where its clone undergoes a human-centric ToM assessment. We prompt
the focal AI to assess whether its clone would benefit from additional
instructions. Concurrently, we give its clones the ToM assessment, both with
and without the instructions, thereby engaging the focal AI in higher-order
counterfactual reasoning akin to human mentalizing--with respect to humans in
one test and to other AI in another. We uncover a discrepancy: Contemporary AI
demonstrates near-perfect accuracy on human-centric ToM assessments. Since
information embedded in one AI is identically embedded in its clone, additional
instructions are redundant. Yet, we observe AI crafting elaborate instructions
for their clones, erroneously anticipating a need for assistance. An
independent referee AI agrees with these unsupported expectations. Neither the
focal AI nor the referee demonstrates ToM in our 'silico-centric' test.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 11:22:51 GMT"
}
] | 1,710,460,800,000 | [
[
"Mukherjee",
"Anirban",
""
],
[
"Chang",
"Hannah Hanwen",
""
]
] |
2403.09361 | Jin-Kao Hao | Pengfei He, Jin-Kao Hao, Qinghua Wu | A Multi-population Integrated Approach for Capacitated Location Routing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The capacitated location-routing problem involves determining the depots from
a set of candidate capacitated depot locations and finding the required routes
from the selected depots to serve a set of customers while minimizing a cost
function that includes the cost of opening the chosen depots, the fixed
utilization cost per vehicle used, and the total cost (distance) of the routes.
This paper presents a multi-population integrated framework in which a
multi-depot edge assembly crossover generates promising offspring solutions
from the perspective of both depot location and route edge assembly. The method
includes an effective neighborhood-based local search, a feasibility-restoring
procedure and a diversification-oriented mutation. Of particular interest is
the multi-population scheme which organizes the population into multiple
subpopulations based on depot configurations. Extensive experiments on 281
benchmark instances from the literature show that the algorithm performs
remarkably well, by improving 101 best-known results (new upper bounds) and
matching 84 best-known results. Additional experiments are presented to gain
insight into the role of the key elements of the algorithm.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 13:11:30 GMT"
}
] | 1,710,460,800,000 | [
[
"He",
"Pengfei",
""
],
[
"Hao",
"Jin-Kao",
""
],
[
"Wu",
"Qinghua",
""
]
] |
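The multi-population scheme described in the record above can be pictured with a short sketch that buckets candidate solutions by their depot configuration; the data fields and cost values below are invented for the illustration and are not the paper's data structures.

from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List

@dataclass
class Solution:
    open_depots: FrozenSet[int]                  # indices of the depots opened
    routes: List[List[int]] = field(default_factory=list)
    cost: float = float("inf")                   # opening + vehicle + distance cost

def group_into_subpopulations(population: List[Solution]
                              ) -> Dict[FrozenSet[int], List[Solution]]:
    """Organise the population into subpopulations keyed by depot configuration,
    so crossover and local search can be steered per configuration."""
    subpops: Dict[FrozenSet[int], List[Solution]] = defaultdict(list)
    for sol in population:
        subpops[sol.open_depots].append(sol)
    # Keep each subpopulation sorted by cost so its elite member is easy to pick.
    for sols in subpops.values():
        sols.sort(key=lambda s: s.cost)
    return subpops

pop = [Solution(frozenset({0, 2}), cost=812.5),
       Solution(frozenset({0, 2}), cost=790.1),
       Solution(frozenset({1}), cost=955.0)]
for config, sols in group_into_subpopulations(pop).items():
    print(sorted(config), [s.cost for s in sols])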
2403.09404 | Anirban Mukherjee | Anirban Mukherjee, Hannah Hanwen Chang | Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deviating from conventional perspectives that frame artificial intelligence
(AI) systems solely as logic emulators, we propose a novel program of heuristic
reasoning. We distinguish between the 'instrumental' use of heuristics to match
resources with objectives, and 'mimetic absorption,' whereby heuristics
manifest randomly and universally. Through a series of innovative experiments,
including variations of the classic Linda problem and a novel application of
the Beauty Contest game, we uncover trade-offs between maximizing accuracy and
reducing effort that shape the conditions under which AIs transition between
exhaustive logical processing and the use of cognitive shortcuts (heuristics).
We provide evidence that AIs manifest an adaptive balancing of precision and
efficiency, consistent with principles of resource-rational human cognition as
explicated in classical theories of bounded rationality and dual-process
theory. Our findings reveal a nuanced picture of AI cognition, where trade-offs
between resources and objectives lead to the emulation of biological systems,
especially human cognition, despite AIs being designed without a sense of self
and lacking introspective capabilities.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 13:53:05 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2024 12:45:01 GMT"
}
] | 1,710,806,400,000 | [
[
"Mukherjee",
"Anirban",
""
],
[
"Chang",
"Hannah Hanwen",
""
]
] |
2403.09481 | Paloma Rabaey | Paloma Rabaey, Johannes Deleu, Stefan Heytens, Thomas Demeester | Clinical Reasoning over Tabular Data and Text with Bayesian Networks | AI in Medicine 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Bayesian networks are well-suited for clinical reasoning on tabular data, but
are less compatible with natural language data, for which neural networks
provide a successful framework. This paper compares and discusses strategies to
augment Bayesian networks with neural text representations, both in a
generative and discriminative manner. This is illustrated with simulation
results for a primary care use case (diagnosis of pneumonia) and discussed in a
broader clinical context.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 15:25:23 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Mar 2024 16:48:27 GMT"
},
{
"version": "v3",
"created": "Thu, 23 May 2024 13:41:19 GMT"
}
] | 1,716,508,800,000 | [
[
"Rabaey",
"Paloma",
""
],
[
"Deleu",
"Johannes",
""
],
[
"Heytens",
"Stefan",
""
],
[
"Demeester",
"Thomas",
""
]
] |
2403.09806 | Balaji Ganesan | Balaji Ganesan, Matheen Ahmed Pasha, Srinivasa Parkala, Neeraj R
Singh, Gayatri Mishra, Sumit Bhatia, Hima Patel, Somashekar Naganna, Sameep
Mehta | xLP: Explainable Link Prediction for Master Data Management | 8 pages, 4 figures, NeurIPS 2020 Competition and Demonstration Track.
arXiv admin note: text overlap with arXiv:2012.05516 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explaining neural model predictions to users requires creativity, especially
in enterprise applications, where there are costs associated with users' time
and their trust in the model predictions is critical for adoption. For link
prediction in master data management, we have built a number of explainability
solutions drawing from research in interpretability, fact verification, path
ranking, neuro-symbolic reasoning and self-explaining AI. In this demo, we
present explanations for link prediction in a creative way, to allow users to
choose explanations they are more comfortable with.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 18:53:44 GMT"
}
] | 1,710,720,000,000 | [
[
"Ganesan",
"Balaji",
""
],
[
"Pasha",
"Matheen Ahmed",
""
],
[
"Parkala",
"Srinivasa",
""
],
[
"Singh",
"Neeraj R",
""
],
[
"Mishra",
"Gayatri",
""
],
[
"Bhatia",
"Sumit",
""
],
[
"Patel",
"Hima",
""
],
[
"Naganna",
"Somashekar",
""
],
[
"Mehta",
"Sameep",
""
]
] |
2403.09925 | Saeid Amiri | Saeid Amiri, Parisa Zehtabi, Danial Dervovic, Michael Cashmore | Surrogate Assisted Monte Carlo Tree Search in Combinatorial Optimization | Accepted to the ICAPS Planning and Scheduling for Financial Services
(FINPLAN) 2023 workshop | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Industries frequently adjust their facilities network by opening new branches
in promising areas and closing branches in areas where they expect low profits.
In this paper, we examine a particular class of facility location problems. Our
objective is to minimize the loss of sales resulting from the removal of
several retail stores. However, estimating sales accurately is expensive and
time-consuming. To overcome this challenge, we leverage Monte Carlo Tree Search
(MCTS) assisted by a surrogate model that computes evaluations faster. Results
suggest that MCTS supported by a fast surrogate function can generate solutions
faster while maintaining a consistent solution compared to MCTS that does not
benefit from the surrogate function.
| [
{
"version": "v1",
"created": "Thu, 14 Mar 2024 23:54:19 GMT"
}
] | 1,710,720,000,000 | [
[
"Amiri",
"Saeid",
""
],
[
"Zehtabi",
"Parisa",
""
],
[
"Dervovic",
"Danial",
""
],
[
"Cashmore",
"Michael",
""
]
] |
2403.10249 | Xinrun Xu | Xinrun Xu and Yuxin Wang and Chaoyi Xu and Ziluo Ding and Jiechuan
Jiang and Zhiming Ding and B\"orje F. Karlsson | A Survey on Game Playing Agents and Large Models: Methods, Applications,
and Challenges | 13 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The swift evolution of Large-scale Models (LMs), either language-focused or
multi-modal, has garnered extensive attention in both academia and industry. But
despite the surge in interest in this rapidly evolving area, there are scarce
systematic reviews on their capabilities and potential in distinct impactful
scenarios. This paper endeavours to help bridge this gap, offering a thorough
examination of the current landscape of LM usage with regard to complex game
playing scenarios and the challenges still open. Here, we seek to
systematically review the existing architectures of LM-based Agents (LMAs) for
games and summarize their commonalities, challenges, and any other insights.
Furthermore, we present our perspective on promising future research avenues
for the advancement of LMs in games. We hope to assist researchers in gaining a
clear understanding of the field and to generate more interest in this highly
impactful research direction. A corresponding resource, continuously updated,
can be found in our GitHub repository.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 12:37:12 GMT"
}
] | 1,710,720,000,000 | [
[
"Xu",
"Xinrun",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Xu",
"Chaoyi",
""
],
[
"Ding",
"Ziluo",
""
],
[
"Jiang",
"Jiechuan",
""
],
[
"Ding",
"Zhiming",
""
],
[
"Karlsson",
"Börje F.",
""
]
] |
2403.10299 | Xinrun Xu | Xinrun Xu and Zhanbiao Lian and Yurong Wu and Manying Lv and Zhiming
Ding and Jian Yan and Shang Jiang | A Multi-constraint and Multi-objective Allocation Model for Emergency
Rescue in IoT Environment | 5 pages, 5 figures, ISCAS 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emergency relief operations are essential in disaster aftermaths,
necessitating effective resource allocation to minimize negative impacts and
maximize benefits. In prolonged crises or extensive disasters, a systematic,
multi-cycle approach is key for timely and informed decision-making. Leveraging
advancements in IoT and spatio-temporal data analytics, we've developed the
Multi-Objective Shuffled Gray-Wolf Frog Leaping Model (MSGW-FLM). This
multi-constraint, multi-objective resource allocation model has been rigorously
tested against 28 diverse challenges, showing superior performance in
comparison to established models such as NSGA-II, IBEA, and MOEA/D. MSGW-FLM's
effectiveness is particularly notable in complex, multi-cycle emergency rescue
scenarios, which involve numerous constraints and objectives. This model
represents a significant step forward in optimizing resource distribution in
emergency response situations.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 13:42:00 GMT"
}
] | 1,710,720,000,000 | [
[
"Xu",
"Xinrun",
""
],
[
"Lian",
"Zhanbiao",
""
],
[
"Wu",
"Yurong",
""
],
[
"Lv",
"Manying",
""
],
[
"Ding",
"Zhiming",
""
],
[
"Yan",
"Jian",
""
],
[
"Jiang",
"Shang",
""
]
] |
2403.10415 | Yongjie Wang | Yongjie Wang, Tong Zhang, Xu Guo and Zhiqi Shen | Gradient based Feature Attribution in Explainable AI: A Technical Review | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The surge in black-box AI models has prompted the need to explain the
internal mechanism and justify their reliability, especially in high-stakes
applications, such as healthcare and autonomous driving. Due to the lack of a
rigorous definition of explainable AI (XAI), a plethora of research related to
explainability, interpretability, and transparency has been developed to
explain and analyze the model from various perspectives. Consequently, with an
exhaustive list of papers, it becomes challenging to have a comprehensive
overview of XAI research from all aspects. Considering the popularity of neural
networks in AI research, we narrow our focus to a specific area of XAI
research: gradient based explanations, which can be directly adopted for neural
network models. In this review, we systematically explore gradient based
explanation methods to date and introduce a novel taxonomy to categorize them
into four distinct classes. Then, we present the essence of technique details
in chronological order and underscore the evolution of algorithms. Next, we
introduce both human and quantitative evaluations to measure algorithm
performance. More importantly, we demonstrate the general challenges in XAI and
specific challenges in gradient based explanations. We hope that this survey
can help researchers understand state-of-the-art progress and their
corresponding disadvantages, which could spark their interest in addressing
these issues in future work.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 15:49:31 GMT"
}
] | 1,710,720,000,000 | [
[
"Wang",
"Yongjie",
""
],
[
"Zhang",
"Tong",
""
],
[
"Guo",
"Xu",
""
],
[
"Shen",
"Zhiqi",
""
]
] |
2403.10502 | Giovanni Casini | Umberto Straccia, Giovanni Casini | Belief Change based on Knowledge Measures | 48 pages, 3 figures, preprint | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Measures (KMs) aim at quantifying the amount of
knowledge/information that a knowledge base carries. On the other hand, Belief
Change (BC) is the process of changing beliefs (in our case, in terms of
contraction, expansion and revision) taking into account a new piece of
knowledge, which possibly may be in contradiction with the current belief. We
propose a new quantitative BC framework that is based on KMs by defining belief
change operators that try to minimise, from an information-theoretic point of
view, the surprise that the changed belief carries. To this end, we introduce
the principle of minimal surprise. In particular, our contributions are (i) a
general information-theoretic approach to KMs for which [1] is a special case;
(ii) KM-based BC operators that satisfy the so-called AGM postulates; and (iii)
a characterisation of any BC operator that satisfies the AGM postulates as a
KM-based BC operator, i.e., any BC operator satisfying the AGM postulates can
be encoded within our quantitative BC framework. We also introduce quantitative
measures that account for the information loss of contraction, information gain
of expansion and information change of revision. We also take a succinct look
at the problem of iterated revision, which deals with the application of a
sequence of revision operations in our framework, and illustrate how one may
build from our KM-based contraction operator an operator that does not satisfy
the (in)famous recovery postulate, using the so-called severe withdrawal model
as an illustrative example.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 17:40:11 GMT"
}
] | 1,710,720,000,000 | [
[
"Straccia",
"Umberto",
""
],
[
"Casini",
"Giovanni",
""
]
] |
2403.10720 | Ye Zhang | Ye Zhang, Mengran Zhu, Kailin Gui, Jiayue Yu, Yong Hao, Haozhan Sun | Development and Application of a Monte Carlo Tree Search Algorithm for
Simulating Da Vinci Code Game Strategies | This paper has been accepted by CVIDL2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this study, we explore the efficiency of the Monte Carlo Tree Search
(MCTS), a prominent decision-making algorithm renowned for its effectiveness in
complex decision environments, contingent upon the volume of simulations
conducted. Notwithstanding its broad applicability, the algorithm's performance
can be adversely impacted in certain scenarios, particularly within the domain
of game strategy development. This research posits that the inherent branch
divergence within the Da Vinci Code board game significantly impedes
parallelism when executed on Graphics Processing Units (GPUs). To investigate
this hypothesis, we implemented and meticulously evaluated two variants of the
MCTS algorithm, specifically designed to assess the impact of branch divergence
on computational performance. Our comparative analysis reveals a linear
improvement in performance with the CPU-based implementation, in stark contrast
to the GPU implementation, which exhibits a non-linear enhancement pattern and
discernible performance troughs. These findings contribute to a deeper
understanding of the MCTS algorithm's behavior in divergent branch scenarios,
highlighting critical considerations for optimizing game strategy algorithms on
parallel computing architectures.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 22:43:37 GMT"
}
] | 1,710,806,400,000 | [
[
"Zhang",
"Ye",
""
],
[
"Zhu",
"Mengran",
""
],
[
"Gui",
"Kailin",
""
],
[
"Yu",
"Jiayue",
""
],
[
"Hao",
"Yong",
""
],
[
"Sun",
"Haozhan",
""
]
] |
2403.10744 | Zhiyi Tan | Zhiyi Tan, Bingkun Bao | Game and Reference: Policy Combination Synthesis for Epidemic Prevention
and Control | 16 pages, single line, 7 figures, written with Springer conference
template | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, epidemic policy-making models are increasingly being used to
provide reference for governors on prevention and control policies against
catastrophic epidemics such as SARS, H1N1 and COVID-19. Existing studies are
currently constrained by two issues. First, previous methods develop policies
based on effect evaluation; since few of the factors in real-world
decision-making can be modeled, the output policies easily become extreme.
Second, the subjectivity and cognitive limitations of humans mean that
historical policies are not always optimal for training decision models. To
these ends, we present a novel Policy Combination Synthesis (PCS) model for
epidemic policy-making. Specifically, to prevent extreme decisions, we
introduce adversarial learning between the model-made policies and the real
policies to force the output policies to be more human-like. On the other hand, to
minimize the impact of sub-optimal historical policies, we employ contrastive
learning to let the model draw on experience from the best historical policies
under similar scenarios. Both adversarial and contrastive learning are adaptive
based on the comprehensive effects of real policies to ensure the model always
learns useful information. Extensive experiments on real-world data prove the
effectiveness of the proposed model.
| [
{
"version": "v1",
"created": "Sat, 16 Mar 2024 00:26:59 GMT"
}
] | 1,710,806,400,000 | [
[
"Tan",
"Zhiyi",
""
],
[
"Bao",
"Bingkun",
""
]
] |
2403.10930 | Yifeng Zeng | Huifan Gao, Yifeng Zeng and Yinghui Pan | Inducing Individual Students' Learning Strategies through Homomorphic
POMDPs | 11 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimizing students' learning strategies is a crucial component in
intelligent tutoring systems. Previous research has demonstrated the
effectiveness of devising personalized learning strategies for students by
modelling their learning processes through a partially observable Markov decision
process (POMDP). However, the research holds the assumption that the student
population adheres to a uniform cognitive pattern. While this assumption
simplifies the POMDP modelling process, it evidently deviates from a real-world
scenario, thus reducing the precision of inducing individual students' learning
strategies. In this article, we propose the homomorphic POMDP (H-POMDP) model
to accommodate multiple cognitive patterns and present the parameter learning
approach to automatically construct the H-POMDP model. Based on the H-POMDP
model, we are able to represent different cognitive patterns from the data and
induce more personalized learning strategies for individual students. We
conduct experiments to show that, in comparison to the general POMDP approach,
the H-POMDP model demonstrates better precision when modelling mixed data from
multiple cognitive patterns. Moreover, the learning strategies derived from
H-POMDPs exhibit better personalization in the performance evaluation.
| [
{
"version": "v1",
"created": "Sat, 16 Mar 2024 14:06:29 GMT"
}
] | 1,710,806,400,000 | [
[
"Gao",
"Huifan",
""
],
[
"Zeng",
"Yifeng",
""
],
[
"Pan",
"Yinghui",
""
]
] |
2403.11219 | Abraham Itzhak Weinberg | Abraham Itzhak Weinberg, Cristiano Premebida, Diego Resende Faria | Causality from Bottom to Top: A Survey | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Causality has become a fundamental approach for explaining the relationships
between events, phenomena, and outcomes in various fields of study. It has
invaded various fields and applications, such as medicine, healthcare,
economics, finance, fraud detection, cybersecurity, education, public policy,
recommender systems, anomaly detection, robotics, control, sociology,
marketing, and advertising. In this paper, we survey its development over the
past five decades, shedding light on the differences between causality and
other approaches, as well as the preconditions for using it. Furthermore, the
paper illustrates how causality interacts with new approaches such as
Artificial Intelligence (AI), Generative AI (GAI), Machine and Deep Learning,
Reinforcement Learning (RL), and Fuzzy Logic. We study the impact of causality
on various fields, its contribution, and its interaction with state-of-the-art
approaches. Additionally, the paper exemplifies the trustworthiness and
explainability of causality models. We offer several ways to evaluate causality
models and discuss future directions.
| [
{
"version": "v1",
"created": "Sun, 17 Mar 2024 13:39:43 GMT"
}
] | 1,710,806,400,000 | [
[
"Weinberg",
"Abraham Itzhak",
""
],
[
"Premebida",
"Cristiano",
""
],
[
"Faria",
"Diego Resende",
""
]
] |
2403.12308 | Chao Chen | Chao Chen, Christian Wagner, Jonathan M. Garibaldi | Gradient-based Fuzzy System Optimisation via Automatic Differentiation
-- FuzzyR as a Use Case | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since their introduction, fuzzy sets and systems have become an important
area of research known for its versatility in modelling, knowledge
representation and reasoning, and increasingly its potential within the context
of explainable AI. While the applications of fuzzy systems are diverse, there has
been comparatively little advancement in their design from a machine learning
perspective. In other words, while representations such as neural networks have
benefited from a boom in learning capability driven by an increase in
computational performance in combination with advances in their training
mechanisms and available tools, in particular gradient descent, the impact on
fuzzy system design has been limited. In this paper, we discuss
gradient-descent-based optimisation of fuzzy systems, focussing in particular
on automatic differentiation -- crucial to neural network learning -- with a
view to free fuzzy system designers from intricate derivative computations,
allowing for more focus on the functional and explainability aspects of their
design. As a starting point, we present a use case in FuzzyR which demonstrates
how current fuzzy inference system implementations can be adjusted to leverage
powerful features of automatic differentiation tool sets, discussing its
potential for the future of fuzzy system design.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2024 23:18:16 GMT"
}
] | 1,710,892,800,000 | [
[
"Chen",
"Chao",
""
],
[
"Wagner",
"Christian",
""
],
[
"Garibaldi",
"Jonathan M.",
""
]
] |
2403.12451 | Lirui Luo | Lirui Luo, Guoxi Zhang, Hongming Xu, Yaodong Yang, Cong Fang, Qing Li | INSIGHT: End-to-End Neuro-Symbolic Visual Reinforcement Learning with
Language Explanations | ICML 2024. Project page: https://ins-rl.github.io/ | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neuro-symbolic reinforcement learning (NS-RL) has emerged as a promising
paradigm for explainable decision-making, characterized by the interpretability
of symbolic policies. NS-RL entails structured state representations for tasks
with visual observations, but previous methods are unable to refine the
structured states with rewards due to a lack of efficiency. Accessibility also
remains an issue, as extensive domain knowledge is required to interpret
symbolic policies. In this paper, we present a framework for learning
structured states and symbolic policies jointly, whose key idea is to distill
vision foundation models into a scalable perception module and refine it during
policy learning. Moreover, we design a pipeline to generate language
explanations for policies and decisions using large language models. In
experiments on nine Atari tasks, we verify the efficacy of our approach, and we
also present explanations for policies and decisions.
| [
{
"version": "v1",
"created": "Tue, 19 Mar 2024 05:21:20 GMT"
},
{
"version": "v2",
"created": "Mon, 27 May 2024 04:30:01 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jun 2024 06:50:51 GMT"
}
] | 1,717,459,200,000 | [
[
"Luo",
"Lirui",
""
],
[
"Zhang",
"Guoxi",
""
],
[
"Xu",
"Hongming",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Fang",
"Cong",
""
],
[
"Li",
"Qing",
""
]
] |
2403.13705 | Aske Plaat | Aske Plaat | Research Re: search & Re-search | PhD thesis Aske Plaat 20 June 1996. AlphaBeta, SSS*, MTD(f) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Search algorithms are often categorized by their node expansion strategy. One
option is the depth-first strategy, a simple backtracking strategy that
traverses the search space in the order in which successor nodes are generated.
An alternative is the best-first strategy, which was designed to make it
possible to use domain-specific heuristic information. By exploring promising
parts of the search space first, best-first algorithms are usually more
efficient than depth-first algorithms.
In programs that play minimax games such as chess and checkers, the
efficiency of the search is of crucial importance. Given the success of
best-first algorithms in other domains, one would expect them to be used for
minimax games too. However, all high-performance game-playing programs are
based on a depth-first algorithm.
This study takes a closer look at a depth-first algorithm, AB, and a
best-first algorithm, SSS. The prevailing opinion on these algorithms is that
SSS offers the potential for a more efficient search, but that its complicated
formulation and exponential memory requirements render it impractical. The
theoretical part of this work shows that there is a surprisingly
straightforward link between the two algorithms -- for all practical purposes,
SSS is a special case of AB. Subsequent empirical evidence proves the
prevailing opinion on SSS to be wrong: it is not a complicated algorithm, it
does not need too much memory, and it is also not more efficient than
depth-first search.
| [
{
"version": "v1",
"created": "Wed, 20 Mar 2024 16:08:57 GMT"
}
] | 1,710,979,200,000 | [
[
"Plaat",
"Aske",
""
]
] |
2403.14100 | Steven Mascaro | Steven Mascaro, Yue Wu, Ross Pearson, Owen Woodberry, Jessica Ramsay,
Tom Snelling, Ann E. Nicholson | Causal knowledge engineering: A case study from COVID-19 | 22 pages (plus 19 pages in appendices), 9 figures, submitted for
review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | COVID-19 appeared abruptly in early 2020, requiring a rapid response amid a
context of great uncertainty. Good quality data and knowledge were initially
lacking, and many early models had to be developed with causal assumptions and
estimations built in to supplement limited data, often with no reliable
approach for identifying, validating and documenting these causal assumptions.
Our team embarked on a knowledge engineering process to develop a causal
knowledge base consisting of several causal Bayesian networks (BNs) for diverse aspects of
COVID-19. The unique challenges of the setting led to experiments with the
elicitation approach, and what emerged was a knowledge engineering method we
call Causal Knowledge Engineering (CKE). The CKE provides a structured approach
for building a causal knowledge base that can support the development of a
variety of application-specific models. Here we describe the CKE method, and
use our COVID-19 work as a case study to provide a detailed discussion and
analysis of the method.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 03:23:34 GMT"
}
] | 1,711,065,600,000 | [
[
"Mascaro",
"Steven",
""
],
[
"Wu",
"Yue",
""
],
[
"Pearson",
"Ross",
""
],
[
"Woodberry",
"Owen",
""
],
[
"Ramsay",
"Jessica",
""
],
[
"Snelling",
"Tom",
""
],
[
"Nicholson",
"Ann E.",
""
]
] |
2403.14796 | Erez Karpas | Andrew Coles, Erez Karpas, Andrey Lavrinenko, Wheeler Ruml, Solomon
Eyal Shimony, Shahaf Shperberg | Planning and Acting While the Clock Ticks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard temporal planning assumes that planning takes place offline and then
execution starts at time 0. Recently, situated temporal planning was
introduced, where planning starts at time 0 and execution occurs after planning
terminates. Situated temporal planning reflects a more realistic scenario where
time passes during planning. However, in situated temporal planning a complete
plan must be generated before any action is executed. In some problems with
time pressure, timing is too tight to complete planning before the first action
must be executed. For example, an autonomous car that has a truck backing
towards it should probably move out of the way now and plan how to get to its
destination later. In this paper, we propose a new problem setting: concurrent
planning and execution, in which actions can be dispatched (executed) before
planning terminates. Unlike previous work on planning and execution, we must
handle wall clock deadlines that affect action applicability and goal
achievement (as in situated planning) while also supporting dispatching actions
before a complete plan has been found. We extend previous work on metareasoning
for situated temporal planning to develop an algorithm for this new setting.
Our empirical evaluation shows that when there is strong time pressure, our
approach outperforms situated temporal planning.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 19:18:47 GMT"
}
] | 1,711,324,800,000 | [
[
"Coles",
"Andrew",
""
],
[
"Karpas",
"Erez",
""
],
[
"Lavrinenko",
"Andrey",
""
],
[
"Ruml",
"Wheeler",
""
],
[
"Shimony",
"Solomon Eyal",
""
],
[
"Shperberg",
"Shahaf",
""
]
] |
2403.15251 | Argaman Mordoch | Argaman Mordoch, Enrico Scala, Roni Stern, Brendan Juba | Safe Learning of PDDL Domains with Conditional Effects -- Extended
Version | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Powerful domain-independent planners have been developed to solve various
types of planning problems. These planners often require a model of the acting
agent's actions, given in some planning domain description language. Manually
designing such an action model is a notoriously challenging task. An
alternative is to automatically learn action models from observation. Such an
action model is called safe if every plan created with it is consistent with
the real, unknown action model. Algorithms for learning such safe action models
exist, yet they cannot handle domains with conditional or universal effects,
which are common constructs in many planning problems. We prove that learning
non-trivial safe action models with conditional effects may require an
exponential number of samples. Then, we identify reasonable assumptions under
which such learning is tractable and propose SAM Learning of Conditional
Effects (Conditional-SAM), the first algorithm capable of doing so. We analyze
Conditional-SAM theoretically and evaluate it experimentally. Our results show
that the action models learned by Conditional-SAM can be used to perfectly
solve most of the test-set problems in most of the experimental domains.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 14:49:49 GMT"
}
] | 1,711,324,800,000 | [
[
"Mordoch",
"Argaman",
""
],
[
"Scala",
"Enrico",
""
],
[
"Stern",
"Roni",
""
],
[
"Juba",
"Brendan",
""
]
] |
2403.15297 | Tiansi Dong | Tiansi Dong, Mateja Jamnik, Pietro Li\`o | Sphere Neural-Networks for Rational Reasoning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The success of Large Language Models (LLMs), e.g., ChatGPT, is witnessed by
their planetary popularity, their capability of human-like question-answering,
and also by their steadily improved reasoning performance. However, it remains
unclear whether LLMs reason. It is an open problem how traditional neural
networks can be qualitatively extended to go beyond the statistic paradigm and
achieve high-level cognition. Here, we present a minimalist qualitative
extension by generalising computational building blocks from vectors to
spheres. We propose Sphere Neural Networks (SphNNs) for human-like reasoning
through model construction and inspection, and develop SphNN for syllogistic
reasoning, a microcosm of human rationality. Instead of training data, SphNN
uses a neuro-symbolic transition map of neighbourhood spatial relations to
guide transformations from the current sphere configuration towards the target.
SphNN is the first neural model that can determine the validity of long-chained
syllogistic reasoning in one epoch by constructing sphere configurations as
Euler diagrams, with a worst-case computational complexity of O(N^2). SphNN can
evolve into various types of reasoning, such as spatio-temporal reasoning,
logical reasoning with negation and disjunction, event reasoning,
neuro-symbolic reasoning, and humour understanding (the highest level of
cognition). All these suggest a new kind of Herbert A. Simon's scissors with
two neural blades. SphNNs will tremendously enhance interdisciplinary
collaborations to develop the two neural blades and realise deterministic
neural reasoning and human-bounded rationality and elevate LLMs to reliable
psychological AI. This work suggests that the non-zero radii of spheres are the
missing components that prevent traditional deep-learning systems from reaching
the realm of rational reasoning and cause LLMs to be trapped in the swamp of
hallucination.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 15:44:59 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Apr 2024 20:02:20 GMT"
}
] | 1,713,484,800,000 | [
[
"Dong",
"Tiansi",
""
],
[
"Jamnik",
"Mateja",
""
],
[
"Liò",
"Pietro",
""
]
] |
2403.15574 | Yuhan Xia | Yuhan Xia, Qingqing Zhao, Yunfei Long, Ge Xu and Jia Wang | SensoryT5: Infusing Sensorimotor Norms into T5 for Enhanced Fine-grained
Emotion Classification | Accepted by CogALex 2024 conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In traditional research approaches, sensory perception and emotion
classification have been considered separate domains. Yet, the
significant influence of sensory experiences on emotional responses is
undeniable. The natural language processing (NLP) community has often missed
the opportunity to merge sensory knowledge with emotion classification. To
address this gap, we propose SensoryT5, a neuro-cognitive approach that
integrates sensory information into the T5 (Text-to-Text Transfer Transformer)
model, designed specifically for fine-grained emotion classification. This
methodology incorporates sensory cues into the T5's attention mechanism,
enabling a harmonious balance between contextual understanding and sensory
awareness. The resulting model amplifies the richness of emotional
representations. In rigorous tests across various detailed emotion
classification datasets, SensoryT5 showcases improved performance, surpassing
both the foundational T5 model and current state-of-the-art works. Notably,
SensoryT5's success signifies a pivotal change in the NLP domain, highlighting
the potential influence of neuro-cognitive data in refining machine learning
models' emotional sensitivity.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 19:03:25 GMT"
}
] | 1,711,411,200,000 | [
[
"Xia",
"Yuhan",
""
],
[
"Zhao",
"Qingqing",
""
],
[
"Long",
"Yunfei",
""
],
[
"Xu",
"Ge",
""
],
[
"Wang",
"Jia",
""
]
] |
2403.15586 | Aashish Ghimire | Aashish Ghimire, James Prather and John Edwards | Generative AI in Education: A Study of Educators' Awareness, Sentiments,
and Influencing Factors | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rapid advancement of artificial intelligence (AI) and the expanding
integration of large language models (LLMs) have ignited a debate about their
application in education. This study delves into university instructors'
experiences and attitudes toward AI language models, filling a gap in the
literature by analyzing educators' perspectives on AI's role in the classroom
and its potential impacts on teaching and learning. The objective of this
research is to investigate the level of awareness, overall sentiment
towards adoption, and the factors influencing these attitudes for LLMs and
generative AI-based tools in higher education. Data was collected through a
survey using a Likert scale, which was complemented by follow-up interviews to
gain a more nuanced understanding of the instructors' viewpoints. The collected
data was processed using statistical and thematic analysis techniques. Our
findings reveal that educators are increasingly aware of and generally positive
towards these tools. We find no correlation between teaching style and attitude
toward generative AI. Finally, while CS educators show far more confidence in
their technical understanding of generative AI tools and more positivity
towards them than educators in other fields, they show no more confidence in
their ability to detect AI-generated work.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 19:21:29 GMT"
}
] | 1,711,411,200,000 | [
[
"Ghimire",
"Aashish",
""
],
[
"Prather",
"James",
""
],
[
"Edwards",
"John",
""
]
] |
2403.15587 | Cristina Zuheros | Cristina Zuheros and David Herrera-Poyatos and Rosana Montes and
Francisco Herrera | Large language models for crowd decision making based on prompt design
strategies using ChatGPT: models, analysis and challenges | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social Media and Internet have the potential to be exploited as a source of
opinion to enrich Decision Making solutions. Crowd Decision Making (CDM) is a
methodology able to infer opinions and decisions from plain texts, such as
reviews published in social media platforms, by means of Sentiment Analysis.
Currently, the emergence and potential of Large Language Models (LLMs) lead us
to explore new scenarios of automatically understanding written texts, also known
as natural language processing. This paper analyzes the use of ChatGPT based on
prompt design strategies to assist in CDM processes to extract opinions and
make decisions. We integrate ChatGPT into CDM processes as a flexible tool that
infers the opinions expressed in texts, providing numerical or linguistic
evaluations where the decision making models are based on the prompt design
strategies. We include a multi-criteria decision making scenario with a
category ontology for criteria. We also consider ChatGPT as an end-to-end CDM
model able to provide a general opinion and score on the alternatives. We
conduct empirical experiments on real data extracted from TripAdvisor, the
TripR-2020Large dataset. The analysis of results shows a promising direction for
developing quality decision making models using ChatGPT. Finally, we discuss
the challenges of consistency, sensitivity and explainability associated with the
use of LLMs in CDM processes, raising open questions for future studies.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 19:21:44 GMT"
}
] | 1,711,411,200,000 | [
[
"Zuheros",
"Cristina",
""
],
[
"Herrera-Poyatos",
"David",
""
],
[
"Montes",
"Rosana",
""
],
[
"Herrera",
"Francisco",
""
]
] |
2403.15640 | Xin Chen | Xin Chen, I-Hong Hou | Contextual Restless Multi-Armed Bandits with Application to Demand
Response Decision-Making | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces a novel multi-armed bandits framework, termed
Contextual Restless Bandits (CRB), for complex online decision-making. This CRB
framework incorporates the core features of contextual bandits and restless
bandits, so that it can model both the internal state transitions of each arm
and the influence of external global environmental contexts. Using the dual
decomposition method, we develop a scalable index policy algorithm for solving
the CRB problem, and theoretically analyze the asymptotic optimality of this
algorithm. In the case when the arm models are unknown, we further propose a
model-based online learning algorithm based on the index policy to learn the
arm models and make decisions simultaneously. Furthermore, we apply the
proposed CRB framework and the index policy algorithm specifically to the
demand response decision-making problem in smart grids. The numerical
simulations demonstrate the performance and efficiency of our proposed CRB
approaches.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 22:35:07 GMT"
}
] | 1,711,411,200,000 | [
[
"Chen",
"Xin",
""
],
[
"Hou",
"I-Hong",
""
]
] |
2403.15728 | Ruijie Liu | Ruijie Liu, Tianxiang Zhan, Zhen Li, Yong Deng | Learnable WSN Deployment of Evidential Collaborative Sensing Model | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In wireless sensor networks (WSNs), coverage and deployment are two most
crucial issues when conducting detection tasks. However, the detection
information collected from sensors is oftentimes not fully utilized and
efficiently integrated. Such a sensing model and deployment strategy therefore
cannot reach the maximum quality of coverage, particularly when the number of
sensors within WSNs grows significantly. In this article, we aim at achieving
the optimal coverage quality of WSN deployment. We develop a collaborative
sensing model of sensors to enhance detection capabilities of WSNs, by
leveraging the collaborative information derived from the combination rule
under the framework of evidence theory. In this model, the performance
evaluation of evidential fusion systems is adopted as the criterion of the
sensor selection. A learnable sensor deployment network (LSDNet) considering
both sensor contribution and detection capability, is proposed for achieving
the optimal deployment of WSNs. Moreover, we deeply investigate the algorithm
for finding the requisite minimum number of sensors that realizes the full
coverage of WSNs. A series of numerical examples, along with an application of
forest area monitoring, are employed to demonstrate the effectiveness and the
robustness of the proposed algorithms.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2024 05:29:09 GMT"
}
] | 1,711,411,200,000 | [
[
"Liu",
"Ruijie",
""
],
[
"Zhan",
"Tianxiang",
""
],
[
"Li",
"Zhen",
""
],
[
"Deng",
"Yong",
""
]
] |
2403.15779 | Youyang Qu | Youyang Qu, Ming Ding, Nan Sun, Kanchana Thilakarathna, Tianqing Zhu,
Dusit Niyato | The Frontier of Data Erasure: Machine Unlearning for Large Language
Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are foundational to AI advancements,
facilitating applications like predictive text generation. Nonetheless, they
pose risks by potentially memorizing and disseminating sensitive, biased, or
copyrighted information from their vast datasets. Machine unlearning emerges as
a cutting-edge solution to mitigate these concerns, offering techniques for
LLMs to selectively discard certain data. This paper reviews the latest in
machine unlearning for LLMs, introducing methods for the targeted forgetting of
information to address privacy, ethical, and legal challenges without
necessitating full model retraining. It divides existing research into
unlearning from unstructured/textual data and structured/classification data,
showcasing the effectiveness of these approaches in removing specific data
while maintaining model efficacy. Highlighting the practicality of machine
unlearning, this analysis also points out the hurdles in preserving model
integrity, avoiding excessive or insufficient data removal, and ensuring
consistent outputs, underlining the role of machine unlearning in advancing
responsible, ethical AI.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2024 09:26:15 GMT"
}
] | 1,711,411,200,000 | [
[
"Qu",
"Youyang",
""
],
[
"Ding",
"Ming",
""
],
[
"Sun",
"Nan",
""
],
[
"Thilakarathna",
"Kanchana",
""
],
[
"Zhu",
"Tianqing",
""
],
[
"Niyato",
"Dusit",
""
]
] |
2403.15864 | Yihang Zhao | Yihang Zhao, Neil Vetter, Kaveh Aryan | Using Large Language Models for OntoClean-based Ontology Refinement | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the integration of Large Language Models (LLMs) such as
GPT-3.5 and GPT-4 into the ontology refinement process, specifically focusing
on the OntoClean methodology. OntoClean, critical for assessing the
metaphysical quality of ontologies, involves a two-step process of assigning
meta-properties to classes and verifying a set of constraints. Manually
conducting the first step proves difficult in practice, due to the need for
philosophical expertise and lack of consensus among ontologists. By employing
LLMs with two prompting strategies, the study demonstrates that high accuracy
in the labelling process can be achieved. The findings suggest the potential
for LLMs to enhance ontology refinement, proposing the development of plugin
software for ontology tools to facilitate this integration.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2024 15:09:50 GMT"
}
] | 1,711,411,200,000 | [
[
"Zhao",
"Yihang",
""
],
[
"Vetter",
"Neil",
""
],
[
"Aryan",
"Kaveh",
""
]
] |
2403.15879 | Gyubok Lee | Gyubok Lee, Woosog Chay, Seonhee Cho, Edward Choi | TrustSQL: A Reliability Benchmark for Text-to-SQL Models with Diverse
Unanswerable Questions | under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in large language models (LLMs) have led to significant
improvements in translating natural language questions into SQL queries. While
achieving high accuracy in SQL generation is crucial, little is known about the
extent to which these text-to-SQL models can reliably handle diverse types of
questions encountered during real-world deployment, including unanswerable
ones. To explore this aspect, we introduce TrustSQL, a new benchmark designed
to assess the reliability of text-to-SQL models in both single-database and
cross-database settings. TrustSQL requires models to provide one of two
outputs: 1) an SQL prediction or 2) abstention from making an SQL prediction,
either due to potential errors in the generated SQL or when faced with
unanswerable questions. For model evaluation, we explore various modeling
approaches specifically designed for this task: 1) optimizing separate models
for answerability detection, SQL generation, and error detection, which are
then integrated into a single pipeline; and 2) developing a unified approach
that uses a single model to solve this task. Experimental results using our new
reliability score show that addressing this challenge involves many different
areas of research and opens new avenues for model development. However, none of
the methods consistently surpasses the reliability scores of a naive baseline
that abstains from SQL predictions for all questions, with varying penalties.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2024 16:12:52 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Apr 2024 15:33:39 GMT"
}
] | 1,713,312,000,000 | [
[
"Lee",
"Gyubok",
""
],
[
"Chay",
"Woosog",
""
],
[
"Cho",
"Seonhee",
""
],
[
"Choi",
"Edward",
""
]
] |
2403.15916 | Alexandros Nikou PhD | Albin Larsson Forsberg and Alexandros Nikou and Aneta Vulgarakis
Feljan and Jana Tumova | Multi-agent transformer-accelerated RL for satisfaction of STL
specifications | Submitted to L4DC 2024 conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main challenges in multi-agent reinforcement learning is
scalability as the number of agents increases. This issue is further
exacerbated if the problem considered is temporally dependent. State-of-the-art
solutions today mainly follow the centralized training with decentralized execution
paradigm in order to handle the scalability concerns. In this paper, we propose
time-dependent multi-agent transformers which can solve the temporally
dependent multi-agent problem efficiently with a centralized approach via the
use of transformers that proficiently handle the large input. We highlight the
efficacy of this method on two problems and use tools from statistics to verify
the probability that the trajectories generated under the policy satisfy the
task. The experiments show that our approach has superior performance against
the literature baseline algorithms in both cases.
| [
{
"version": "v1",
"created": "Sat, 23 Mar 2024 19:13:01 GMT"
}
] | 1,711,411,200,000 | [
[
"Forsberg",
"Albin Larsson",
""
],
[
"Nikou",
"Alexandros",
""
],
[
"Feljan",
"Aneta Vulgarakis",
""
],
[
"Tumova",
"Jana",
""
]
] |
2403.16066 | Youngbin Lee | Yejin Kim, Youngbin Lee, Vincent Yuan, Annika Lee, Yongjae Lee | A Temporal Graph Network Framework for Dynamic Recommendation | Presented at the AAAI 2024 Workshop on Recommendation Ecosystems:
Modeling, Optimization and Incentive Design | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recommender systems, crucial for user engagement on platforms like e-commerce
and streaming services, often lag behind users' evolving preferences due to
static data reliance. After Temporal Graph Networks (TGNs) were proposed,
various studies have shown that TGN can significantly improve situations where
the features of nodes and edges dynamically change over time. However, despite
its promising capabilities, it has not been directly applied in recommender
systems to date. Our study bridges this gap by directly implementing Temporal
Graph Networks (TGN) in recommender systems, a first in this field. Using
real-world datasets and a range of graph and history embedding methods, we show
TGN's adaptability, confirming its effectiveness in dynamic recommendation
scenarios.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 08:33:13 GMT"
}
] | 1,711,411,200,000 | [
[
"Kim",
"Yejin",
""
],
[
"Lee",
"Youngbin",
""
],
[
"Yuan",
"Vincent",
""
],
[
"Lee",
"Annika",
""
],
[
"Lee",
"Yongjae",
""
]
] |
2403.16100 | Louise Dennis Dr | Louise A. Dennis and Michael Fisher | Specifying Agent Ethics (Blue Sky Ideas) | To appear in Coordination, Organizations, Institutions, Norms and
Ethics for Governance of Multi-Agent Systems 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We consider the question of what properties a Machine Ethics system should
have. This question is complicated by the existence of ethical dilemmas with no
agreed upon solution. We provide an example to motivate why we do not believe
falling back on the elicitation of values from stakeholders is sufficient to
guarantee correctness of such systems. We go on to define two broad categories
of ethical property that have arisen in our own work and present a challenge to
the community to approach this question in a more systematic way.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 11:32:43 GMT"
}
] | 1,711,411,200,000 | [
[
"Dennis",
"Louise A.",
""
],
[
"Fisher",
"Michael",
""
]
] |
2403.16101 | Yuya Sasaki | Yuya Sasaki, Sohei Tokuno, Haruka Maeda, Osamu Sakura | Evaluating Fairness Metrics Across Borders from Human Perceptions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Which fairness metrics are appropriately applicable in your contexts? There
may be instances of discordance regarding the perception of fairness, even when
the outcomes comply with established fairness metrics. Several surveys have
been conducted to evaluate fairness metrics with human perceptions of fairness.
However, these surveys were limited in scope, including only a few hundred
participants within a single country. In this study, we conduct an
international survey to evaluate the appropriateness of various fairness
metrics in decision-making scenarios. We collected responses from 1,000
participants in each of China, France, Japan, and the United States, amassing a
total of 4,000 responses, to analyze preferences for fairness metrics. Our
survey consists of three distinct scenarios paired with four fairness metrics,
and each participant answers their preference for the fairness metric in each
case. This investigation explores the relationship between personal attributes
and the choice of fairness metrics, uncovering a significant influence of
national context on these preferences.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 11:33:18 GMT"
}
] | 1,711,411,200,000 | [
[
"Sasaki",
"Yuya",
""
],
[
"Tokuno",
"Sohei",
""
],
[
"Maeda",
"Haruka",
""
],
[
"Sakura",
"Osamu",
""
]
] |
2403.16162 | Lu Bai | Lu Bai, Abhishek Gupta, and Yew-Soon Ong | Multi-Task Learning with Multi-Task Optimization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-task learning solves multiple correlated tasks. However, conflicts may
exist between them. In such circumstances, a single solution can rarely
optimize all the tasks, leading to performance trade-offs. To arrive at a set
of optimized yet well-distributed models that collectively embody different
trade-offs in one algorithmic pass, this paper proposes to view Pareto
multi-task learning through the lens of multi-task optimization. Multi-task
learning is first cast as a multi-objective optimization problem, which is then
decomposed into a diverse set of unconstrained scalar-valued subproblems. These
subproblems are solved jointly using a novel multi-task gradient descent
method, whose uniqueness lies in the iterative transfer of model parameters
among the subproblems during the course of optimization. A theorem proving
faster convergence through the inclusion of such transfers is presented. We
investigate the proposed multi-task learning with multi-task optimization for
solving various problem settings including image classification, scene
understanding, and multi-target regression. Comprehensive experiments confirm
that the proposed method significantly advances the state-of-the-art in
discovering sets of Pareto-optimized models. Notably, on the large image
dataset we tested on, namely NYUv2, the hypervolume convergence achieved by our
method was found to be nearly two times faster than the next-best among the
state-of-the-art.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 14:04:40 GMT"
}
] | 1,711,411,200,000 | [
[
"Bai",
"Lu",
""
],
[
"Gupta",
"Abhishek",
""
],
[
"Ong",
"Yew-Soon",
""
]
] |
2403.16206 | Yuxin Qiao | Tianrui Liu, Qi Cai, Changxin Xu, Bo Hong, Fanghao Ni, Yuxin Qiao, and
Tsungwei Yang | Rumor Detection with a novel graph neural network approach | 10 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The wide spread of rumors on social media has caused a negative impact on
people's daily life, leading to potential panic, fear, and mental health
problems for the public. How to debunk rumors as early as possible remains a
challenging problem. Existing studies mainly leverage information propagation
structure to detect rumors, while very few works focus on the correlation among
users, who may coordinate to spread rumors in order to gain wide
popularity. In this paper, we propose a new detection model that jointly
learns both the representations of user correlation and information propagation
to detect rumors on social media. Specifically, we leverage graph neural
networks to learn the representations of user correlation from a bipartite
graph that describes the correlations between users and source tweets, and the
representations of information propagation with a tree structure. Then we
combine the learned representations from these two modules to classify the
rumors. Since malicious users intend to subvert our model after deployment, we
further develop a greedy attack scheme to analyze the cost of three adversarial
attacks: graph attack, comment attack, and joint attack. Evaluation results on
two public datasets illustrate that the proposed model outperforms the
state-of-the-art rumor detection models. We also demonstrate our method
performs well for early rumor detection. Moreover, the proposed detection
method is more robust to adversarial attacks compared to the best existing
method. Importantly, we show that it requires a high cost for attackers to
subvert the user correlation pattern, demonstrating the importance of considering
user correlation for rumor detection.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 15:59:47 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Mar 2024 04:23:23 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Apr 2024 01:52:13 GMT"
}
] | 1,712,102,400,000 | [
[
"Liu",
"Tianrui",
""
],
[
"Cai",
"Qi",
""
],
[
"Xu",
"Changxin",
""
],
[
"Hong",
"Bo",
""
],
[
"Ni",
"Fanghao",
""
],
[
"Qiao",
"Yuxin",
""
],
[
"Yang",
"Tsungwei",
""
]
] |
2403.16222 | Manish Bhattarai | Ryan Barron, Maksim E. Eren, Manish Bhattarai, Selma Wanna, Nicholas
Solovyev, Kim Rasmussen, Boian S. Alexandrov, Charles Nicholas, Cynthia
Matuszek | Cyber-Security Knowledge Graph Generation by Hierarchical Nonnegative
Matrix Factorization | Accepted at IEEE ISDFS | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much of human knowledge in cybersecurity is encapsulated within the
ever-growing volume of scientific papers. As this textual data continues to
expand, document organization methods become increasingly
crucial for extracting actionable insights hidden within large text datasets.
Knowledge Graphs (KGs) serve as a means to store factual information in a
structured manner, providing explicit, interpretable knowledge that includes
domain-specific information from the cybersecurity scientific literature. One
of the challenges in constructing a KG from scientific literature is the
extraction of ontology from unstructured text. In this paper, we address this
topic and introduce a method for building a multi-modal KG by extracting
structured ontology from scientific papers. We demonstrate this concept in the
cybersecurity domain. One modality of the KG represents observable information
from the papers, such as the categories in which they were published or the
authors. The second modality uncovers latent (hidden) patterns of text
extracted through hierarchical and semantic non-negative matrix factorization
(NMF), such as named entities, topics or clusters, and keywords. We illustrate
this concept by consolidating more than two million scientific papers uploaded
to arXiv into the cyber-domain, using hierarchical and semantic NMF, and by
building a cyber-domain-specific KG.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 16:30:05 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Mar 2024 15:28:27 GMT"
}
] | 1,711,497,600,000 | [
[
"Barron",
"Ryan",
""
],
[
"Eren",
"Maksim E.",
""
],
[
"Bhattarai",
"Manish",
""
],
[
"Wanna",
"Selma",
""
],
[
"Solovyev",
"Nicholas",
""
],
[
"Rasmussen",
"Kim",
""
],
[
"Alexandrov",
"Boian S.",
""
],
[
"Nicholas",
"Charles",
""
],
[
"Matuszek",
"Cynthia",
""
]
] |
2403.16289 | Ali Nouri | Ali Nouri, Beatriz Cabrero-Daniel, Fredrik T\"orner, H\.akan
Sivencrona, Christian Berger | Engineering Safety Requirements for Autonomous Driving with Large
Language Models | Accepted in 32nd IEEE International Requirements Engineering 2024
conference, Iceland | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Changes and updates in the requirement artifacts, which can be frequent in
the automotive domain, are a challenge for SafetyOps. Large Language Models
(LLMs), with their impressive natural language understanding and generating
capabilities, can play a key role in automatically refining and decomposing
requirements after each update. In this study, we propose a prototype of a
pipeline of prompts and LLMs that receives an item definition and outputs
solutions in the form of safety requirements. This pipeline also performs a
review of the requirement dataset and identifies redundant or contradictory
requirements. We first identified the necessary characteristics for performing
a Hazard Analysis and Risk Assessment (HARA) and then defined tests to assess an LLM's capability in meeting these
criteria. We used design science with multiple iterations and let experts from
different companies evaluate each cycle quantitatively and qualitatively.
Finally, the prototype was implemented at a case company and the responsible
team evaluated its efficiency.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 20:40:51 GMT"
}
] | 1,711,411,200,000 | [
[
"Nouri",
"Ali",
""
],
[
"Cabrero-Daniel",
"Beatriz",
""
],
[
"Törner",
"Fredrik",
""
],
[
"Sivencrona",
"Hȧkan",
""
],
[
"Berger",
"Christian",
""
]
] |
2403.16416 | Lixi Zhu | Lixi Zhu, Xiaowen Huang, Jitao Sang | How Reliable is Your Simulator? Analysis on the Limitations of Current
LLM-based User Simulators for Conversational Recommendation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A Conversational Recommender System (CRS) interacts with users through natural
language to understand their preferences and provide personalized
recommendations in real time. CRS has demonstrated significant potential,
prompting researchers to focus on developing more realistic and reliable
user simulators. Recently, the capabilities of Large Language
Models (LLMs) have attracted a lot of attention in various fields.
Simultaneously, efforts are underway to construct user simulators based on
LLMs. While these works showcase innovation, they also come with certain
limitations that require attention. In this work, we aim to analyze the
limitations of using LLMs in constructing user simulators for CRS, to guide
future research. To achieve this goal, we conduct analytical validation on the
notable work, iEvaLM. Through multiple experiments on two widely-used datasets
in the field of conversational recommendation, we highlight several issues with
the current evaluation methods for user simulators based on LLMs: (1) Data
leakage, which occurs in conversational history and the user simulator's
replies, results in inflated evaluation results. (2) The success of CRS
recommendations depends more on the availability and quality of conversational
history than on the responses from user simulators. (3) Controlling the output
of the user simulator through a single prompt template proves challenging. To
overcome these limitations, we propose SimpleUserSim, employing a
straightforward strategy to guide the topic toward the target items. Our study
validates the ability of CRS models to utilize the interaction information,
significantly improving the recommendation results.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 04:21:06 GMT"
}
] | 1,711,411,200,000 | [
[
"Zhu",
"Lixi",
""
],
[
"Huang",
"Xiaowen",
""
],
[
"Sang",
"Jitao",
""
]
] |
2403.16427 | Ziyan Wang | Ziyan Wang, Yingpeng Du, Zhu Sun, Haoyan Chua, Kaidong Feng, Wenya
Wang, Jie Zhang | Re2LLM: Reflective Reinforcement Large Language Model for Session-based
Recommendation | 11 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are emerging as promising approaches to enhance
session-based recommendation (SBR), where both prompt-based and
fine-tuning-based methods have been widely investigated to align LLMs with SBR.
However, the former methods struggle with optimal prompts to elicit the correct
reasoning of LLMs due to the lack of task-specific feedback, leading to
unsatisfactory recommendations. Although the latter methods attempt to
fine-tune LLMs with domain-specific knowledge, they face limitations such as
high computational costs and reliance on open-source backbones. To address such
issues, we propose a Reflective Reinforcement Large Language Model (Re2LLM) for
SBR, which effectively and efficiently guides LLMs to focus on the specialized
knowledge essential for more accurate recommendations. In particular, we first design the
Reflective Exploration Module to effectively extract knowledge that is readily
understandable and digestible by LLMs. To be specific, we direct LLMs to
examine recommendation errors through self-reflection and construct a knowledge
base (KB) comprising hints capable of rectifying these errors. To efficiently
elicit the correct reasoning of LLMs, we further devise the Reinforcement
Utilization Module to train a lightweight retrieval agent. It learns to select
hints from the constructed KB based on the task-specific feedback, where the
hints can serve as guidance to help correct the LLMs' reasoning for better
recommendations. Extensive experiments on multiple real-world datasets
demonstrate that our method consistently outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 05:12:18 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Mar 2024 07:21:01 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Mar 2024 03:27:24 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Apr 2024 16:26:57 GMT"
}
] | 1,713,744,000,000 | [
[
"Wang",
"Ziyan",
""
],
[
"Du",
"Yingpeng",
""
],
[
"Sun",
"Zhu",
""
],
[
"Chua",
"Haoyan",
""
],
[
"Feng",
"Kaidong",
""
],
[
"Wang",
"Wenya",
""
],
[
"Zhang",
"Jie",
""
]
] |
2403.16501 | Debodeep Banerjee | Debodeep Banerjee, Stefano Teso, Burcu Sayin, Andrea Passerini | Learning To Guide Human Decision Makers With Vision-Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | There is increasing interest in developing AIs for assisting human
decision-making in high-stakes tasks, such as medical diagnosis, for the
purpose of improving decision quality and reducing cognitive strain. Mainstream
approaches team up an expert with a machine learning model to which safer
decisions are offloaded, thus letting the former focus on cases that demand
their attention. This separation-of-responsibilities setup, however, is
inadequate for high-stakes scenarios. On the one hand, the expert may end up
over-relying on the machine's decisions due to anchoring bias, thus losing the
human oversight that is increasingly being required by regulatory agencies to
ensure trustworthy AI. On the other hand, the expert is left entirely
unassisted on the (typically hardest) decisions on which the model abstained.
As a remedy, we introduce learning to guide (LTG), an alternative framework in
which - rather than taking control from the human expert - the machine provides
guidance useful for decision making, and the human is entirely responsible for
coming up with a decision. In order to ensure guidance is interpretable and
task-specific, we develop SLOG, an approach for turning any vision-language
model into a capable generator of textual guidance by leveraging a modicum of
human feedback. Our empirical evaluation highlights the promise of SLOG on a
challenging, real-world medical diagnosis task.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 07:34:42 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Mar 2024 21:46:45 GMT"
}
] | 1,711,929,600,000 | [
[
"Banerjee",
"Debodeep",
""
],
[
"Teso",
"Stefano",
""
],
[
"Sayin",
"Burcu",
""
],
[
"Passerini",
"Andrea",
""
]
] |
2403.16508 | Dillon Z. Chen | Dillon Z. Chen, Felipe Trevizan, Sylvie Thi\'ebaux | Return to Tradition: Learning Reliable Heuristics with Classical Machine
Learning | Extended version of ICAPS 2024 paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Current approaches for learning for planning have yet to achieve competitive
performance against classical planners in several domains, and have poor
overall performance. In this work, we construct novel graph representations of
lifted planning tasks and use the Weisfeiler-Leman (WL) algorithm to generate features from them.
These features are used with classical machine learning methods which have up
to 2 orders of magnitude fewer parameters and train up to 3 orders of magnitude
faster than the state-of-the-art deep learning for planning models. Our novel
approach, WL-GOOSE, reliably learns heuristics from scratch and outperforms the
$h^{\text{FF}}$ heuristic in a fair competition setting. It also outperforms or
ties with LAMA on 4 out of 10 domains on coverage and 7 out of 10 domains on
plan quality. WL-GOOSE is the first learning for planning model which achieves
these feats. Furthermore, we study the connections between our novel WL feature
generation method, previous theoretically flavoured learning architectures, and
Description Logic Features for planning.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 07:47:52 GMT"
}
] | 1,711,411,200,000 | [
[
"Chen",
"Dillon Z.",
""
],
[
"Trevizan",
"Felipe",
""
],
[
"Thiébaux",
"Sylvie",
""
]
] |
2403.16524 | Bastin Tony Roy Savarimuthu | Bastin Tony Roy Savarimuthu, Surangika Ranathunga, Stephen Cranefield | Harnessing the power of LLMs for normative reasoning in MASs | 12 pages, 1 figure, accepted to COINE 2024 workshop at AAMAS 2024
(https://coin-workshop.github.io/coine-2024-auckland/accepted_papers.html) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Software agents, both human and computational, do not exist in isolation and
often need to collaborate or coordinate with others to achieve their goals. In
human society, social mechanisms such as norms ensure efficient functioning,
and these techniques have been adopted by researchers in multi-agent systems
(MAS) to create socially aware agents. However, traditional techniques have
limitations, such as operating in limited environments and often relying on brittle
symbolic reasoning. The advent of Large Language Models (LLMs) offers a
promising solution, providing a rich and expressive vocabulary for norms and
enabling norm-capable agents that can perform a range of tasks such as norm
discovery, normative reasoning and decision-making. This paper examines the
potential of LLM-based agents to acquire normative capabilities, drawing on
recent Natural Language Processing (NLP) and LLM research. We present our
vision for creating normative LLM agents. In particular, we discuss how the
recently proposed "LLM agent" approaches can be extended to implement such
normative LLM agents. We also highlight challenges in this emerging field. This
paper thus aims to foster collaboration between MAS, NLP and LLM researchers in
order to advance the field of normative agents.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 08:09:01 GMT"
}
] | 1,711,411,200,000 | [
[
"Savarimuthu",
"Bastin Tony Roy",
""
],
[
"Ranathunga",
"Surangika",
""
],
[
"Cranefield",
"Stephen",
""
]
] |