id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2405.04294 | Xiangpeng Wan | Xiangpeng Wan, Haicheng Deng, Kai Zou, Shiqi Xu | Enhancing the Efficiency and Accuracy of Underlying Asset Reviews in
Structured Finance: The Application of Multi-agent Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structured finance, which involves restructuring diverse assets into
securities like MBS, ABS, and CDOs, enhances capital market efficiency but
presents significant due diligence challenges. This study explores the
integration of artificial intelligence (AI) with traditional asset review
processes to improve efficiency and accuracy in structured finance. Using both
open-source and closed-source large language models (LLMs), we demonstrate
that AI can effectively automate the verification of information between loan
applications and bank statements. While closed-source models such as GPT-4 show
superior performance, open-source models like LLAMA3 offer a cost-effective
alternative. Dual-agent systems further increase accuracy, though this comes
with higher operational costs. This research highlights AI's potential to
minimize manual errors and streamline due diligence, suggesting a broader
application of AI in financial document analysis and risk management.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 13:09:49 GMT"
}
] | 1,715,126,400,000 | [
[
"Wan",
"Xiangpeng",
""
],
[
"Deng",
"Haicheng",
""
],
[
"Zou",
"Kai",
""
],
[
"Xu",
"Shiqi",
""
]
] |
2405.04300 | Mustafa Abdelwahed | Mustafa F Abdelwahed, Joan Espasa, Alice Toniolo, Ian P. Gent | Behaviour Planning: A Toolkit for Diverse Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diverse planning is the problem of generating plans with distinct
characteristics. This is valuable for many real-world scenarios, including
applications related to plan recognition and business process automation. In
this work, we introduce \emph{Behaviour Planning}, a diverse planning toolkit
that can characterise and generate diverse plans based on modular diversity
models. We present a qualitative framework for describing diversity models, a
planning approach for generating plans aligned with any given diversity model,
and provide a practical implementation of an SMT-based behaviour planner. We
showcase how the qualitative approach offered by Behaviour Planning allows it
to overcome various challenges faced by previous approaches. Finally, the
experimental evaluation shows the effectiveness of Behaviour Planning in
generating diverse plans compared to state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 13:18:22 GMT"
}
] | 1,715,126,400,000 | [
[
"Abdelwahed",
"Mustafa F",
""
],
[
"Espasa",
"Joan",
""
],
[
"Toniolo",
"Alice",
""
],
[
"Gent",
"Ian P.",
""
]
] |
2405.04323 | Moritz Möller | Alexandra Gobrecht, Felix Tuma, Moritz Möller, Thomas Zöller, Mark
Zakhvatkin, Alexandra Wuttig, Holger Sommerfeldt and Sven Schütt | Beyond human subjectivity and error: a novel AI grading system | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The grading of open-ended questions is a high-effort, high-impact task in
education. Automating this task promises a significant reduction in workload
for education professionals, as well as more consistent grading outcomes for
students, by circumventing human subjectivity and error. While recent
breakthroughs in AI technology might facilitate such automation, this has not
been demonstrated at scale. In this paper, we introduce a novel automatic short
answer grading (ASAG) system. The system is based on a fine-tuned open-source
transformer model which we trained on a large set of exam data from university
courses across a large range of disciplines. We evaluated the trained model's
performance against held-out test data in a first experiment and found high
accuracy levels across a broad spectrum of unseen questions, even in unseen
courses. We further compared the performance of our model with that of
certified human domain experts in a second experiment: we first assembled
another test dataset from real historical exams - the historic grades contained
in that data were awarded to students in a regulated, legally binding
examination process; we therefore considered them as ground truth for our
experiment. We then asked certified human domain experts and our model to grade
the historic student answers again without disclosing the historic grades.
Finally, we compared the hence obtained grades with the historic grades (our
ground truth). We found that for the courses examined, the model deviated less
from the official historic grades than the human re-graders: the model's
median absolute error was 44% smaller than that of the human re-graders, implying
that the model is more consistent than humans in grading. These results suggest
that leveraging AI enhanced grading can reduce human subjectivity, improve
consistency and thus ultimately increase fairness.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 13:49:59 GMT"
}
] | 1,715,126,400,000 | [
[
"Gobrecht",
"Alexandra",
""
],
[
"Tuma",
"Felix",
""
],
[
"Möller",
"Moritz",
""
],
[
"Zöller",
"Thomas",
""
],
[
"Zakhvatkin",
"Mark",
""
],
[
"Wuttig",
"Alexandra",
""
],
[
"Sommerfeldt",
"Holger",
""
],
[
"Schütt",
"Sven",
""
]
] |
2405.04333 | Stefaan Verhulst Dr | Hannah Chafetz, Sampriti Saxena, and Stefaan G. Verhulst | A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open
Data and Generative AI | 58 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Since late 2022, generative AI has taken the world by storm, with widespread
use of tools including ChatGPT, Gemini, and Claude. Generative AI and large
language model (LLM) applications are transforming how individuals find and
access data and knowledge. However, the intricate relationship between open
data and generative AI, and the vast potential it holds for driving innovation
in this field remain underexplored areas. This white paper seeks to unpack the
relationship between open data and generative AI and explore possible
components of a new Fourth Wave of Open Data: Is open data becoming AI ready?
Is open data moving towards a data commons approach? Is generative AI making
open data more conversational? Will generative AI improve open data quality and
provenance? Towards this end, we provide a new Spectrum of Scenarios framework.
This framework outlines a range of scenarios in which open data and generative
AI could intersect and what is required from a data quality and provenance
perspective to make open data ready for those specific scenarios. These
scenarios include: pretraining, adaptation, inference and insight generation,
data augmentation, and open-ended exploration. Through this process, we found
that in order for data holders to embrace generative AI to improve open data
access and develop greater insights from open data, they first must make
progress around five key areas: enhance transparency and documentation, uphold
quality and integrity, promote interoperability and standards, improve
accessibility and usability, and address ethical considerations.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 14:01:33 GMT"
}
] | 1,715,126,400,000 | [
[
"Chafetz",
"Hannah",
""
],
[
"Saxena",
"Sampriti",
""
],
[
"Verhulst",
"Stefaan G.",
""
]
] |
2405.04336 | Zhihao Wen | Zhihao Wen, Yuan Fang, Pengcheng Wei, Fayao Liu, Zhenghua Chen, Min Wu | Temporal and Heterogeneous Graph Neural Network for Remaining Useful
Life Prediction | 12 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting Remaining Useful Life (RUL) plays a crucial role in the
prognostics and health management of industrial systems that involve a variety
of interrelated sensors. Given a constant stream of time series sensory data
from such systems, deep learning models have risen to prominence at identifying
complex, nonlinear temporal dependencies in these data. In addition to the
temporal dependencies of individual sensors, spatial dependencies emerge as
important correlations among these sensors, which can be naturally modelled by
a temporal graph that describes time-varying spatial relationships. However,
the majority of existing studies have relied on capturing discrete snapshots of
this temporal graph, a coarse-grained approach that leads to loss of temporal
information. Moreover, given the variety of heterogeneous sensors, it becomes
vital that such inherent heterogeneity is leveraged for RUL prediction in
temporal sensor graphs. To capture the nuances of the temporal and spatial
relationships and heterogeneous characteristics in an interconnected graph of
sensors, we introduce a novel model named Temporal and Heterogeneous Graph
Neural Networks (THGNN). Specifically, THGNN aggregates historical data from
neighboring nodes to accurately capture the temporal dynamics and spatial
correlations within the stream of sensor data in a fine-grained manner.
Moreover, the model leverages Feature-wise Linear Modulation (FiLM) to address
the diversity of sensor types, significantly improving the model's capacity to
learn the heterogeneity in the data sources. Finally, we have validated the
effectiveness of our approach through comprehensive experiments. Our empirical
findings demonstrate significant advancements on the N-CMAPSS dataset,
achieving improvements of up to 19.2% and 31.6% in terms of two different
evaluation metrics over state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 14:08:57 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jun 2024 04:49:21 GMT"
}
] | 1,717,459,200,000 | [
[
"Wen",
"Zhihao",
""
],
[
"Fang",
"Yuan",
""
],
[
"Wei",
"Pengcheng",
""
],
[
"Liu",
"Fayao",
""
],
[
"Chen",
"Zhenghua",
""
],
[
"Wu",
"Min",
""
]
] |
2405.04443 | Simon Werner | Simon Werner, Katharina Christ, Laura Bernardy, Marion G. Müller,
Achim Rettinger | POV Learning: Individual Alignment of Multimodal Models using Human
Perception | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Aligning machine learning systems with human expectations is mostly attempted
by training with manually vetted human behavioral samples, typically explicit
feedback. This is done on a population level since the context that is
capturing the subjective Point-Of-View (POV) of a concrete person in a specific
situational context is not retained in the data. However, we argue that
alignment on an individual level can boost the subjective predictive
performance for the individual user interacting with the system considerably.
Since perception differs for each person, the same situation is observed
differently. Consequently, the basis for decision making and the subsequent
reasoning processes and observable reactions differ. We hypothesize that
individual perception patterns can be used for improving the alignment on an
individual level. We test this, by integrating perception information into
machine learning systems and measuring their predictive performance
with respect to individual subjective assessments. For our empirical study, we collect a
novel data set of multimodal stimuli and corresponding eye tracking sequences
for the novel task of Perception-Guided Crossmodal Entailment and tackle it
with our Perception-Guided Multimodal Transformer. Our findings suggest that
exploiting individual perception signals for the machine learning of subjective
human assessments provides a valuable cue for individual alignment. It does not
only improve the overall predictive performance from the point-of-view of the
individual user but might also contribute to steering AI systems towards every
person's individual expectations and values.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 16:07:29 GMT"
}
] | 1,715,126,400,000 | [
[
"Werner",
"Simon",
""
],
[
"Christ",
"Katharina",
""
],
[
"Bernardy",
"Laura",
""
],
[
"Müller",
"Marion G.",
""
],
[
"Rettinger",
"Achim",
""
]
] |
2405.04453 | Jiajun Liu | Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li,
Ke Ji, Yanhe Liu | Towards Continual Knowledge Graph Embedding via Incremental Distillation | Accepted by AAAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional knowledge graph embedding (KGE) methods typically require
preserving the entire knowledge graph (KG) with significant training costs when
new knowledge emerges. To address this issue, the continual knowledge graph
embedding (CKGE) task has been proposed to train the KGE model by learning
emerging knowledge efficiently while simultaneously preserving decent old
knowledge. However, the explicit graph structure in KGs, which is critical for
the above goal, has been heavily ignored by existing CKGE methods. On the one
hand, existing methods usually learn new triples in a random order, destroying
the inner structure of new KGs. On the other hand, old triples are preserved
with equal priority, failing to alleviate catastrophic forgetting effectively.
In this paper, we propose a competitive method for CKGE based on incremental
distillation (IncDE), which considers the full use of the explicit graph
structure in KGs. First, to optimize the learning order, we introduce a
hierarchical strategy, ranking new triples for layer-by-layer learning. By
employing the inter- and intra-hierarchical orders together, new triples are
grouped into layers based on the graph structure features. Secondly, to
preserve the old knowledge effectively, we devise a novel incremental
distillation mechanism, which facilitates the seamless transfer of entity
representations from the previous layer to the next one, promoting old
knowledge preservation. Finally, we adopt a two-stage training paradigm to
avoid the over-corruption of old knowledge influenced by under-trained new
knowledge. Experimental results demonstrate the superiority of IncDE over
state-of-the-art baselines. Notably, the incremental distillation mechanism
contributes to improvements of 0.2%-6.5% in the mean reciprocal rank (MRR)
score.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 16:16:00 GMT"
}
] | 1,715,126,400,000 | [
[
"Liu",
"Jiajun",
""
],
[
"Ke",
"Wenjun",
""
],
[
"Wang",
"Peng",
""
],
[
"Shang",
"Ziyu",
""
],
[
"Gao",
"Jinhua",
""
],
[
"Li",
"Guozheng",
""
],
[
"Ji",
"Ke",
""
],
[
"Liu",
"Yanhe",
""
]
] |
2405.04776 | Karthik Valmeekam | Kaya Stechly, Karthik Valmeekam, Subbarao Kambhampati | Chain of Thoughtlessness? An Analysis of CoT in Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language model (LLM) performance on reasoning problems typically does
not generalize out of distribution. Previous work has claimed that this can be
mitigated with chain of thought prompting, a method of demonstrating solution
procedures, with the intuition that it is possible to in-context teach an LLM an
algorithm for solving the problem. This paper presents a case study of chain of
thought on problems from Blocksworld, a classical planning domain, and examines
the performance of two state-of-the-art LLMs across two axes: generality of
examples given in prompt, and complexity of problems queried with each prompt.
While our problems are very simple, we only find meaningful performance
improvements from chain of thought prompts when those prompts are exceedingly
specific to their problem class, and find that those improvements quickly
deteriorate as the size n of the query-specified stack grows past the size of
stacks shown in the examples. We also create scalable variants of three domains
commonly studied in previous CoT papers and demonstrate the existence of
similar failure modes. Our results hint that, contrary to previous claims in
the literature, CoT's performance improvements do not stem from the model
learning general algorithmic procedures via demonstrations but depend on
carefully engineering highly problem specific prompts. This spotlights
drawbacks of chain of thought, especially the sharp tradeoff between possible
performance gains and the amount of human labor necessary to generate examples
with correct reasoning traces.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 02:48:28 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jun 2024 02:44:52 GMT"
}
] | 1,717,718,400,000 | [
[
"Stechly",
"Kaya",
""
],
[
"Valmeekam",
"Karthik",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2405.04868 | Olga Mashkova | Olga Mashkova, Fernando Zhapa-Camacho, Robert Hoehndorf | Enhancing Geometric Ontology Embeddings for $\mathcal{EL}^{++}$ with
Negative Sampling and Deductive Closure Filtering | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontology embeddings map classes, relations, and individuals in ontologies
into $\mathbb{R}^n$, and within $\mathbb{R}^n$ similarity between entities can
be computed or new axioms inferred. For ontologies in the Description Logic
$\mathcal{EL}^{++}$, several embedding methods have been developed that
explicitly generate models of an ontology. However, these methods suffer from
some limitations; they do not distinguish between statements that are
unprovable and provably false, and therefore they may use entailed statements
as negatives. Furthermore, they do not utilize the deductive closure of an
ontology to identify statements that are inferred but not asserted. We
evaluated a set of embedding methods for $\mathcal{EL}^{++}$ ontologies based
on a high-dimensional ball representation of concept descriptions, incorporating
several modifications that aim to make use of the ontology deductive closure.
In particular, we designed novel negative losses that account both for the
deductive closure and different types of negatives. We demonstrate that our
embedding methods improve over the baseline ontology embedding in the task of
knowledge base or ontology completion.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 07:50:21 GMT"
}
] | 1,715,212,800,000 | [
[
"Mashkova",
"Olga",
""
],
[
"Zhapa-Camacho",
"Fernando",
""
],
[
"Hoehndorf",
"Robert",
""
]
] |
2405.04937 | Michael Mock (1), Sebastian Schmidt (1), Felix Müller (2 and 1),
Rebekka Görge (1), Anna Schmitz (1), Elena Haedecke (2 and 1), Angelika
Voss (1), Dirk Hecker (1), Maximillian Poretschkin (1 and 2) ((1) Fraunhofer
Institute for Intelligent Analysis and Information Systems IAIS Sankt
Augustin, Germany, (2) University of Bonn, Bonn, Germany) | Developing trustworthy AI applications with foundation models | 24 pages, 11 figures | null | 10.24406/publica-2987 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The trustworthiness of AI applications has been the subject of recent
research and is also addressed in the EU's recently adopted AI Regulation. The
currently emerging foundation models in the field of text, speech and image
processing offer completely new possibilities for developing AI applications.
This whitepaper shows how the trustworthiness of an AI application developed
with foundation models can be evaluated and ensured. For this purpose, the
application-specific, risk-based approach for testing and ensuring the
trustworthiness of AI applications, as developed in the 'AI Assessment Catalog
- Guideline for Trustworthy Artificial Intelligence' by Fraunhofer IAIS, is
transferred to the context of foundation models. Special consideration is given
to the fact that specific risks of foundation models can have an impact on the
AI application and must also be taken into account when checking
trustworthiness. Chapter 1 of the white paper explains the fundamental
relationship between foundation models and AI applications based on them in
terms of trustworthiness. Chapter 2 provides an introduction to the technical
construction of foundation models and Chapter 3 shows how AI applications can
be developed based on them. Chapter 4 provides an overview of the resulting
risks regarding trustworthiness. Chapter 5 shows which requirements for AI
applications and foundation models are to be expected according to the draft of
the European Union's AI Regulation and Chapter 6 finally shows the system and
procedure for meeting trustworthiness requirements.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 10:08:45 GMT"
}
] | 1,715,212,800,000 | [
[
"Mock",
"Michael",
"",
"2 and 1"
],
[
"Schmidt",
"Sebastian",
"",
"2 and 1"
],
[
"Müller",
"Felix",
"",
"2 and 1"
],
[
"Görge",
"Rebekka",
"",
"2 and 1"
],
[
"Schmitz",
"Anna",
"",
"2 and 1"
],
[
"Haedecke",
"Elena",
"",
"2 and 1"
],
[
"Voss",
"Angelika",
"",
"1 and 2"
],
[
"Hecker",
"Dirk",
"",
"1 and 2"
],
[
"Poretschkin",
"Maximillian",
"",
"1 and 2"
]
] |
2405.05146 | Suzana Veljanovska | Hans Dermot Doran and Suzana Veljanovska | Hybrid Convolutional Neural Networks with Reliability Guarantee | 2024 54th Annual IEEE/IFIP International Conference on Dependable
Systems and Networks (DSN 2024). Dependable and Secure Machine Learning
Workshop (DSML 2024), Brisbane, Australia, June 24-27, 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Making AI safe and dependable requires the generation of dependable models
and dependable execution of those models. We propose redundant execution, a
well-known technique, to ensure reliable execution of the AI model. This
generic technique will extend the application scope of
AI-accelerators that do not feature well-documented safety or dependability
properties. Typical redundancy techniques incur at least double or triple the
computational expense of the original. We adopt a co-design approach,
integrating reliable model execution with non-reliable execution, focusing
additional computational expense only where it is strictly necessary. We
describe the design, implementation and some preliminary results of a hybrid
CNN.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 15:39:38 GMT"
},
{
"version": "v2",
"created": "Thu, 9 May 2024 09:31:36 GMT"
}
] | 1,715,299,200,000 | [
[
"Doran",
"Hans Dermot",
""
],
[
"Veljanovska",
"Suzana",
""
]
] |
2405.05594 | Ting Han Wei | Owen Randall, Martin Müller, Ting Han Wei, Ryan Hayward | Expected Work Search: Combining Win Rate and Proof Size Estimation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Expected Work Search (EWS), a new game solving algorithm. EWS
combines win rate estimation, as used in Monte Carlo Tree Search, with proof
size estimation, as used in Proof Number Search. The search efficiency of EWS
stems from minimizing a novel notion of Expected Work, which predicts the
expected computation required to solve a position. EWS outperforms traditional
solving algorithms on the games of Go and Hex. For Go, we present the first
solution to the empty 5x5 board with the commonly used positional superko
ruleset. For Hex, our algorithm solves the empty 8x8 board in under 4 minutes.
Experiments show that EWS succeeds both with and without extensive
domain-specific knowledge.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 07:33:06 GMT"
}
] | 1,715,299,200,000 | [
[
"Randall",
"Owen",
""
],
[
"Müller",
"Martin",
""
],
[
"Wei",
"Ting Han",
""
],
[
"Hayward",
"Ryan",
""
]
] |
2405.05662 | Wietze Koops | Wietze Koops, Sebastian Junges, Nils Jansen | Approximate Dec-POMDP Solving Using Multi-Agent A* | 19 pages, 3 figures. Extended version (with appendix) of the paper to
appear in IJCAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an A*-based algorithm to compute policies for finite-horizon
Dec-POMDPs. Our goal is to sacrifice optimality in favor of scalability for
larger horizons. The main ingredients of our approach are (1) using clustered
sliding window memory, (2) pruning the A* search tree, and (3) using novel A*
heuristics. Our experiments show competitive performance to the
state-of-the-art. Moreover, for multiple benchmarks, we achieve superior
performance. In addition, we provide an A* algorithm that finds upper bounds
for the optimum, tailored towards problems with long horizons. The main
ingredient is a new heuristic that periodically reveals the state, thereby
limiting the number of reachable beliefs. Our experiments demonstrate the
efficacy and scalability of the approach.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 10:33:07 GMT"
}
] | 1,715,299,200,000 | [
[
"Koops",
"Wietze",
""
],
[
"Junges",
"Sebastian",
""
],
[
"Jansen",
"Nils",
""
]
] |
2405.06109 | Rahul Nellikkath | Rahul Nellikkath, Mathieu Tanneau, Pascal Van Hentenryck, Spyros
Chatzivasileiadis | Scalable Exact Verification of Optimization Proxies for Large-Scale
Optimal Power Flow | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Optimal Power Flow (OPF) is a valuable tool for power system operators, but
it is a difficult problem to solve for large systems.
Machine Learning (ML) algorithms, especially Neural Networks-based (NN)
optimization proxies, have emerged as a promising new tool for solving OPF, by
estimating the OPF solution much faster than traditional methods.
However, these ML algorithms act as black boxes, and it is hard to assess
their worst-case performance across the entire range of possible inputs that an
OPF can have.
Previous work has proposed a mixed-integer programming-based methodology to
quantify the worst-case violations caused by a NN trained to estimate the OPF
solution, throughout the entire input domain.
This approach, however, does not scale well to large power systems and more
complex NN models.
This paper addresses these issues by proposing a scalable algorithm to
compute worst-case violations of NN proxies used for approximating large power
systems within a reasonable time limit.
This will help build trust in ML models to be deployed in large
industry-scale power grids.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 21:30:03 GMT"
}
] | 1,715,558,400,000 | [
[
"Nellikkath",
"Rahul",
""
],
[
"Tanneau",
"Mathieu",
""
],
[
"Van Hentenryck",
"Pascal",
""
],
[
"Chatzivasileiadis",
"Spyros",
""
]
] |
2405.06203 | Joyce Horn Fonteles | Joyce Fonteles, Eduardo Davalos, Ashwin T. S., Yike Zhang, Mengxi
Zhou, Efrat Ayalon, Alicia Lane, Selena Steinberg, Gabriella Anton, Joshua
Danish, Noel Enyedy, Gautam Biswas | A First Step in Using Machine Learning Methods to Enhance Interaction
Analysis for Embodied Learning Environments | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Investigating children's embodied learning in mixed-reality environments,
where they collaboratively simulate scientific processes, requires analyzing
complex multimodal data to interpret their learning and coordination behaviors.
Learning scientists have developed Interaction Analysis (IA) methodologies for
analyzing such data, but this requires researchers to watch hours of videos to
extract and interpret students' learning patterns. Our study aims to simplify
researchers' tasks, using Machine Learning and Multimodal Learning Analytics to
support the IA processes. Our study combines machine learning algorithms and
multimodal analyses to support and streamline researcher efforts in developing
a comprehensive understanding of students' scientific engagement through their
movements, gaze, and affective responses in a simulated scenario. To facilitate
an effective researcher-AI partnership, we present an initial case study to
determine the feasibility of visually representing students' states, actions,
gaze, affect, and movement on a timeline. Our case study focuses on a specific
science scenario where students learn about photosynthesis. The timeline allows
us to investigate the alignment of critical learning moments identified by
multimodal and interaction analysis, and uncover insights into students'
temporal learning progressions.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 02:40:24 GMT"
}
] | 1,715,558,400,000 | [
[
"Fonteles",
"Joyce",
""
],
[
"Davalos",
"Eduardo",
""
],
[
"S.",
"Ashwin T.",
""
],
[
"Zhang",
"Yike",
""
],
[
"Zhou",
"Mengxi",
""
],
[
"Ayalon",
"Efrat",
""
],
[
"Lane",
"Alicia",
""
],
[
"Steinberg",
"Selena",
""
],
[
"Anton",
"Gabriella",
""
],
[
"Danish",
"Joshua",
""
],
[
"Enyedy",
"Noel",
""
],
[
"Biswas",
"Gautam",
""
]
] |
2405.06232 | Tong Xiao | Tong Xiao, Jiayu Liu, Zhenya Huang, Jinze Wu, Jing Sha, Shijin Wang,
Enhong Chen | Learning to Solve Geometry Problems via Simulating Human Dual-Reasoning
Process | IJCAI 2024 Accepted | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geometry Problem Solving (GPS), which is a classic and challenging math
problem, has attracted much attention in recent years. It requires a solver to
comprehensively understand both text and diagram, master essential geometry
knowledge, and appropriately apply it in reasoning. However, existing works
follow a paradigm of neural machine translation and only focus on enhancing the
capability of encoders, which neglects the essential characteristics of human
geometry reasoning. In this paper, inspired by dual-process theory, we propose
a Dual-Reasoning Geometry Solver (DualGeoSolver) to simulate the dual-reasoning
process of humans for GPS. Specifically, we construct two systems in
DualGeoSolver, namely Knowledge System and Inference System. Knowledge System
controls an implicit reasoning process, which is responsible for providing
diagram information and geometry knowledge according to a step-wise reasoning
goal generated by Inference System. Inference System conducts an explicit
reasoning process, which specifies the goal in each reasoning step and applies
the knowledge to generate program tokens for resolving it. The two systems
carry out the above process iteratively, which behaves more in line with human
cognition. We conduct extensive experiments on two benchmark datasets, GeoQA
and GeoQA+. The results demonstrate the superiority of DualGeoSolver in both
solving accuracy and robustness from explicitly modeling human reasoning
process and knowledge application.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 03:53:49 GMT"
}
] | 1,715,558,400,000 | [
[
"Xiao",
"Tong",
""
],
[
"Liu",
"Jiayu",
""
],
[
"Huang",
"Zhenya",
""
],
[
"Wu",
"Jinze",
""
],
[
"Sha",
"Jing",
""
],
[
"Wang",
"Shijin",
""
],
[
"Chen",
"Enhong",
""
]
] |
2405.06266 | Baichao Long | Jianli Xiao and Baichao Long | A Multi-Channel Spatial-Temporal Transformer Model for Traffic Flow
Forecasting | null | Xiao J, Long B. A Multi-Channel Spatial-Temporal Transformer Model
for Traffic Flow Forecasting[J]. Information Sciences, 2024: 120648 | 10.1016/j.ins.2024.120648 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic flow forecasting is a crucial task in transportation management and
planning. The main challenges for traffic flow forecasting are that (1) as the
length of prediction time increases, the accuracy of prediction will decrease;
(2) the predicted results greatly rely on the extraction of temporal and
spatial dependencies from the road networks. To overcome the challenges
mentioned above, we propose a multi-channel spatial-temporal transformer model
for traffic flow forecasting, which improves the accuracy of the prediction by
fusing results from different channels of traffic data. Our approach leverages
graph convolutional network to extract spatial features from each channel while
using a transformer-based architecture to capture temporal dependencies across
channels. We introduce an adaptive adjacency matrix to overcome limitations in
feature extraction from fixed topological structures. Experimental results on
six real-world datasets demonstrate that introducing a multi-channel mechanism
into the temporal model enhances performance and our proposed model outperforms
state-of-the-art models in terms of accuracy.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 06:37:07 GMT"
}
] | 1,715,558,400,000 | [
[
"Xiao",
"Jianli",
""
],
[
"Long",
"Baichao",
""
]
] |
2405.06296 | Naoto Sato | Naoto Sato | Fast Evaluation of DNN for Past Dataset in Incremental Learning | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | During the operation of a system including a deep neural network (DNN), new
input values that were not included in the training dataset are given to the
DNN. In such a case, the DNN may be incrementally trained with the new input
values; however, that training may reduce the accuracy of the DNN in regard to
the dataset that was previously obtained and used for the past training. It is
necessary to evaluate the effect of the additional training on the accuracy for
the past dataset. However, evaluation by testing all the input values included
in the past dataset takes time. Therefore, we propose a new method to quickly
evaluate the effect on the accuracy for the past dataset. In the proposed
method, the gradient of the parameter values (such as weight and bias) for the
past dataset is extracted by running the DNN before the training. Then, after
the training, its effect on the accuracy with respect to the past dataset is
calculated from the gradient and update differences of the parameter values. To
show the usefulness of the proposed method, we present experimental results
with several datasets. The results show that the proposed method can estimate
the accuracy change by additional training in a constant time.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 07:55:08 GMT"
}
] | 1,715,558,400,000 | [
[
"Sato",
"Naoto",
""
]
] |
2405.06413 | Rongyu Zhang | Rongyu Zhang, Yun Chen, Chenrui Wu, Fangxin Wang, Bo Li | Multi-level Personalized Federated Learning on Heterogeneous and
Long-Tailed Data | 14 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) offers a privacy-centric distributed learning
framework, enabling model training on individual clients and central
aggregation without necessitating data exchange. Nonetheless, FL
implementations often suffer from non-i.i.d. and long-tailed class
distributions across mobile applications, e.g., autonomous vehicles, which
leads models to overfit, as local training may converge to sub-optimal solutions. In
our study, we explore the impact of data heterogeneity on model bias and
introduce an innovative personalized FL framework, Multi-level Personalized
Federated Learning (MuPFL), which leverages the hierarchical architecture of FL
to fully harness computational resources at various levels. This framework
integrates three pivotal modules: Biased Activation Value Dropout (BAVD) to
mitigate overfitting and accelerate training; Adaptive Cluster-based Model
Update (ACMU) to refine local models ensuring coherent global aggregation; and
Prior Knowledge-assisted Classifier Fine-tuning (PKCF) to bolster
classification and personalize models in accord with skewed local data with
shared knowledge. Extensive experiments on diverse real-world datasets for
image classification and semantic segmentation validate that MuPFL consistently
outperforms state-of-the-art baselines, even under extreme non-i.i.d. and
long-tail conditions, which enhances accuracy by as much as 7.39% and
accelerates training by up to 80%, marking significant advancements in
both efficiency and effectiveness.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 11:52:53 GMT"
}
] | 1,715,558,400,000 | [
[
"Zhang",
"Rongyu",
""
],
[
"Chen",
"Yun",
""
],
[
"Wu",
"Chenrui",
""
],
[
"Wang",
"Fangxin",
""
],
[
"Li",
"Bo",
""
]
] |
2405.06510 | Yichen Qian | Yichen Qian, Yongyi He, Rong Zhu, Jintao Huang, Zhijian Ma, Haibin
Wang, Yaohua Wang, Xiuyu Sun, Defu Lian, Bolin Ding, Jingren Zhou | UniDM: A Unified Framework for Data Manipulation with Large Language
Models | MLSys24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing effective data manipulation methods is a long-standing problem in
data lakes. Traditional methods, which rely on rules or machine learning
models, require extensive human efforts on training data collection and tuning
models. Recent methods apply Large Language Models (LLMs) to resolve multiple
data manipulation tasks. They exhibit bright benefits in terms of performance
but still require customized designs to fit each specific task. This is very
costly and cannot catch up with the requirements of big data lake platforms.
In this paper, inspired by the cross-task generality of LLMs on NLP tasks, we
take the first step toward designing an automatic and general solution to
tackle data manipulation tasks. We propose UniDM, a unified framework which
establishes a new paradigm to process data manipulation tasks using LLMs. UniDM
formalizes a number of data manipulation tasks in a unified form and abstracts
three main general steps to solve each task. We develop an automatic context
retrieval to allow the LLMs to retrieve data from data lakes, potentially
containing evidence and factual information. For each step, we design effective
prompts to guide LLMs to produce high quality results. By our comprehensive
evaluation on a variety of benchmarks, our UniDM exhibits great generality and
state-of-the-art performance on a wide variety of data manipulation tasks.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 14:44:04 GMT"
}
] | 1,715,558,400,000 | [
[
"Qian",
"Yichen",
""
],
[
"He",
"Yongyi",
""
],
[
"Zhu",
"Rong",
""
],
[
"Huang",
"Jintao",
""
],
[
"Ma",
"Zhijian",
""
],
[
"Wang",
"Haibin",
""
],
[
"Wang",
"Yaohua",
""
],
[
"Sun",
"Xiuyu",
""
],
[
"Lian",
"Defu",
""
],
[
"Ding",
"Bolin",
""
],
[
"Zhou",
"Jingren",
""
]
] |
2405.06624 | Joar Skalse | David "davidad" Dalrymple and Joar Skalse and Yoshua Bengio and Stuart
Russell and Max Tegmark and Sanjit Seshia and Steve Omohundro and Christian
Szegedy and Ben Goldhaber and Nora Ammann and Alessandro Abate and Joe
Halpern and Clark Barrett and Ding Zhao and Tan Zhi-Xuan and Jeannette Wing
and Joshua Tenenbaum | Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable
AI Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring that AI systems reliably and robustly avoid harmful or dangerous
behaviours is a crucial challenge, especially for AI systems with a high degree
of autonomy and general intelligence, or systems used in safety-critical
contexts. In this paper, we will introduce and define a family of approaches to
AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature
of these approaches is that they aim to produce AI systems which are equipped
with high-assurance quantitative safety guarantees. This is achieved by the
interplay of three core components: a world model (which provides a
mathematical description of how the AI system affects the outside world), a
safety specification (which is a mathematical description of what effects are
acceptable), and a verifier (which provides an auditable proof certificate that
the AI satisfies the safety specification relative to the world model). We
outline a number of approaches for creating each of these three core
components, describe the main technical challenges, and suggest a number of
potential solutions to them. We also argue for the necessity of this approach
to AI safety, and for the inadequacy of the main alternative approaches.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 17:38:32 GMT"
},
{
"version": "v2",
"created": "Fri, 17 May 2024 13:31:36 GMT"
}
] | 1,716,163,200,000 | [
[
"Dalrymple",
"David \"davidad\"",
""
],
[
"Skalse",
"Joar",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Russell",
"Stuart",
""
],
[
"Tegmark",
"Max",
""
],
[
"Seshia",
"Sanjit",
""
],
[
"Omohundro",
"Steve",
""
],
[
"Szegedy",
"Christian",
""
],
[
"Goldhaber",
"Ben",
""
],
[
"Ammann",
"Nora",
""
],
[
"Abate",
"Alessandro",
""
],
[
"Halpern",
"Joe",
""
],
[
"Barrett",
"Clark",
""
],
[
"Zhao",
"Ding",
""
],
[
"Zhi-Xuan",
"Tan",
""
],
[
"Wing",
"Jeannette",
""
],
[
"Tenenbaum",
"Joshua",
""
]
] |
2405.06846 | Danny Halawi | Danny Halawi, Aron Sarmasi, Siena Saltzen, Joshua McCoy | Dominion: A New Frontier for AI Research | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, machine learning approaches have made dramatic advances,
reaching superhuman performance in Go, Atari, and poker variants. These games,
and others before them, have served not only as a testbed but have also helped
to push the boundaries of AI research. Continuing this tradition, we examine
the tabletop game Dominion and discuss the properties that make it well-suited
to serve as a benchmark for the next generation of reinforcement learning (RL)
algorithms. We also present the Dominion Online Dataset, a collection of over
2,000,000 games of Dominion played by experienced players on the Dominion
Online webserver. Finally, we introduce an RL baseline bot that uses existing
techniques to beat common heuristic-based bots, and shows competitive
performance against the previously strongest bot, Provincial.
| [
{
"version": "v1",
"created": "Fri, 10 May 2024 23:03:02 GMT"
}
] | 1,715,644,800,000 | [
[
"Halawi",
"Danny",
""
],
[
"Sarmasi",
"Aron",
""
],
[
"Saltzen",
"Siena",
""
],
[
"McCoy",
"Joshua",
""
]
] |
2405.06915 | Ming-Hui Huang | Ming-Hui Huang, Roland T. Rust | Automating Creativity | 46 pages, 2 tables, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative AI (GenAI) has spurred the expectation of being creative, due to
its ability to generate content, yet so far, its creativity has somewhat
disappointed, because it is trained using existing data following human
intentions to generate outputs. The purpose of this paper is to explore what is
required to evolve AI from generative to creative. Based on a reinforcement
learning approach and building upon various research streams of computational
creativity, we develop a triple prompt-response-reward engineering framework to
develop the creative capability of GenAI. This framework consists of three
components: 1) a prompt model for expected creativity by developing
discriminative prompts that are objectively, individually, or socially novel,
2) a response model for observed creativity by generating surprising outputs
that are incrementally, disruptively, or radically innovative, and 3) a reward
model for improving creativity over time by incorporating feedback from the AI,
the creator/manager, and/or the customers. This framework enables the
application of GenAI for various levels of creativity strategically.
| [
{
"version": "v1",
"created": "Sat, 11 May 2024 05:05:10 GMT"
}
] | 1,715,644,800,000 | [
[
"Huang",
"Ming-Hui",
""
],
[
"Rust",
"Roland T.",
""
]
] |
2405.07664 | Rui Zhu | Rui Zhu | Geospatial Knowledge Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Geospatial knowledge graphs have emerged as a novel paradigm for representing
and reasoning over geospatial information. In this framework, entities such as
places, people, events, and observations are depicted as nodes, while their
relationships are represented as edges. This graph-based data format lays the
foundation for creating a "FAIR" (Findable, Accessible, Interoperable, and
Reusable) environment, facilitating the management and analysis of geographic
information. This entry first introduces key concepts in knowledge graphs along
with their associated standardization and tools. It then delves into the
application of knowledge graphs in geography and environmental sciences,
emphasizing their role in bridging symbolic and subsymbolic GeoAI to address
cross-disciplinary geospatial challenges. At the end, new research directions
related to geospatial knowledge graphs are outlined.
| [
{
"version": "v1",
"created": "Mon, 13 May 2024 11:45:22 GMT"
}
] | 1,715,644,800,000 | [
[
"Zhu",
"Rui",
""
]
] |
2405.07893 | Daryl Mupupuni | Daryl Mupupuni, Anupama Guntu, Liang Hong, Kamrul Hasan, Leehyun Keel | Science based AI model certification for new operational environments
with application in traffic state estimation | 7 pages, 5 figures, \copyright 2024 IEEE INTERNATIONAL CONFERENCE on
ELECTRO/INFORMATION TECHNOLOGY | null | null | EIT2024-082 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The expanding role of Artificial Intelligence (AI) in diverse engineering
domains highlights the challenges associated with deploying AI models in new
operational environments, involving substantial investments in data collection
and model training. Rapid application of AI necessitates evaluating the
feasibility of utilizing pre-trained models in unobserved operational settings
with minimal or no additional data. However, interpreting the opaque nature of
AI's black-box models remains a persistent challenge. Addressing this issue,
this paper proposes a science-based certification methodology to assess the
viability of employing pre-trained data-driven models in new operational
environments. The methodology advocates a profound integration of domain
knowledge, leveraging theoretical and analytical models from physics and
related disciplines, with data-driven AI models. This novel approach introduces
tools to facilitate the development of secure engineering systems, providing
decision-makers with confidence in the trustworthiness and safety of AI-based
models across diverse environments characterized by limited training data and
dynamic, uncertain conditions. The paper demonstrates the efficacy of this
methodology in real-world safety-critical scenarios, particularly in the
context of traffic state estimation. Through simulation results, the study
illustrates how the proposed methodology efficiently quantifies physical
inconsistencies exhibited by pre-trained AI models. By utilizing analytical
models, the methodology offers a means to gauge the applicability of
pre-trained AI models in new operational environments. This research
contributes to advancing the understanding and deployment of AI models,
offering a robust certification framework that enhances confidence in their
reliability and safety across a spectrum of operational conditions.
| [
{
"version": "v1",
"created": "Mon, 13 May 2024 16:28:00 GMT"
}
] | 1,715,644,800,000 | [
[
"Mupupuni",
"Daryl",
""
],
[
"Guntu",
"Anupama",
""
],
[
"Hong",
"Liang",
""
],
[
"Hasan",
"Kamrul",
""
],
[
"Keel",
"Leehyun",
""
]
] |
2405.08131 | Jinfeng Zhong | Jinfeng Zhong, Elsa Negre | When factorization meets argumentation: towards argumentative
explanations | arXiv admin note: substantial text overlap with arXiv:2310.16157 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Factorization-based models have gained popularity since the Netflix challenge
(2007). Since then, various factorization-based models have been developed
and these models have been proven to be efficient in predicting users' ratings
towards items. A major concern is that explaining the recommendations generated
by such methods is non-trivial because the explicit meaning of the latent
factors they learn are not always clear. In response, we propose a novel model
that combines factorization-based methods with argumentation frameworks (AFs).
The integration of AFs provides clear meaning at each stage of the model,
enabling it to produce easily understandable explanations for its
recommendations. In this model, for every user-item interaction, an AF is
defined in which the features of items are considered as arguments, and the
users' ratings towards these features determine the strength and polarity of
these arguments. This perspective allows our model to treat feature attribution
as a structured argumentation procedure, where each calculation is marked with
explicit meaning, enhancing its inherent interpretability. Additionally, our
framework seamlessly incorporates side information, such as user contexts,
leading to more accurate predictions. We anticipate at least three practical
applications for our model: creating explanation templates, providing
interactive explanations, and generating contrastive explanations. Through
testing on real-world datasets, we have found that our model, along with its
variants, not only surpasses existing argumentation-based methods but also
competes effectively with current context-free and context-aware methods.
| [
{
"version": "v1",
"created": "Mon, 13 May 2024 19:16:28 GMT"
}
] | 1,715,731,200,000 | [
[
"Zhong",
"Jinfeng",
""
],
[
"Negre",
"Elsa",
""
]
] |
2405.09190 | Marios Tyrovolas | Marios Tyrovolas, Nikolaos D. Kallimanis and Chrysostomos Stylios | Advancing Explainable AI with Causal Analysis in Large-Scale Fuzzy
Cognitive Maps | 6 pages, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the quest for accurate and interpretable AI models, eXplainable AI (XAI)
has become crucial. Fuzzy Cognitive Maps (FCMs) stand out as an advanced XAI
method because of their ability to synergistically combine and exploit both
expert knowledge and data-driven insights, providing transparency and intrinsic
interpretability. This letter introduces and investigates the "Total Causal
Effect Calculation for FCMs" (TCEC-FCM) algorithm, an innovative approach that,
for the first time, enables the efficient calculation of total causal effects
among concepts in large-scale FCMs by leveraging binary search and graph
traversal techniques, thereby overcoming the challenge of exhaustive causal
path exploration that hinders existing methods. We evaluate the proposed method
across various synthetic FCMs; the results demonstrate TCEC-FCM's superior performance
over exhaustive methods, marking a significant advancement in causal effect
analysis within FCMs, thus broadening their usability for modern complex XAI
applications.
| [
{
"version": "v1",
"created": "Wed, 15 May 2024 08:53:47 GMT"
}
] | 1,715,817,600,000 | [
[
"Tyrovolas",
"Marios",
""
],
[
"Kallimanis",
"Nikolaos D.",
""
],
[
"Stylios",
"Chrysostomos",
""
]
] |
2405.09292 | Hou-Biao Li | Xuchang Guo and Houbiao Li | Attribute reduction algorithm of rough sets based on spatial
optimization | 7 pages, 2 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rough set is one of the important methods for rule acquisition and attribute
reduction. The current goal of rough set attribute reduction focuses more on
minimizing the number of reduced attributes, but ignores the spatial similarity
between reduced and decision attributes, which may lead to problems such as an
increased number of rules and limited generality. In this paper, a rough set
attribute reduction algorithm based on spatial optimization is proposed. By
introducing the concept of spatial similarity, the algorithm seeks the
reduction with the highest spatial similarity to the decision attributes, so
that more concise and more general rules can be obtained. In addition, a
comparative experiment with the traditional rough set
attribute reduction algorithms is designed to prove the effectiveness of the
rough set attribute reduction algorithm based on spatial optimization, which
has made significant improvements on many datasets.
| [
{
"version": "v1",
"created": "Wed, 15 May 2024 12:30:19 GMT"
}
] | 1,715,817,600,000 | [
[
"Guo",
"Xuchang",
""
],
[
"Li",
"Houbiao",
""
]
] |
2405.09415 | Anna Rapberger | Anna Rapberger, Markus Ulbricht, Francesca Toni | On the Correspondence of Non-flat Assumption-based Argumentation and
Logic Programming with Negation as Failure in the Head | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The relation between (a fragment of) assumption-based argumentation (ABA) and
logic programs (LPs) under stable model semantics is well-studied. However, for
obtaining this relation, the ABA framework needs to be restricted to being
flat, i.e., a fragment where the (defeasible) assumptions can never be
entailed, only assumed to be true or false. Here, we remove this restriction
and show a correspondence between non-flat ABA and LPs with negation as failure
in their head. We then extend this result to so-called set-stable ABA
semantics, originally defined for the fragment of non-flat ABA called bipolar
ABA. We showcase how to define set-stable semantics for LPs with negation as
failure in their head and show the correspondence to set-stable ABA semantics.
| [
{
"version": "v1",
"created": "Wed, 15 May 2024 15:10:03 GMT"
},
{
"version": "v2",
"created": "Fri, 24 May 2024 15:25:22 GMT"
}
] | 1,716,768,000,000 | [
[
"Rapberger",
"Anna",
""
],
[
"Ulbricht",
"Markus",
""
],
[
"Toni",
"Francesca",
""
]
] |
2405.09521 | Tilman Hinnerichs | Tilman Hinnerichs, Robin Manhaeve, Giuseppe Marra, Sebastijan Dumancic | Towards a fully declarative neuro-symbolic language | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neuro-symbolic systems (NeSy), which claim to combine the best of both
learning and reasoning capabilities of artificial intelligence, are missing a
core property of reasoning systems: Declarativeness. The lack of
declarativeness is caused by the functional nature of neural predicates
inherited from neural networks. We propose and implement a general framework
for fully declarative neural predicates, which hence extends to fully
declarative NeSy frameworks. We first show that the declarative extension
preserves the learning and reasoning capabilities and is able to answer
arbitrary queries after being trained on only a single query type.
| [
{
"version": "v1",
"created": "Wed, 15 May 2024 17:24:34 GMT"
}
] | 1,715,817,600,000 | [
[
"Hinnerichs",
"Tilman",
""
],
[
"Manhaeve",
"Robin",
""
],
[
"Marra",
"Giuseppe",
""
],
[
"Dumancic",
"Sebastijan",
""
]
] |
2405.10729 | Francesco Leofante | Francesco Leofante and Hamed Ayoobi and Adam Dejl and Gabriel Freedman
and Deniz Gorur and Junqi Jiang and Guilherme Paulino-Passos and Antonio Rago
and Anna Rapberger and Fabrizio Russo and Xiang Yin and Dekai Zhang and
Francesca Toni | Contestable AI needs Computational Argumentation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI has become pervasive in recent years, but state-of-the-art approaches
predominantly neglect the need for AI systems to be contestable. Instead,
contestability is advocated by AI guidelines (e.g. by the OECD) and regulation
of automated decision-making (e.g. GDPR). In this position paper we explore how
contestability can be achieved computationally in and for AI. We argue that
contestable AI requires dynamic (human-machine and/or machine-machine)
explainability and decision-making processes, whereby machines can (i) interact
with humans and/or other machines to progressively explain their outputs and/or
their reasoning as well as assess grounds for contestation provided by these
humans and/or other machines, and (ii) revise their decision-making processes
to redress any issues successfully raised during contestation. Given that much
of the current AI landscape is tailored to static AIs, the need to accommodate
contestability will require a radical rethinking, that, we argue, computational
argumentation is ideally suited to support.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 12:23:18 GMT"
}
] | 1,716,163,200,000 | [
[
"Leofante",
"Francesco",
""
],
[
"Ayoobi",
"Hamed",
""
],
[
"Dejl",
"Adam",
""
],
[
"Freedman",
"Gabriel",
""
],
[
"Gorur",
"Deniz",
""
],
[
"Jiang",
"Junqi",
""
],
[
"Paulino-Passos",
"Guilherme",
""
],
[
"Rago",
"Antonio",
""
],
[
"Rapberger",
"Anna",
""
],
[
"Russo",
"Fabrizio",
""
],
[
"Yin",
"Xiang",
""
],
[
"Zhang",
"Dekai",
""
],
[
"Toni",
"Francesca",
""
]
] |
2405.10768 | Alyzia Maria Konsta | Alyzia-Maria Konsta, Alberto Lluch Lafuente, Christoph Matheja | What should be observed for optimal reward in POMDPs? | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Partially observable Markov Decision Processes (POMDPs) are a standard model
for agents making decisions in uncertain environments. Most work on POMDPs
focuses on synthesizing strategies based on the available capabilities.
However, system designers can often control an agent's observation
capabilities, e.g. by placing or selecting sensors. This raises the question of
how one should select an agent's sensors cost-effectively such that it achieves
the desired goals. In this paper, we study the novel optimal observability
problem OOP: Given a POMDP M, how should one change M's observation
capabilities within a fixed budget such that its (minimal) expected reward
remains below a given threshold? We show that the problem is undecidable in
general and decidable when considering positional strategies only. We present
two algorithms for a decidable fragment of the OOP: one based on optimal
strategies of M's underlying Markov decision process and one based on parameter
synthesis with SMT. We report promising results for variants of typical
examples from the POMDP literature.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 13:27:57 GMT"
}
] | 1,716,163,200,000 | [
[
"Konsta",
"Alyzia-Maria",
""
],
[
"Lafuente",
"Alberto Lluch",
""
],
[
"Matheja",
"Christoph",
""
]
] |
2405.10883 | Hongyi Yang | Hongyi Yang, Fangyuan Chang, Dian Zhu, Muroi Fumie, Zhao Liu | Application of Artificial Intelligence in Schizophrenia Rehabilitation
Management: Systematic Literature Review | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This review aims to systematically assess the current status and prospects of
artificial intelligence (AI) in the rehabilitation management of patients with
schizophrenia and their impact on the rehabilitation process. We selected 70
studies from 2012 to the present, focusing on application, technology
categories, products, and data types of machine learning, deep learning,
reinforcement learning, and other technologies in mental health interventions
and management. The results indicate that AI can be widely used in symptom
monitoring, relapse risk prediction, and rehabilitation treatment by analyzing
ecological momentary assessment, behavioral, and speech data. This review
further explores the potential challenges and future directions of emerging
products, technologies, and analytical methods based on AI, such as social
media analysis, serious games, and large language models in rehabilitation. In
summary, this study systematically reviews the application status of AI in
schizophrenia rehabilitation management and provides valuable insights and
recommendations for future research paths.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 16:20:34 GMT"
}
] | 1,716,163,200,000 | [
[
"Yang",
"Hongyi",
""
],
[
"Chang",
"Fangyuan",
""
],
[
"Zhu",
"Dian",
""
],
[
"Fumie",
"Muroi",
""
],
[
"Liu",
"Zhao",
""
]
] |
2405.11250 | Fabrizio Russo | Fabrizio Russo, Anna Rapberger, Francesca Toni | Argumentative Causal Discovery | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Causal discovery amounts to unearthing causal relationships amongst features
in data. It is a crucial companion to causal inference, necessary to build
scientific knowledge without resorting to expensive or impossible randomised
control trials. In this paper, we explore how reasoning with symbolic
representations can support causal discovery. Specifically, we deploy
assumption-based argumentation (ABA), a well-established and powerful knowledge
representation formalism, in combination with causality theories, to learn
graphs which reflect causal dependencies in the data. We prove that our method
exhibits desirable properties, notably that, under natural conditions, it can
retrieve ground-truth causal graphs. We also conduct experiments with an
implementation of our method in answer set programming (ASP) on four datasets
from standard benchmarks in causal discovery, showing that our method compares
well against established baselines.
| [
{
"version": "v1",
"created": "Sat, 18 May 2024 10:34:34 GMT"
},
{
"version": "v2",
"created": "Sun, 26 May 2024 00:00:55 GMT"
}
] | 1,716,854,400,000 | [
[
"Russo",
"Fabrizio",
""
],
[
"Rapberger",
"Anna",
""
],
[
"Toni",
"Francesca",
""
]
] |
2405.11305 | Mutsunori Banbara | Irumi Sugimori, Katsumi Inoue, Hidetomo Nabeshima, Torsten Schaub,
Takehide Soh, Naoyuki Tamura, Mutsunori Banbara | Large Neighborhood Prioritized Search for Combinatorial Optimization
with Answer Set Programming | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Large Neighborhood Prioritized Search (LNPS) for solving
combinatorial optimization problems in Answer Set Programming (ASP). LNPS is a
metaheuristic that starts with an initial solution and then iteratively tries
to find better solutions by alternately destroying and prioritized searching
for a current solution. Due to the variability of neighborhoods, LNPS allows
for flexible search without strongly depending on the destroy operators. We
present an implementation of LNPS based on ASP. The resulting heulingo solver
demonstrates that LNPS can significantly enhance the solving performance of ASP
for optimization. Furthermore, we establish the competitiveness of our LNPS
approach by empirically contrasting it to (adaptive) large neighborhood search.
| [
{
"version": "v1",
"created": "Sat, 18 May 2024 14:37:43 GMT"
}
] | 1,716,249,600,000 | [
[
"Sugimori",
"Irumi",
""
],
[
"Inoue",
"Katsumi",
""
],
[
"Nabeshima",
"Hidetomo",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Soh",
"Takehide",
""
],
[
"Tamura",
"Naoyuki",
""
],
[
"Banbara",
"Mutsunori",
""
]
] |
2405.11346 | Ritesh Chandra | Ritesh Chandra, Shashi Shekhar Kumar, Rushil Patra, and Sonali Agarwal | Decision support system for Forest fire management using Ontology with
Big Data and LLMs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Forests are crucial for ecological balance, but wildfires, a major cause of
forest loss, pose significant risks. Fire weather indices, which assess
wildfire risk and predict resource demands, are vital. With the rise of sensor
networks in fields like healthcare and environmental monitoring, semantic
sensor networks are increasingly used to gather climatic data such as wind
speed, temperature, and humidity. However, processing these data streams to
determine fire weather indices presents challenges, underscoring the growing
importance of effective forest fire detection. This paper discusses using
Apache Spark for early forest fire detection, enhancing fire risk prediction
with meteorological and geographical data. Building on our previous development
of Semantic Sensor Network (SSN) ontologies and Semantic Web Rules Language
(SWRL) for managing forest fires in Monesterial Natural Park, we expanded SWRL
to improve a Decision Support System (DSS) using a Large Language Models (LLMs)
and Spark framework. We implemented real-time alerts with Spark streaming,
tailored to various fire scenarios, and validated our approach using ontology
metrics, query-based evaluations, LLMs score precision, F1 score, and recall
measures.
| [
{
"version": "v1",
"created": "Sat, 18 May 2024 17:30:30 GMT"
}
] | 1,716,249,600,000 | [
[
"Chandra",
"Ritesh",
""
],
[
"Kumar",
"Shashi Shekhar",
""
],
[
"Patra",
"Rushil",
""
],
[
"Agarwal",
"Sonali",
""
]
] |
2405.11841 | Lifeng Fan | Junqi Wang, Chunhui Zhang, Jiapeng Li, Yuxi Ma, Lixing Niu, Jiaheng
Han, Yujia Peng, Yixin Zhu, Lifeng Fan | Evaluating and Modeling Social Intelligence: A Comparative Study of
Human and AI Capabilities | Also published in Proceedings of the Annual Meeting of the Cognitive
Science Society (CogSci), 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Facing the current debate on whether Large Language Models (LLMs) attain
near-human intelligence levels (Mitchell & Krakauer, 2023; Bubeck et al., 2023;
Kosinski, 2023; Shiffrin & Mitchell, 2023; Ullman, 2023), the current study
introduces a benchmark for evaluating social intelligence, one of the most
distinctive aspects of human cognition. We developed a comprehensive
theoretical framework for social dynamics and introduced two evaluation tasks:
Inverse Reasoning (IR) and Inverse Inverse Planning (IIP). Our approach also
encompassed a computational model based on recursive Bayesian inference, adept
at elucidating diverse human behavioral patterns. Extensive experiments and
detailed analyses revealed that humans surpassed the latest GPT models in
overall performance, zero-shot learning, one-shot generalization, and
adaptability to multi-modalities. Notably, GPT models demonstrated social
intelligence only at the most basic order (order = 0), in stark contrast to
human social intelligence (order >= 2). Further examination indicated a
propensity of LLMs to rely on pattern recognition for shortcuts, casting doubt
on their possession of authentic human-level social intelligence. Our codes,
dataset, appendix and human data are released at
https://github.com/bigai-ai/Evaluate-n-Model-Social-Intelligence.
| [
{
"version": "v1",
"created": "Mon, 20 May 2024 07:34:48 GMT"
}
] | 1,716,249,600,000 | [
[
"Wang",
"Junqi",
""
],
[
"Zhang",
"Chunhui",
""
],
[
"Li",
"Jiapeng",
""
],
[
"Ma",
"Yuxi",
""
],
[
"Niu",
"Lixing",
""
],
[
"Han",
"Jiaheng",
""
],
[
"Peng",
"Yujia",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Fan",
"Lifeng",
""
]
] |
2405.12433 | Sudhir Agarwal | Sudhir Agarwal and Anu Sreepathy and David H. Alonso and Prarit Lamba | LLM+Reasoning+Planning for supporting incomplete user queries in
presence of APIs | 9 pages main content, 2 pages references, 12 pages appendix, 5
figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent availability of Large Language Models (LLMs) has led to the
development of numerous LLM-based approaches aimed at providing natural
language interfaces for various end-user tasks. These end-user tasks in turn
can typically be accomplished by orchestrating a given set of APIs. In
practice, natural language task requests (user queries) are often incomplete,
i.e., they may not contain all the information required by the APIs. While LLMs
excel at natural language processing (NLP) tasks, they frequently hallucinate
on missing information or struggle with orchestrating the APIs. The key idea
behind our proposed approach is to leverage logical reasoning and classical AI
planning along with an LLM for accurately answering user queries including
identification and gathering of any missing information in these queries. Our
approach uses an LLM and ASP (Answer Set Programming) solver to translate a
user query to a representation in Planning Domain Definition Language (PDDL)
via an intermediate representation in ASP. We introduce a special API
"get_info_api" for gathering missing information. We model all the APIs as PDDL
actions in a way that supports dataflow between the APIs. Our approach then
uses a classical AI planner to generate an orchestration of API calls
(including calls to get_info_api) to answer the user query. Our evaluation
results show that our approach significantly outperforms a pure LLM based
approach by achieving over 95\% success rate in most cases on a dataset
containing complete and incomplete single goal and multi-goal queries where the
multi-goal queries may or may not require dataflow among the APIs.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 01:16:34 GMT"
}
] | 1,716,336,000,000 | [
[
"Agarwal",
"Sudhir",
""
],
[
"Sreepathy",
"Anu",
""
],
[
"Alonso",
"David H.",
""
],
[
"Lamba",
"Prarit",
""
]
] |
2405.12541 | Bufang Yang | Bufang Yang, Siyang Jiang, Lilin Xu, Kaiwei Liu, Hai Li, Guoliang
Xing, Hongkai Chen, Xiaofan Jiang, Zhenyu Yan | DrHouse: An LLM-empowered Diagnostic Reasoning System through Harnessing
Outcomes from Sensor Data and Expert Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have the potential to transform digital
healthcare, as evidenced by recent advances in LLM-based virtual doctors.
However, current approaches rely on patient's subjective descriptions of
symptoms, causing increased misdiagnosis. Recognizing the value of daily data
from smart devices, we introduce a novel LLM-based multi-turn consultation
virtual doctor system, DrHouse, which incorporates three significant
contributions: 1) It utilizes sensor data from smart devices in the diagnosis
process, enhancing accuracy and reliability. 2) DrHouse leverages continuously
updating medical databases such as Up-to-Date and PubMed to ensure our model
remains at the forefront of diagnostic standards. 3) DrHouse introduces a novel
diagnostic algorithm that concurrently evaluates potential diseases and their
likelihood, facilitating more nuanced and informed medical assessments. Through
multi-turn interactions, DrHouse determines the next steps, such as accessing
daily data from smart devices or requesting in-lab tests, and progressively
refines its diagnoses. Evaluations on three public datasets and our
self-collected datasets show that DrHouse can achieve up to an 18.8% increase
in diagnosis accuracy over the state-of-the-art baselines. The results of a
32-participant user study show that 75% of medical experts and 91.7% of patients are
willing to use DrHouse.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 07:16:12 GMT"
}
] | 1,716,336,000,000 | [
[
"Yang",
"Bufang",
""
],
[
"Jiang",
"Siyang",
""
],
[
"Xu",
"Lilin",
""
],
[
"Liu",
"Kaiwei",
""
],
[
"Li",
"Hai",
""
],
[
"Xing",
"Guoliang",
""
],
[
"Chen",
"Hongkai",
""
],
[
"Jiang",
"Xiaofan",
""
],
[
"Yan",
"Zhenyu",
""
]
] |
2405.12621 | Matteo Bortoletto | Matteo Bortoletto, Constantin Ruhdorfer, Adnen Abdessaied, Lei Shi,
Andreas Bulling | Limits of Theory of Mind Modelling in Dialogue-Based Collaborative Plan
Acquisition | ACL 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent work on dialogue-based collaborative plan acquisition (CPA) has
suggested that Theory of Mind (ToM) modelling can improve missing knowledge
prediction in settings with asymmetric skill-sets and knowledge. Although ToM
was claimed to be important for effective collaboration, its real impact on
this novel task remains under-explored. By representing plans as graphs and by
exploiting task-specific constraints we show that, as performance on CPA nearly
doubles when predicting one's own missing knowledge, the improvements due to
ToM modelling diminish. This phenomenon persists even when evaluating existing
baseline methods. To better understand the relevance of ToM for CPA, we report
a principled performance comparison of models with and without ToM features.
Results across different models and ablations consistently suggest that learned
ToM features are indeed more likely to reflect latent patterns in the data with
no perceivable link to ToM. This finding calls for a deeper understanding of
the role of ToM in CPA and beyond, as well as new methods for modelling and
evaluating mental states in computational collaborative agents.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 09:23:39 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 18:33:23 GMT"
}
] | 1,717,027,200,000 | [
[
"Bortoletto",
"Matteo",
""
],
[
"Ruhdorfer",
"Constantin",
""
],
[
"Abdessaied",
"Adnen",
""
],
[
"Shi",
"Lei",
""
],
[
"Bulling",
"Andreas",
""
]
] |
2405.12785 | Jakub Jakubowski | Jakub Jakubowski, Natalia Wojak-Strzelecka, Rita P. Ribeiro, Sepideh
Pashami, Szymon Bobek, Joao Gama, Grzegorz J Nalepa | Artificial Intelligence Approaches for Predictive Maintenance in the
Steel Industry: A Survey | Preprint submitted to Engineering Applications of Artificial
Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictive Maintenance (PdM) emerged as one of the pillars of Industry 4.0,
and became crucial for enhancing operational efficiency, making it possible to
minimize downtime, extend equipment lifespan, and prevent failures. A wide range of
PdM tasks can be performed using Artificial Intelligence (AI) methods, which
often use data generated from industrial sensors. The steel industry, which is
an important branch of the global economy, is one of the potential
beneficiaries of this trend, given its large environmental footprint, the
globalized nature of the market, and the demanding working conditions. This
survey synthesizes the current state of knowledge in the field of AI-based PdM
within the steel industry and is addressed to researchers and practitioners. We
identified 219 articles related to this topic and formulated five research
questions, allowing us to gain a global perspective on current trends and the
main research gaps. We examined equipment and facilities subjected to PdM,
determined common PdM approaches, and identified trends in the AI methods used
to develop these solutions. We explored the characteristics of the data used in
the surveyed articles and assessed the practical implications of the research
presented there. Most of the research focuses on the blast furnace or hot
rolling, using data from industrial sensors. Current trends show increasing
interest in the domain, especially in the use of deep learning. The main
challenges include implementing the proposed methods in a production
environment, incorporating them into maintenance plans, and enhancing the
accessibility and reproducibility of the research.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 13:32:46 GMT"
}
] | 1,716,336,000,000 | [
[
"Jakubowski",
"Jakub",
""
],
[
"Wojak-Strzelecka",
"Natalia",
""
],
[
"Ribeiro",
"Rita P.",
""
],
[
"Pashami",
"Sepideh",
""
],
[
"Bobek",
"Szymon",
""
],
[
"Gama",
"Joao",
""
],
[
"Nalepa",
"Grzegorz J",
""
]
] |
2405.12862 | Robert Wray | Steven J. Jones and Robert E. Wray | Toward Constraint Compliant Goal Formulation and Planning | 16 pages. 5 figures, 2 tables. Submitted to Advances in Cognitive
Systems | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One part of complying with norms, rules, and preferences is incorporating
constraints (such as knowledge of ethics) into one's goal formulation and
planning processing. We explore in a simple domain how the encoding of
knowledge in different ethical frameworks influences an agent's goal
formulation and planning processing and demonstrate the ability of an agent to
satisfy and satisfice when its collection of relevant constraints includes a
mix of "hard" and "soft" constraints of various types. How the agent attempts
to comply with ethical constraints depends on the ethical framing and we
investigate tradeoffs between deontological framing and utilitarian framing for
complying with an ethical norm. Representative scenarios highlight how
performing the same task with different framings of the same norm leads to
different behaviors. Our explorations suggest an important role for
metacognitive judgments in resolving ethical conflicts during goal formulation
and planning.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 15:26:06 GMT"
}
] | 1,716,336,000,000 | [
[
"Jones",
"Steven J.",
""
],
[
"Wray",
"Robert E.",
""
]
] |
2405.13231 | Sam McGrath | Sam Whitman McGrath and Jacob Russin | Multiple Realizability and the Rise of Deep Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multiple realizability thesis holds that psychological states may be
implemented in a diversity of physical systems. The deep learning revolution
seems to be bringing this possibility to life, offering the most plausible
examples of man-made realizations of sophisticated cognitive functions to date.
This paper explores the implications of deep learning models for the multiple
realizability thesis. Among other things, it challenges the widely held view
that multiple realizability entails that the study of the mind can and must be
pursued independently of the study of its implementation in the brain or in
artificial analogues. Although its central contribution is philosophical, the
paper has substantial methodological upshots for contemporary cognitive
science, suggesting that deep neural networks may play a crucial role in
formulating and evaluating hypotheses about cognition, even if they are
interpreted as implementation-level models. In the age of deep learning,
multiple realizability possesses a renewed significance.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 22:36:49 GMT"
}
] | 1,716,508,800,000 | [
[
"McGrath",
"Sam Whitman",
""
],
[
"Russin",
"Jacob",
""
]
] |
2405.13242 | Guy Davidson | Guy Davidson, Graham Todd, Julian Togelius, Todd M. Gureckis, Brenden
M. Lake | Goals as Reward-Producing Programs | Project website and goal program viewer:
https://exps.gureckislab.org/guydav/goal_programs_viewer/main/ | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | People are remarkably capable of generating their own goals, beginning with
child's play and continuing into adulthood. Despite considerable empirical and
computational work on goals and goal-oriented behavior, models are still far
from capturing the richness of everyday human goals. Here, we bridge this gap
by collecting a dataset of human-generated playful goals, modeling them as
reward-producing programs, and generating novel human-like goals through
program synthesis. Reward-producing programs capture the rich semantics of
goals through symbolic operations that compose, add temporal constraints, and
allow for program execution on behavioral traces to evaluate progress. To build
a generative model of goals, we learn a fitness function over the infinite set
of possible goal programs and sample novel goals with a quality-diversity
algorithm. Human evaluators found that model-generated goals, when sampled from
partitions of program space occupied by human examples, were indistinguishable
from human-created games. We also discovered that our model's internal fitness
scores predict games that are evaluated as more fun to play and more
human-like.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 23:09:12 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 14:46:04 GMT"
}
] | 1,717,113,600,000 | [
[
"Davidson",
"Guy",
""
],
[
"Todd",
"Graham",
""
],
[
"Togelius",
"Julian",
""
],
[
"Gureckis",
"Todd M.",
""
],
[
"Lake",
"Brenden M.",
""
]
] |
2405.13352 | Xiaoxin Yin | Xiaoxin Yin | "Turing Tests" For An AI Scientist | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While LLMs have shown impressive capabilities in solving math or coding
problems, the ability to make scientific discoveries remains a distinct
challenge. This paper proposes a "Turing test for an AI scientist" to assess
whether an AI agent can conduct scientific research independently, without
relying on human-generated knowledge. Drawing inspiration from the historical
development of science, we propose seven benchmark tests that evaluate an AI
agent's ability to make groundbreaking discoveries in various scientific
domains. These tests include inferring the heliocentric model from celestial
observations, discovering the laws of motion in a simulated environment,
deriving the differential equation governing vibrating strings, inferring
Maxwell's equations from electrodynamics simulations, inventing numerical
methods for initial value problems, discovering Huffman coding for data
compression, and developing efficient sorting algorithms. To ensure the
validity of these tests, the AI agent is provided with interactive libraries or
datasets specific to each problem, without access to human knowledge that could
potentially contain information about the target discoveries. The ultimate goal
is to create an AI scientist capable of making novel and impactful scientific
discoveries, surpassing the best human experts in their respective fields.
These "Turing tests" serve as intermediate milestones, assessing the AI agent's
ability to make discoveries that were groundbreaking in their time. If an AI
agent can pass the majority of these seven tests, it would indicate significant
progress towards building an AI scientist, paving the way for future
advancements in autonomous scientific discovery. This paper aims to establish a
benchmark for the capabilities of AI in scientific research and to stimulate
further research in this exciting field.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 05:14:27 GMT"
}
] | 1,716,508,800,000 | [
[
"Yin",
"Xiaoxin",
""
]
] |
2405.13356 | Mostafa Abdelhadi | Nurullah Sevim, Mostafa Ibrahim, and Sabit Ekin | Large Language Models (LLMs) Assisted Wireless Network Deployment in
Urban Settings | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The advent of Large Language Models (LLMs) has revolutionized language
understanding and human-like text generation, drawing interest from many other
fields with this question in mind: What else are the LLMs capable of? Despite
their widespread adoption, ongoing research continues to explore new ways to
integrate LLMs into diverse systems.
This paper explores new techniques to harness the power of LLMs for 6G (6th
Generation) wireless communication technologies, a domain where automation and
intelligent systems are pivotal. The inherent adaptability of LLMs to
domain-specific tasks positions them as prime candidates for enhancing wireless
systems in the 6G landscape.
We introduce a novel Reinforcement Learning (RL) based framework that
leverages LLMs for network deployment in wireless communications. Our approach
involves training an RL agent, utilizing LLMs as its core, in an urban setting
to maximize coverage. The agent's objective is to navigate the complexities of
urban environments and identify the network parameters for optimal area
coverage. Additionally, we integrate LLMs with Convolutional Neural Networks
(CNNs) to capitalize on their strengths while mitigating their limitations. The
Deep Deterministic Policy Gradient (DDPG) algorithm is employed for training
purposes. The results suggest that LLM-assisted models can outperform CNN-based
models in some cases while performing at least as well in others.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 05:19:51 GMT"
}
] | 1,716,508,800,000 | [
[
"Sevim",
"Nurullah",
""
],
[
"Ibrahim",
"Mostafa",
""
],
[
"Ekin",
"Sabit",
""
]
] |
2405.14001 | Sander Beckers | Sander Beckers | Nondeterministic Causal Models | Preliminary version: currently under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | I generalize acyclic deterministic structural equation models to the
nondeterministic case and argue that it offers an improved semantics for
counterfactuals. The standard, deterministic, semantics developed by Halpern
(and based on the initial proposal of Galles & Pearl) assumes that for each
assignment of values to parent variables there is a unique assignment to their
child variable, and it assumes that the actual world (an assignment of values
to all variables of a model) specifies a unique counterfactual world for each
intervention. Both assumptions are unrealistic, and therefore I drop both of
them in my proposal. I do so by allowing multi-valued functions in the
structural equations. In addition, I adjust the semantics so that the solutions
to the equations that obtained in the actual world are preserved in any
counterfactual world. I motivate the resulting logic by comparing it to the
standard one by Halpern and to more recent proposals that are closer to mine.
Finally, I extend these models to the probabilistic case and show that they
open up the way to identifying counterfactuals even in Causal Bayesian
Networks.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 21:17:52 GMT"
}
] | 1,716,508,800,000 | [
[
"Beckers",
"Sander",
""
]
] |
2405.14265 | Jerome Arjonilla | Brahim Driss, J\'er\^ome Arjonilla, Hui Wang, Abdallah Saffidine,
Tristan Cazenave | Deep Reinforcement Learning for 5*5 Multiplayer Go | Accepted in EvoApps at Evostar2023 | International Conference on the Applications of Evolutionary
Computation (Part of EvoStar), 2023, 753--764 | 10.1007/978-3-031-30229-9_48 | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In recent years, much progress has been made in computer Go and most of the
results have been obtained thanks to search algorithms (Monte Carlo Tree
Search) and Deep Reinforcement Learning (DRL). In this paper, we propose to use
and analyze the latest algorithms that use search and DRL (AlphaZero and
Descent algorithms) to automatically learn to play an extended version of the
game of Go with more than two players. We show that using search and DRL we
were able to improve the level of play, even though there are more than two
players.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 07:44:24 GMT"
}
] | 1,716,508,800,000 | [
[
"Driss",
"Brahim",
""
],
[
"Arjonilla",
"Jérôme",
""
],
[
"Wang",
"Hui",
""
],
[
"Saffidine",
"Abdallah",
""
],
[
"Cazenave",
"Tristan",
""
]
] |
2405.14333 | Huajian Xin | Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu,
Chong Ruan, Wenda Li, Xiaodan Liang | DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale
Synthetic Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Proof assistants like Lean have revolutionized mathematical proof
verification, ensuring high accuracy and reliability. Although large language
models (LLMs) show promise in mathematical reasoning, their advancement in
formal theorem proving is hindered by a lack of training data. To address this
issue, we introduce an approach to generate extensive Lean 4 proof data derived
from high-school and undergraduate-level mathematical competition problems.
This approach involves translating natural language problems into formal
statements, filtering out low-quality statements, and generating proofs to
create synthetic data. After fine-tuning the DeepSeekMath 7B model on this
synthetic dataset, which comprises 8 million formal statements with proofs, our
model achieved whole-proof generation accuracies of 46.3% with 64 samples and
52% cumulatively on the Lean 4 miniF2F test, surpassing the baseline GPT-4 at
23.0% with 64 samples and a tree search reinforcement learning method at 41.0%.
Additionally, our model successfully proved 5 out of 148 problems in the Lean 4
Formalized International Mathematical Olympiad (FIMO) benchmark, while GPT-4
failed to prove any. These results demonstrate the potential of leveraging
large-scale synthetic data to enhance theorem-proving capabilities in LLMs.
Both the synthetic dataset and the model will be made available to facilitate
further research in this promising field.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 09:03:42 GMT"
}
] | 1,716,508,800,000 | [
[
"Xin",
"Huajian",
""
],
[
"Guo",
"Daya",
""
],
[
"Shao",
"Zhihong",
""
],
[
"Ren",
"Zhizhou",
""
],
[
"Zhu",
"Qihao",
""
],
[
"Liu",
"Bo",
""
],
[
"Ruan",
"Chong",
""
],
[
"Li",
"Wenda",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
2405.14389 | Gaia Saveri | Gaia Saveri, Laura Nenzi, Luca Bortolussi, Jan K\v{r}et\'insk\'y | stl2vec: Semantic and Interpretable Vector Representation of Temporal
Logic | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Integrating symbolic knowledge and data-driven learning algorithms is a
longstanding challenge in Artificial Intelligence. Despite the recognized
importance of this task, a notable gap exists due to the discreteness of
symbolic representations and the continuous nature of machine-learning
computations. One of the desired bridges between these two worlds would be to
define semantically grounded vector representation (feature embedding) of logic
formulae, thus enabling to perform continuous learning and optimization in the
semantic space of formulae. We tackle this goal for knowledge expressed in
Signal Temporal Logic (STL) and devise a method to compute continuous
embeddings of formulae with several desirable properties: the embedding (i) is
finite-dimensional, (ii) faithfully reflects the semantics of the formulae,
(iii) does not require any learning but instead is defined from basic
principles, (iv) is interpretable. Another significant contribution lies in
demonstrating the efficacy of the approach in two tasks: learning model
checking, where we predict the probability of requirements being satisfied in
stochastic processes; and integrating the embeddings into a neuro-symbolic
framework, to constrain the output of a deep-learning generative model to
comply to a given logical specification.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 10:04:56 GMT"
}
] | 1,716,508,800,000 | [
[
"Saveri",
"Gaia",
""
],
[
"Nenzi",
"Laura",
""
],
[
"Bortolussi",
"Luca",
""
],
[
"Křetínský",
"Jan",
""
]
] |
2405.14414 | Haiming Wang | Haiming Wang, Huajian Xin, Zhengying Liu, Wenda Li, Yinya Huang,
Jianqiao Lu, Zhicheng Yang, Jing Tang, Jian Yin, Zhenguo Li, Xiaodan Liang | Proving Theorems Recursively | 21 pages, 5 figures, 3 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in automated theorem proving leverage language models to
explore expanded search spaces by step-by-step proof generation. However, such
approaches are usually based on short-sighted heuristics (e.g., log probability
or value function scores) that potentially lead to suboptimal or even
distracting subgoals, preventing us from finding longer proofs. To address this
challenge, we propose POETRY (PrOvE Theorems RecursivelY), which proves
theorems in a recursive, level-by-level manner in the Isabelle theorem prover.
Unlike previous step-by-step methods, POETRY searches for a verifiable sketch
of the proof at each level and focuses on solving the current level's theorem
or conjecture. Detailed proofs of intermediate conjectures within the sketch
are temporarily replaced by a placeholder tactic called sorry, deferring their
proofs to subsequent levels. This approach allows the theorem to be tackled
incrementally by outlining the overall theorem at the first level and then
solving the intermediate conjectures at deeper levels. Experiments are
conducted on the miniF2F and PISA datasets and significant performance gains
are observed in our POETRY approach over state-of-the-art methods. POETRY on
miniF2F achieves an average proving success rate improvement of 5.1%. Moreover,
we observe a substantial increase in the maximum proof length found by POETRY,
from 10 to 26.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 10:35:08 GMT"
}
] | 1,716,508,800,000 | [
[
"Wang",
"Haiming",
""
],
[
"Xin",
"Huajian",
""
],
[
"Liu",
"Zhengying",
""
],
[
"Li",
"Wenda",
""
],
[
"Huang",
"Yinya",
""
],
[
"Lu",
"Jianqiao",
""
],
[
"Yang",
"Zhicheng",
""
],
[
"Tang",
"Jing",
""
],
[
"Yin",
"Jian",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
2405.14707 | Aniket Deroy | Aniket Deroy, Naksatra Kumar Bailung, Kripabandhu Ghosh, Saptarshi
Ghosh, Abhijnan Chakraborty | Artificial Intelligence (AI) in Legal Data Mining | Book name-Technology and Analytics for Law and Justice, Page
no-273-297, Chapter no-14 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the availability of vast amounts of data, legal data is often
unstructured, making it difficult even for law practitioners to ingest and
comprehend the same. It is important to organise the legal information in a way
that is useful for practitioners and downstream automation tasks. The word
ontology was used by Greek philosophers to discuss concepts of existence,
being, becoming and reality. Today, scientists use this term to describe the
relation between concepts, data, and entities. A great example of a working
ontology was developed by Dhani and Bhatt. This ontology deals with Indian
court cases on intellectual property rights (IPR). The future of legal
ontologies is likely to be handled by computer experts and legal experts alike.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 15:41:35 GMT"
}
] | 1,716,508,800,000 | [
[
"Deroy",
"Aniket",
""
],
[
"Bailung",
"Naksatra Kumar",
""
],
[
"Ghosh",
"Kripabandhu",
""
],
[
"Ghosh",
"Saptarshi",
""
],
[
"Chakraborty",
"Abhijnan",
""
]
] |
2405.14966 | Nadia M. Ady | Joonas Lahikainen, Nadia M. Ady, Christian Guckelsberger | Creativity and Markov Decision Processes | 10 pages, full paper at 15th International Conference on
Computational Creativity, ICCC'24 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Creativity is already regularly attributed to AI systems outside specialised
computational creativity (CC) communities. However, the evaluation of
creativity in AI at large typically lacks grounding in creativity theory, which
can promote inappropriate attributions and limit the analysis of creative
behaviour. While CC researchers have translated psychological theory into
formal models, the value of these models is limited by a gap to common AI
frameworks. To mitigate this limitation, we identify formal mappings between
Boden's process theory of creativity and Markov Decision Processes (MDPs),
using the Creative Systems Framework as a stepping stone. We study three out of
eleven mappings in detail to understand which types of creative processes,
opportunities for creativity (aberrations), and threats to creativity (uninspiration)
could be observed in an MDP. We conclude by discussing quality criteria for the
selection of such mappings for future work and applications.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 18:16:42 GMT"
}
] | 1,716,768,000,000 | [
[
"Lahikainen",
"Joonas",
""
],
[
"Ady",
"Nadia M.",
""
],
[
"Guckelsberger",
"Christian",
""
]
] |
2405.15383 | Nicola Dainese | Nicola Dainese, Matteo Merler, Minttu Alakuijala, Pekka Marttinen | Generating Code World Models with Large Language Models Guided by Monte
Carlo Tree Search | 10 pages in main text, 24 pages including references and
supplementary materials. 2 figures and 3 tables in the main text, 9 figures
and 12 tables when including the supplementary materials | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work we consider Code World Models, world models generated by a Large
Language Model (LLM) in the form of Python code for model-based Reinforcement
Learning (RL). Calling code instead of LLMs for planning has the advantages of
being precise, reliable, interpretable, and extremely efficient. However,
writing appropriate Code World Models requires the ability to understand
complex instructions, to generate exact code with non-trivial logic and to
self-debug a long program with feedback from unit tests and environment
trajectories. To address these challenges, we propose Generate, Improve and Fix
with Monte Carlo Tree Search (GIF-MCTS), a new code generation strategy for
LLMs. To test our approach, we introduce the Code World Models Benchmark
(CWMB), a suite of program synthesis and planning tasks comprised of 18 diverse
RL environments paired with corresponding textual descriptions and curated
trajectories. GIF-MCTS surpasses all baselines on the CWMB and two other
benchmarks, and we show that the Code World Models synthesized with it can be
successfully used for planning, resulting in model-based RL agents with greatly
improved sample efficiency and inference speed.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 09:31:26 GMT"
}
] | 1,716,768,000,000 | [
[
"Dainese",
"Nicola",
""
],
[
"Merler",
"Matteo",
""
],
[
"Alakuijala",
"Minttu",
""
],
[
"Marttinen",
"Pekka",
""
]
] |
2405.15414 | Yuxuan Guo | Yuxuan Guo, Shaohui Peng, Jiaming Guo, Di Huang, Xishan Zhang, Rui
Zhang, Yifan Hao, Ling Li, Zikang Tian, Mingju Gao, Yutai Li, Yiming Gan,
Shuai Liang, Zihao Zhang, Zidong Du, Qi Guo, Xing Hu, Yunji Chen | Luban: Building Open-Ended Creative Agents via Autonomous Embodied
Verification | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Building open agents has always been the ultimate goal in AI research, and
creative agents are the more enticing. Existing LLM agents excel at
long-horizon tasks with well-defined goals (e.g., `mine diamonds' in
Minecraft). However, they encounter difficulties on creative tasks with open
goals and abstract criteria due to the inability to bridge the gap between
them, thus lacking feedback for self-improvement in solving the task. In this
work, we introduce autonomous embodied verification techniques for agents to
fill the gap, laying the groundwork for creative tasks. Specifically, we
propose the Luban agent, which targets creative building tasks in Minecraft and
is equipped with two-level autonomous embodied verification inspired by human design
practices: (1) visual verification of 3D structural speculates, which comes
from agent synthesized CAD modeling programs; (2) pragmatic verification of the
creation by generating and verifying environment-relevant functionality
programs based on the abstract criteria. Extensive multi-dimensional human
studies and Elo ratings show that the Luban completes diverse creative building
tasks in our proposed benchmark and outperforms other baselines ($33\%$ to
$100\%$) in both visualization and pragmatism. Additional demos on the
real-world robotic arm show the creation potential of the Luban in the physical
world.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 10:25:59 GMT"
}
] | 1,716,768,000,000 | [
[
"Guo",
"Yuxuan",
""
],
[
"Peng",
"Shaohui",
""
],
[
"Guo",
"Jiaming",
""
],
[
"Huang",
"Di",
""
],
[
"Zhang",
"Xishan",
""
],
[
"Zhang",
"Rui",
""
],
[
"Hao",
"Yifan",
""
],
[
"Li",
"Ling",
""
],
[
"Tian",
"Zikang",
""
],
[
"Gao",
"Mingju",
""
],
[
"Li",
"Yutai",
""
],
[
"Gan",
"Yiming",
""
],
[
"Liang",
"Shuai",
""
],
[
"Zhang",
"Zihao",
""
],
[
"Du",
"Zidong",
""
],
[
"Guo",
"Qi",
""
],
[
"Hu",
"Xing",
""
],
[
"Chen",
"Yunji",
""
]
] |
2405.15568 | Jenny Zhuoting Zhang | Maxence Faldor, Jenny Zhang, Antoine Cully, Jeff Clune | OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness
with Environments Programmed in Code | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Open-ended and AI-generating algorithms aim to continuously generate and
solve increasingly complex tasks indefinitely, offering a promising path toward
more general intelligence. To accomplish this grand vision, learning must occur
within a vast array of potential tasks. Existing approaches to automatically
generating environments are constrained within manually predefined, often
narrow distributions of environments, limiting their ability to create any
learning environment. To address this limitation, we introduce a novel
framework, OMNI-EPIC, that augments previous work in Open-endedness via Models
of human Notions of Interestingness (OMNI) with Environments Programmed in Code
(EPIC). OMNI-EPIC leverages foundation models to autonomously generate code
specifying the next learnable (i.e., not too easy or difficult for the agent's
current skill set) and interesting (e.g., worthwhile and novel) tasks.
OMNI-EPIC generates both environments (e.g., an obstacle course) and reward
functions (e.g., progress through the obstacle course quickly without touching
red objects), enabling it, in principle, to create any simulatable learning
task. We showcase the explosive creativity of OMNI-EPIC, which continuously
innovates to suggest new, interesting learning challenges. We also highlight
how OMNI-EPIC can adapt to reinforcement learning agents' learning progress,
generating tasks that are of suitable difficulty. Overall, OMNI-EPIC can
endlessly create learnable and interesting environments, further propelling the
development of self-improving AI systems and AI-Generating Algorithms. Project
website with videos: https://dub.sh/omniepic
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 13:57:32 GMT"
}
] | 1,716,768,000,000 | [
[
"Faldor",
"Maxence",
""
],
[
"Zhang",
"Jenny",
""
],
[
"Cully",
"Antoine",
""
],
[
"Clune",
"Jeff",
""
]
] |
2405.15801 | Ljubica Djurovi\'c | Ljubica Djurovi\'c, Maja Lakovi\'c, Nenad Stojanovi\'c | Decision-making algorithm based on the energy of interval-valued fuzzy
soft sets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In our work, we continue to explore the properties of interval-valued fuzzy
soft sets, which are obtained by combining interval-valued fuzzy sets and soft
sets. We introduce the concept of energy of an interval-valued fuzzy soft set,
as well as pessimistic and optimistic energy, enabling us to construct an
effective decision-making algorithm. Through examples, the paper demonstrates
how the introduced algorithm is successfully applied to problems involving
uncertainty. Additionally, we compare the introduced method with other methods
dealing with similar or related issues.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 09:54:44 GMT"
}
] | 1,716,854,400,000 | [
[
"Djurović",
"Ljubica",
""
],
[
"Laković",
"Maja",
""
],
[
"Stojanović",
"Nenad",
""
]
] |
2405.15804 | Sarath Sreedharan | Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati | Explainable Human-AI Interaction: A Planning Perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From its inception, AI has had a rather ambivalent relationship with humans
-- swinging between their augmentation and replacement. Now, as AI technologies
enter our everyday lives at an ever increasing pace, there is a greater need
for AI systems to work synergistically with humans. One critical requirement
for such synergistic human-AI interaction is that the AI systems be explainable
to the humans in the loop. To do this effectively, AI agents need to go beyond
planning with their own models of the world, and take into account the mental
model of the human in the loop. Drawing from several years of research in our
lab, we will discuss how the AI agent can use these mental models to either
conform to human expectations, or change those expectations through explanatory
communication. While the main focus of the book is on cooperative scenarios, we
will point out how the same mental models can be used for obfuscation and
deception. Although the book is primarily driven by our own research in these
areas, in every chapter, we will provide ample connections to relevant research
from other groups.
| [
{
"version": "v1",
"created": "Sun, 19 May 2024 22:22:21 GMT"
}
] | 1,716,854,400,000 | [
[
"Sreedharan",
"Sarath",
""
],
[
"Kulkarni",
"Anagha",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2405.15808 | Edward Chang | Edward Y. Chang | Ensuring Ground Truth Accuracy in Healthcare with the EVINCE framework | 23 pages, 4 tables, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Misdiagnosis is a significant issue in healthcare, leading to harmful
consequences for patients. The propagation of mislabeled data through machine
learning models into clinical practice is unacceptable. This paper proposes
EVINCE, a system designed to 1) improve diagnosis accuracy and 2) rectify
misdiagnoses and minimize training data errors. EVINCE stands for Entropy
Variation through Information Duality with Equal Competence, leveraging this
novel theory to optimize the diagnostic process using multiple Large Language
Models (LLMs) in a structured debate framework. Our empirical study verifies
EVINCE to be effective in achieving its design goals.
| [
{
"version": "v1",
"created": "Mon, 20 May 2024 18:26:36 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 05:11:50 GMT"
}
] | 1,716,940,800,000 | [
[
"Chang",
"Edward Y.",
""
]
] |
2405.15832 | \'Alvaro Huertas-Garc\'ia | \'Alvaro Huertas-Garc\'ia, Javier Mu\~noz, Enrique De Miguel Ambite,
Marcos Avil\'es Camarmas, Jos\'e F\'elix Ovejero | DETECTA 2.0: Research into non-intrusive methodologies supported by
Industry 4.0 enabling technologies for predictive and cyber-secure
maintenance in SMEs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The integration of predictive maintenance and cybersecurity represents a
transformative advancement for small and medium-sized enterprises (SMEs)
operating within the Industry 4.0 paradigm. Despite their economic importance,
SMEs often face significant challenges in adopting advanced technologies due to
resource constraints and knowledge gaps. The DETECTA 2.0 project addresses
these hurdles by developing an innovative system that harmonizes real-time
anomaly detection, sophisticated analytics, and predictive forecasting
capabilities.
The system employs a semi-supervised methodology, combining unsupervised
anomaly detection with supervised learning techniques. This approach enables
more agile and cost-effective development of AI detection systems,
significantly reducing the time required for manual case review.
At the core lies a Digital Twin interface, providing intuitive real-time
visualizations of machine states and detected anomalies. Leveraging
cutting-edge AI engines, the system intelligently categorizes anomalies based
on observed patterns, differentiating between technical errors and potential
cybersecurity incidents. This discernment is fortified by detailed analytics,
including certainty levels that enhance alert reliability and minimize false
positives.
The predictive engine uses advanced time series algorithms like N-HiTS to
forecast future machine utilization trends. This proactive approach optimizes
maintenance planning, enhances cybersecurity measures, and minimizes unplanned
downtimes despite variable production processes.
With its modular architecture enabling seamless integration across industrial
setups and low implementation costs, DETECTA 2.0 presents an attractive
solution for SMEs to strengthen their predictive maintenance and cybersecurity
strategies.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 08:38:38 GMT"
}
] | 1,716,854,400,000 | [
[
"Huertas-García",
"Álvaro",
""
],
[
"Muñoz",
"Javier",
""
],
[
"Ambite",
"Enrique De Miguel",
""
],
[
"Camarmas",
"Marcos Avilés",
""
],
[
"Ovejero",
"José Félix",
""
]
] |
2405.15907 | Daniel Bramblett | Daniel Bramblett, Siddharth Srivastava | Belief-State Query Policies for Planning With Preferences Under Partial
Observability | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning in real-world settings often entails addressing partial
observability while aligning with users' preferences. We present a novel
framework for expressing users' preferences about agent behavior in a partially
observable setting using parameterized belief-state query (BSQ) preferences in
the setting of goal-oriented partially observable Markov decision processes
(gPOMDPs). We present the first formal analysis of such preferences and prove
that while the expected value of a BSQ preference is not a convex function
w.r.t its parameters, it is piecewise constant and yields an implicit discrete
parameter search space that is finite for finite horizons. This theoretical
result leads to novel algorithms that optimize gPOMDP agent behavior while
guaranteeing user preference compliance. Theoretical analysis proves that our
algorithms converge to the optimal preference-compliant behavior in the limit.
Empirical results show that BSQ preferences provide a computationally feasible
approach for planning with preferences in partially observable settings.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 20:04:51 GMT"
}
] | 1,716,854,400,000 | [
[
"Bramblett",
"Daniel",
""
],
[
"Srivastava",
"Siddharth",
""
]
] |
2405.16072 | Seyed Arash Sheikholeslam | Seyed Arash Sheikholeslam, Andre Ivanov | SynthAI: A Multi Agent Generative AI Framework for Automated Modular HLS
Design Generation | This work is in progress and we will be updating it | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce SynthAI, a pioneering method for the automated
creation of High-Level Synthesis (HLS) designs. SynthAI integrates ReAct
agents, Chain-of-Thought (CoT) prompting, web search technologies, and the
Retrieval-Augmented Generation (RAG) framework within a structured decision
graph. This innovative approach enables the systematic decomposition of complex
hardware design tasks into multiple stages and smaller, manageable modules. As
a result, SynthAI produces synthesizable designs that closely adhere to
user-specified design objectives and functional requirements. We further
validate the capabilities of SynthAI through several case studies, highlighting
its proficiency in generating complex, multi-module logic designs from a single
initial prompt. The SynthAI code is provided via the following repo:
\url{https://github.com/sarashs/FPGA_AGI}
| [
{
"version": "v1",
"created": "Sat, 25 May 2024 05:45:55 GMT"
}
] | 1,716,854,400,000 | [
[
"Sheikholeslam",
"Seyed Arash",
""
],
[
"Ivanov",
"Andre",
""
]
] |
2405.16191 | Jiarun Wei | Junhao Yu, Jiarun Wei | Rocket Landing Control with Grid Fins and Path-following using MPC | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In this project, we attempt to optimize a landing trajectory of a rocket. The
goal is to minimize the total fuel consumption during the landing process using
different techniques. Once the optimal and feasible trajectory is generated
using batch approach, we attempt to follow the path using a Model Predictive
Control (MPC) based algorithm, called Trajectory Optimizing Path following
Estimation from Demonstration (TOPED), in order to generalize to similar
initial states and models, where we introduce a novel cost function for the MPC
to solve. We further show that TOPED can follow a demonstration trajectory well
in practice under model mismatch and different initial states.
| [
{
"version": "v1",
"created": "Sat, 25 May 2024 11:42:29 GMT"
}
] | 1,716,854,400,000 | [
[
"Yu",
"Junhao",
""
],
[
"Wei",
"Jiarun",
""
]
] |
2405.16334 | Haoyu Wang | Haoyu Wang and Tao Li and Zhiwei Deng and Dan Roth and Yang Li | Devil's Advocate: Anticipatory Reflection for LLM Agents | 16 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we introduce a novel approach that equips LLM agents with
introspection, enhancing consistency and adaptability in solving complex tasks.
Our approach prompts LLM agents to decompose a given task into manageable
subtasks (i.e., to make a plan), and to continuously introspect upon the
suitability and results of their actions. We implement a three-fold
introspective intervention: 1) anticipatory reflection on potential failures
and alternative remedy before action execution, 2) post-action alignment with
subtask objectives and backtracking with remedy to ensure utmost effort in plan
execution, and 3) comprehensive review upon plan completion for future strategy
refinement. By deploying and experimenting with this methodology - a zero-shot
approach - within WebArena for practical tasks in web environments, our agent
demonstrates superior performance over existing zero-shot methods. The
experimental results suggest that our introspection-driven approach not only
enhances the agent's ability to navigate unanticipated challenges through a
robust mechanism of plan execution, but also improves efficiency by reducing
the number of trials and plan revisions needed to achieve a task.
| [
{
"version": "v1",
"created": "Sat, 25 May 2024 19:20:15 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 03:22:44 GMT"
},
{
"version": "v3",
"created": "Wed, 29 May 2024 14:12:53 GMT"
}
] | 1,717,027,200,000 | [
[
"Wang",
"Haoyu",
""
],
[
"Li",
"Tao",
""
],
[
"Deng",
"Zhiwei",
""
],
[
"Roth",
"Dan",
""
],
[
"Li",
"Yang",
""
]
] |
2405.16929 | Lucas Jarnac | Lucas Jarnac, Yoan Chabot, Miguel Couceiro | Uncertainty Management in the Construction of Knowledge Graphs: a Survey | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge Graphs (KGs) are a major asset for companies thanks to their great
flexibility in data representation and their numerous applications, e.g.,
vocabulary sharing, Q/A or recommendation systems. To build a KG it is a common
practice to rely on automatic methods for extracting knowledge from various
heterogeneous sources. But in a noisy and uncertain world, knowledge may not be
reliable and conflicts between data sources may occur. Integrating unreliable
data would directly impact the use of the KG, therefore such conflicts must be
resolved. This could be done manually by selecting the best data to integrate.
This first approach is highly accurate, but costly and time-consuming. That is
why recent efforts focus on automatic approaches, which represents a
challenging task since it requires handling the uncertainty of extracted
knowledge throughout its integration into the KG. We survey state-of-the-art
approaches in this direction and present constructions of both open and
enterprise KGs and how their quality is maintained. We then describe different
knowledge extraction methods, introducing additional uncertainty. We also
discuss downstream tasks after knowledge acquisition, including KG completion
using embedding models, knowledge alignment, and knowledge fusion in order to
address the problem of knowledge uncertainty in KG construction. We conclude
with a discussion on the remaining challenges and perspectives when
constructing a KG taking into account uncertainty.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 08:22:52 GMT"
}
] | 1,716,854,400,000 | [
[
"Jarnac",
"Lucas",
""
],
[
"Chabot",
"Yoan",
""
],
[
"Couceiro",
"Miguel",
""
]
] |
2405.17009 | Xiaoqian Liu | Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao, Junge Zhang | Position: Foundation Agents as the Paradigm Shift for Decision Making | 17 pages, camera-ready version of ICML 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision making demands intricate interplay between perception, memory, and
reasoning to discern optimal policies. Conventional approaches to decision
making face challenges related to low sample efficiency and poor
generalization. In contrast, foundation models in language and vision have
showcased rapid adaptation to diverse new tasks. Therefore, we advocate for the
construction of foundation agents as a transformative shift in the learning
paradigm of agents. This proposal is underpinned by the formulation of
foundation agents with their fundamental characteristics and challenges
motivated by the success of large language models (LLMs). Moreover, we specify
the roadmap of foundation agents from large interactive data collection or
generation, to self-supervised pretraining and adaptation, and knowledge and
value alignment with LLMs. Lastly, we pinpoint critical research questions
derived from the formulation and delineate trends for foundation agents
supported by real-world use cases, addressing both technical and theoretical
aspects to propel the field towards a more comprehensive and impactful future.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 09:54:50 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 13:00:14 GMT"
},
{
"version": "v3",
"created": "Wed, 29 May 2024 14:15:09 GMT"
}
] | 1,717,027,200,000 | [
[
"Liu",
"Xiaoqian",
""
],
[
"Lou",
"Xingzhou",
""
],
[
"Jiao",
"Jianbin",
""
],
[
"Zhang",
"Junge",
""
]
] |
2405.17724 | Wei Pang | Wei Pang, Masoumeh Shafieinejad, Lucy Liu, Xi He | ClavaDDPM: Multi-relational Data Synthesis with Cluster-guided Diffusion
Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent research in tabular data synthesis has focused on single tables,
whereas real-world applications often involve complex data with tens or
hundreds of interconnected tables. Previous approaches to synthesizing
multi-relational (multi-table) data fall short in two key aspects: scalability
for larger datasets and capturing long-range dependencies, such as correlations
between attributes spread across different tables. Inspired by the success of
diffusion models in tabular data modeling, we introduce
$\textbf{C}luster$ $\textbf{La}tent$ $\textbf{Va}riable$ $guided$
$\textbf{D}enoising$ $\textbf{D}iffusion$ $\textbf{P}robabilistic$
$\textbf{M}odels$ (ClavaDDPM). This novel approach leverages clustering labels
as intermediaries to model relationships between tables, specifically focusing
on foreign key constraints. ClavaDDPM leverages the robust generation
capabilities of diffusion models while incorporating efficient algorithms to
propagate the learned latent variables across tables. This enables ClavaDDPM to
capture long-range dependencies effectively.
Extensive evaluations on multi-table datasets of varying sizes show that
ClavaDDPM significantly outperforms existing methods for these long-range
dependencies while remaining competitive on utility metrics for single-table
data.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 00:42:18 GMT"
}
] | 1,716,940,800,000 | [
[
"Pang",
"Wei",
""
],
[
"Shafieinejad",
"Masoumeh",
""
],
[
"Liu",
"Lucy",
""
],
[
"He",
"Xi",
""
]
] |
2405.17741 | Rui Kong | Rui Kong, Qiyang Li, Xinyu Fang, Qingtian Feng, Qingfeng He, Yazhu
Dong, Weijun Wang, Yuanchun Li, Linghe Kong, Yunxin Liu | LoRA-Switch: Boosting the Efficiency of Dynamic LLM Adapters via
System-Algorithm Co-design | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent literature has found that an effective method to customize or further
improve large language models (LLMs) is to add dynamic adapters, such as
low-rank adapters (LoRA) with Mixture-of-Experts (MoE) structures. Though such
dynamic adapters incur modest computational complexity, they surprisingly lead
to huge inference latency overhead, slowing down the decoding speed by 2.5+
times. In this paper, we analyze the fine-grained costs of the dynamic adapters
and find that the fragmented CUDA kernel calls are the root cause. Therefore,
we propose LoRA-Switch, a system-algorithm co-designed architecture for
efficient dynamic adapters. Unlike most existing dynamic structures that adopt
layer-wise or block-wise dynamic routing, LoRA-Switch introduces a token-wise
routing mechanism. It switches the LoRA adapters and weights for each token and
merges them into the backbone for inference. For efficiency, this switching is
implemented with an optimized CUDA kernel, which fuses the merging operations
for all LoRA adapters at once. Based on experiments with popular open-source
LLMs on common benchmarks, our approach has demonstrated similar accuracy
improvement as existing dynamic adapters, while reducing the decoding latency
by more than 2.4 times.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 01:53:26 GMT"
}
] | 1,716,940,800,000 | [
[
"Kong",
"Rui",
""
],
[
"Li",
"Qiyang",
""
],
[
"Fang",
"Xinyu",
""
],
[
"Feng",
"Qingtian",
""
],
[
"He",
"Qingfeng",
""
],
[
"Dong",
"Yazhu",
""
],
[
"Wang",
"Weijun",
""
],
[
"Li",
"Yuanchun",
""
],
[
"Kong",
"Linghe",
""
],
[
"Liu",
"Yunxin",
""
]
] |
2405.17888 | Jiaxiang Li | Jiaxiang Li, Siliang Zeng, Hoi-To Wai, Chenliang Li, Alfredo Garcia,
Mingyi Hong | Getting More Juice Out of the SFT Data: Reward Learning from Human
Demonstration Improves SFT for LLM Alignment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aligning human preference and value is an important requirement for
contemporary foundation models. State-of-the-art techniques such as
Reinforcement Learning from Human Feedback (RLHF) often consist of two stages:
1) supervised fine-tuning (SFT), where the model is fine-tuned by learning from
human demonstration data; 2) Preference learning, where preference data is used
to learn a reward model, which is in turn used by a reinforcement learning (RL)
step to fine-tune the model. Such reward model serves as a proxy to human
preference, and it is critical to guide the RL step towards improving the model
quality. In this work, we argue that the SFT stage significantly benefits from
learning a reward model as well. Instead of using the human demonstration data
directly via supervised learning, we propose to leverage an Inverse
Reinforcement Learning (IRL) technique to (explicitly or implicitly) build a
reward model while learning the policy model. This approach leads to new SFT
algorithms that are not only efficient to implement, but also promote the
ability to distinguish between the preferred and non-preferred continuations.
Moreover, we identify a connection between the proposed IRL-based approach and
certain self-play approaches proposed recently, and show that self-play is a
special case of modeling a reward-learning agent. Theoretically, we show that
the proposed algorithms converge to the stationary solutions of the IRL
problem. Empirically, we align 1B and 7B models using proposed methods and
evaluate them on a reward benchmark model and the HuggingFace Open LLM
Leaderboard. The proposed methods show significant performance improvement over
existing SFT approaches. Our results indicate that it is beneficial to
explicitly or implicitly leverage reward learning throughout the entire
alignment process.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 07:11:05 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2024 13:33:33 GMT"
}
] | 1,717,027,200,000 | [
[
"Li",
"Jiaxiang",
""
],
[
"Zeng",
"Siliang",
""
],
[
"Wai",
"Hoi-To",
""
],
[
"Li",
"Chenliang",
""
],
[
"Garcia",
"Alfredo",
""
],
[
"Hong",
"Mingyi",
""
]
] |
2405.17934 | Zhenjie Zhang Dr | Zhenjie Zhang, Yuyang Rao, Hao Xiao, Xiaokui Xiao, Yin Yang | Proof of Quality: A Costless Paradigm for Trustless Generative AI Model
Inference on Blockchains | 12 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative AI models, such as GPT-4 and Stable Diffusion, have demonstrated
powerful and disruptive capabilities in natural language and image tasks.
However, deploying these models in decentralized environments remains
challenging. Unlike traditional centralized deployment, systematically
guaranteeing the integrity of AI model services in fully decentralized
environments, particularly on trustless blockchains, is both crucial and
difficult. In this paper, we present a new inference paradigm called
\emph{proof of quality} (PoQ) to enable the deployment of arbitrarily large
generative models on blockchain architecture. Unlike traditional approaches
based on validating inference procedures, such as ZKML or OPML, our PoQ
paradigm focuses on the outcome quality of model inference. Using lightweight
BERT-based cross-encoders as our underlying quality evaluation model, we design
and implement PQML, the first practical protocol for real-world NLP generative
model inference on blockchains, tailored for popular open-source models such as
Llama 3 and Mixtral. Our analysis demonstrates that our protocol is robust
against adversarial but rational participants in ecosystems, where lazy or
dishonest behavior results in fewer benefits compared to well-behaving
participants. The computational overhead of validating the quality evaluation
is minimal, allowing quality validators to complete the quality check within a
second, even using only a CPU. Preliminary simulation results show that PoQ
consensus is generated in milliseconds, 1,000 times faster than any existing
scheme.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 08:00:54 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 13:26:35 GMT"
}
] | 1,717,113,600,000 | [
[
"Zhang",
"Zhenjie",
""
],
[
"Rao",
"Yuyang",
""
],
[
"Xiao",
"Hao",
""
],
[
"Xiao",
"Xiaokui",
""
],
[
"Yang",
"Yin",
""
]
] |
2405.17950 | Zangir Iklassov | Zangir Iklassov and Yali Du and Farkhad Akimov and Martin Takac | Self-Guiding Exploration for Combinatorial Problems | 22 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have become pivotal in addressing reasoning
tasks across diverse domains, including arithmetic, commonsense, and symbolic
reasoning. They utilize prompting techniques such as Exploration-of-Thought,
Decomposition, and Refinement to effectively navigate and solve intricate
tasks. Despite these advancements, the application of LLMs to Combinatorial
Problems (CPs), known for their NP-hardness and critical roles in logistics and
resource management, remains underexplored. To address this gap, we introduce a
novel prompting strategy: Self-Guiding Exploration (SGE), designed to enhance
the performance of solving CPs. SGE operates autonomously, generating multiple
thought trajectories for each CP task. It then breaks these trajectories down
into actionable subtasks, executes them sequentially, and refines the results
to ensure optimal outcomes. We present our research as the first to apply LLMs
to a broad range of CPs and demonstrate that SGE outperforms existing prompting
strategies by over 27.84% in CP optimization performance. Additionally, SGE
achieves a 2.46% higher accuracy over the best existing results in other
reasoning tasks (arithmetic, commonsense, and symbolic).
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 08:26:54 GMT"
}
] | 1,716,940,800,000 | [
[
"Iklassov",
"Zangir",
""
],
[
"Du",
"Yali",
""
],
[
"Akimov",
"Farkhad",
""
],
[
"Takac",
"Martin",
""
]
] |
2405.17956 | Anirudhan Badrinath | Anirudhan Badrinath, Prabhat Agarwal, Jiajing Xu | Hybrid Preference Optimization: Augmenting Direct Preference
Optimization with Auxiliary Objectives | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For aligning large language models (LLMs), prior work has leveraged
reinforcement learning via human feedback (RLHF) or variations of direct
preference optimization (DPO). While DPO offers a simpler framework based on
maximum likelihood estimation, it compromises on the ability to tune language
models to easily maximize non-differentiable and non-binary objectives
according to the LLM designer's preferences (e.g., using simpler language or
minimizing specific kinds of harmful content). These may neither align with
user preferences nor even be able to be captured tractably by binary preference
data. To leverage the simplicity and performance of DPO with the
generalizability of RL, we propose a hybrid approach between DPO and RLHF. With
a simple augmentation to the implicit reward decomposition of DPO, we allow for
tuning LLMs to maximize a set of arbitrary auxiliary rewards using offline RL.
The proposed method, Hybrid Preference Optimization (HPO), shows the ability to
effectively generalize to both user preferences and auxiliary designer
objectives, while preserving alignment performance across a range of
challenging benchmarks and model sizes.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 08:35:48 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2024 20:48:47 GMT"
}
] | 1,717,113,600,000 | [
[
"Badrinath",
"Anirudhan",
""
],
[
"Agarwal",
"Prabhat",
""
],
[
"Xu",
"Jiajing",
""
]
] |
2405.18014 | Wenbing Li None | Wenbing Li, Hang Zhou, Junqing Yu, Zikai Song, Wei Yang | Coupled Mamba: Enhanced Multi-modal Fusion with Coupled State Space
Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The essence of multi-modal fusion lies in exploiting the complementary
information inherent in diverse modalities. However, prevalent fusion methods
rely on traditional neural architectures and are inadequately equipped to
capture the dynamics of interactions across modalities, particularly in the
presence of complex intra- and inter-modality correlations. Recent advancements
in State Space Models (SSMs), notably exemplified by the Mamba model, have
emerged as promising contenders. In particular, Mamba's state-evolving process
implies a stronger modality fusion paradigm, making multi-modal fusion on SSMs
an appealing direction. However, fusing multiple modalities is challenging for
SSMs due to their hardware-aware parallelism designs. To this end, this paper
proposes the Coupled SSM model, for coupling state chains of multiple
modalities while maintaining independence of intra-modality state processes.
Specifically, in our coupled scheme, we devise an inter-modal hidden states
transition scheme, in which the current state is dependent on the states of its
own chain and that of the neighbouring chains at the previous time-step. To
fully comply with the hardware-aware parallelism, we devise an expedite coupled
state transition scheme and derive its corresponding global convolution kernel
for parallelism. Extensive experiments on CMU-MOSEI, CH-SIMS, and CH-SIMSV2
with multi-domain input verify the effectiveness of our model compared to
current state-of-the-art methods, improving F1-Score by 0.4\%, 0.9\%, and 2.3\%
on the three datasets respectively, with 49\% faster inference and an 83.7\%
GPU memory saving. The results demonstrate that the Coupled Mamba model is
capable of enhanced multi-modal fusion.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 09:57:03 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2024 05:19:15 GMT"
}
] | 1,717,027,200,000 | [
[
"Li",
"Wenbing",
""
],
[
"Zhou",
"Hang",
""
],
[
"Yu",
"Junqing",
""
],
[
"Song",
"Zikai",
""
],
[
"Yang",
"Wei",
""
]
] |
2405.18016 | Christian Guckelsberger | Lisa Soros, Alyssa Adams, Stefano Kalonaris, Olaf Witkowski, Christian
Guckelsberger | On Creativity and Open-Endedness | 9 pages, accepted for publication in the proceedings of the 2024
International Conference for Artificial Life, Copenhagen, Denmark | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Life (ALife) as an interdisciplinary field draws inspiration and
influence from a variety of perspectives. Scientific progress crucially
depends, then, on concerted efforts to invite cross-disciplinary dialogue. The
goal of this paper is to revitalize discussions of potential connections
between the fields of Computational Creativity (CC) and ALife, focusing
specifically on the concept of Open-Endedness (OE); the primary goal of CC is
to endow artificial systems with creativity, and ALife has dedicated much
research effort into studying and synthesizing OE and artificial innovation.
However, despite the close proximity of these concepts, their use so far
remains confined to their respective communities, and their relationship is
largely unclear. We provide historical context for research in both domains,
and review the limited work connecting research on creativity and OE
explicitly. We then highlight specific questions to be considered, with the
eventual goals of (i) decreasing conceptual ambiguity by highlighting
similarities and differences between the concepts of OE, (ii) identifying
synergy effects of a research agenda that encompasses both OE and creativity,
and (iii) establishing a dialogue between ALife and CC research.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 09:57:37 GMT"
}
] | 1,716,940,800,000 | [
[
"Soros",
"Lisa",
""
],
[
"Adams",
"Alyssa",
""
],
[
"Kalonaris",
"Stefano",
""
],
[
"Witkowski",
"Olaf",
""
],
[
"Guckelsberger",
"Christian",
""
]
] |
2405.18073 | Sanjay Modgil | Elfia Bezou-Vrakatseli and Oana Cocarascu and Sanjay Modgil | Towards Dialogues for Joint Human-AI Reasoning and Value Alignment | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We argue that enabling human-AI dialogue, purposed to support joint reasoning
(i.e., 'inquiry'), is important for ensuring that AI decision making is aligned
with human values and preferences. In particular, we point to logic-based
models of argumentation and dialogue, and suggest that the traditional focus on
persuasion dialogues be replaced by a focus on inquiry dialogues, and the
distinct challenges that joint inquiry raises. Given recent dramatic advances
in the performance of large language models (LLMs), and the anticipated
increase in their use for decision making, we provide a roadmap for research
into inquiry dialogues for supporting joint human-LLM reasoning tasks that are
ethically salient, and that thereby require that decisions are value aligned.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 11:29:57 GMT"
}
] | 1,716,940,800,000 | [
[
"Bezou-Vrakatseli",
"Elfia",
""
],
[
"Cocarascu",
"Oana",
""
],
[
"Modgil",
"Sanjay",
""
]
] |
2405.18106 | Kai Chen | Kai Chen, Ye Wang, Yitong Li, Aiping Li, Han Yu and Xin Song | A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation
and Extrapolation | To appear in ACL 2024 main conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal knowledge graph (TKG) reasoning has two settings: interpolation
reasoning and extrapolation reasoning. Both of them draw plenty of research
interest and have great significance. Methods of the former de-emphasize the
temporal correlations among fact sequences, while methods of the latter require
a strict chronological order of knowledge and ignore the inference clues
provided by missing past facts. These limitations restrict the practicability
of TKG applications, as almost all existing TKG reasoning methods are designed
specifically to address only one of the two settings. To this end, this paper proposes an
original Temporal PAth-based Reasoning (TPAR) model for both the interpolation
and extrapolation reasoning. TPAR adopts a neural-driven symbolic reasoning
fashion that is robust to ambiguous and noisy temporal data and offers fine
interpretability as well. Comprehensive experiments show that TPAR outperforms
SOTA methods on the link prediction task for both the interpolation and the
extrapolation settings. A novel pipeline experimental setting is designed to
evaluate the performances of SOTA combinations and the proposed TPAR towards
interpolation and extrapolation reasoning. More diverse experiments are
conducted to show the robustness and interpretability of TPAR.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 12:13:07 GMT"
}
] | 1,716,940,800,000 | [
[
"Chen",
"Kai",
""
],
[
"Wang",
"Ye",
""
],
[
"Li",
"Yitong",
""
],
[
"Li",
"Aiping",
""
],
[
"Yu",
"Han",
""
],
[
"Song",
"Xin",
""
]
] |
2405.18123 | Martin Balla | Martin Balla, George E.M. Long, James Goodman, Raluca D. Gaina, Diego
Perez-Liebana | PyTAG: Tabletop Games for Multi-Agent Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modern Tabletop Games present various interesting challenges for Multi-agent
Reinforcement Learning. In this paper, we introduce PyTAG, a new framework that
supports interacting with a large collection of games implemented in the
Tabletop Games framework. In this work we highlight the challenges tabletop
games pose from a game-playing agent perspective, along with the
opportunities they provide for future research. Additionally, we highlight the
technical challenges that involve training Reinforcement Learning agents on
these games. To explore the Multi-agent setting provided by PyTAG we train the
popular Proximal Policy Optimisation Reinforcement Learning algorithm using
self-play on a subset of games and evaluate the trained policies against some
simple agents and Monte-Carlo Tree Search implemented in the Tabletop Games
framework.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 12:30:28 GMT"
}
] | 1,716,940,800,000 | [
[
"Balla",
"Martin",
""
],
[
"Long",
"George E. M.",
""
],
[
"Goodman",
"James",
""
],
[
"Gaina",
"Raluca D.",
""
],
[
"Perez-Liebana",
"Diego",
""
]
] |
2405.18139 | Sakir Hossain Faruque | Sakir Hossain Faruque, Sharun Akter Khushbu, Sharmin Akter | Unlocking Futures: A Natural Language Driven Career Prediction System
for Computer Science and Software Engineering Students | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | A career is a crucial aspect for any person to fulfill their desires through
hard work. During their studies, students cannot find the best career
suggestions unless they receive meaningful guidance tailored to their skills.
Therefore, we developed an AI-assisted model for early prediction to provide
better career suggestions. Although the task is difficult, proper guidance can
make it easier. Effective career guidance requires understanding a student's
academic skills, interests, and skill-related activities. In this research, we
collected essential information from Computer Science (CS) and Software
Engineering (SWE) students to train a machine learning (ML) model that predicts
career paths based on students' career-related information. To adequately train
the models, we applied Natural Language Processing (NLP) techniques and
completed dataset pre-processing. For comparative analysis, we utilized
multiple classification ML algorithms and deep learning (DL) algorithms. This
study contributes valuable insights to educational advising by providing
specific career suggestions based on the unique features of CS and SWE
students. Additionally, the research helps individual CS and SWE students find
suitable jobs that match their skills, interests, and skill-related activities.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 12:56:57 GMT"
}
] | 1,716,940,800,000 | [
[
"Faruque",
"Sakir Hossain",
""
],
[
"Khushbu",
"Sharun Akter",
""
],
[
"Akter",
"Sharmin",
""
]
] |
2405.18166 | Wei Zhao | Wei Zhao and Zhe Li and Yige Li and Ye Zhang and Jun Sun | Defending Large Language Models Against Jailbreak Attacks via
Layer-specific Editing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are increasingly being adopted in a wide range
of real-world applications. Despite their impressive performance, recent
studies have shown that LLMs are vulnerable to deliberately crafted adversarial
prompts even when aligned via Reinforcement Learning from Human Feedback or
supervised fine-tuning. While existing defense methods focus on either
detecting harmful prompts or reducing the likelihood of harmful responses
through various means, defending LLMs against jailbreak attacks based on the
inner mechanisms of LLMs remains largely unexplored. In this work, we
investigate how LLMs respond to harmful prompts and propose a novel defense
method termed \textbf{L}ayer-specific \textbf{Ed}iting (LED) to enhance the
resilience of LLMs against jailbreak attacks. Through LED, we reveal that
several critical \textit{safety layers} exist among the early layers of LLMs.
We then show that realigning these safety layers (and some selected additional
layers) with the decoded safe response from selected target layers can
significantly improve the alignment of LLMs against jailbreak attacks.
Extensive experiments across various LLMs (e.g., Llama2, Mistral) show the
effectiveness of LED, which effectively defends against jailbreak attacks while
maintaining performance on benign prompts. Our code is available at
\url{https://github.com/ledllm/ledllm}.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 13:26:12 GMT"
}
] | 1,716,940,800,000 | [
[
"Zhao",
"Wei",
""
],
[
"Li",
"Zhe",
""
],
[
"Li",
"Yige",
""
],
[
"Zhang",
"Ye",
""
],
[
"Sun",
"Jun",
""
]
] |
2405.18246 | Devon Graham Mr | Devon Graham and Kevin Leyton-Brown | Utilitarian Algorithm Configuration for Infinite Parameter Spaces | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Utilitarian algorithm configuration is a general-purpose technique for
automatically searching the parameter space of a given algorithm to optimize
its performance, as measured by a given utility function, on a given set of
inputs. Recently introduced utilitarian configuration procedures offer
optimality guarantees about the returned parameterization while provably
adapting to the hardness of the underlying problem. However, the applicability
of these approaches is severely limited by the fact that they only search a
finite, relatively small set of parameters. They cannot effectively search the
configuration space of algorithms with continuous or uncountable parameters. In
this paper we introduce a new procedure, which we dub COUP (Continuous,
Optimistic Utilitarian Procrastination). COUP is designed to search infinite
parameter spaces efficiently to find good configurations quickly. Furthermore,
COUP maintains the theoretical benefits of previous utilitarian configuration
procedures when applied to finite parameter spaces but is significantly faster,
both provably and experimentally.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 14:58:07 GMT"
}
] | 1,716,940,800,000 | [
[
"Graham",
"Devon",
""
],
[
"Leyton-Brown",
"Kevin",
""
]
] |
2405.18248 | Masataro Asai | Masataro Asai, Stephen Wissow | Extreme Value Monte Carlo Tree Search | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite being successful in board games and reinforcement learning (RL), UCT,
a Monte-Carlo Tree Search (MCTS) combined with UCB1 Multi-Armed Bandit (MAB),
has had limited success in domain-independent planning until recently. Previous
work showed that UCB1, designed for $[0,1]$-bounded rewards, is not appropriate
for estimating distances-to-go, which are potentially unbounded in
$\mathbb{R}$ (such as heuristic functions used in classical planning), and then
proposed combining MCTS with MABs designed for Gaussian reward distributions,
successfully improving the performance. In this paper, we further sharpen
our understanding of ideal bandits for planning tasks. Existing work has two
issues: First, while Gaussian MABs no longer over-specify the distances as
$h\in [0,1]$, they under-specify them as $h\in [-\infty,\infty]$, even though
distances are non-negative and can be further bounded in some cases. Second,
there is no theoretical justification for Full-Bellman backup (Schulte &
Keller, 2014), which backpropagates the minimum/maximum of samples. We identify
\emph{extreme value} statistics as a theoretical framework that resolves both
issues at once, propose two bandits, UCB1-Uniform/Power, and apply them to MCTS for
classical planning. We formally prove their regret bounds and empirically
demonstrate their performance in classical planning.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 14:58:43 GMT"
}
] | 1,716,940,800,000 | [
[
"Asai",
"Masataro",
""
],
[
"Wissow",
"Stephen",
""
]
] |
2405.18272 | Christian Blum | Camilo Chac\'on Sartori, Christian Blum, Filippo Bistaffa, Guillem
Rodr\'iguez Corominas | Metaheuristics and Large Language Models Join Forces: Towards an
Integrated Optimization Approach | Submitted for publication in an international journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the rise of Large Language Models (LLMs) a couple of years ago,
researchers in metaheuristics (MHs) have wondered how to use their power in a
beneficial way within their algorithms. This paper introduces a novel approach
that leverages LLMs as pattern recognition tools to improve MHs. The resulting
hybrid method, tested in the context of a social network-based combinatorial
optimization problem, outperforms existing state-of-the-art approaches that
combine machine learning with MHs regarding the obtained solution quality. By
carefully designing prompts, we demonstrate that the output obtained from LLMs
can be used as problem knowledge, leading to improved results. Lastly, we
acknowledge LLMs' potential drawbacks and limitations and consider it essential
to examine them to advance this type of research further.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 15:23:46 GMT"
}
] | 1,716,940,800,000 | [
[
"Sartori",
"Camilo Chacón",
""
],
[
"Blum",
"Christian",
""
],
[
"Bistaffa",
"Filippo",
""
],
[
"Corominas",
"Guillem Rodríguez",
""
]
] |
2405.18300 | Kangyao Huang | Kangyao Huang, Di Guo, Xinyu Zhang, Xiangyang Ji, Huaping Liu | CompetEvo: Towards Morphological Evolution from Competition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training an agent to adapt to specific tasks through co-optimization of
morphology and control has attracted wide attention. However, whether an
optimal configuration and set of tactics exists for agents in a multi-agent
competition scenario remains a question that is difficult to answer
definitively. In this context, we propose competitive evolution (CompetEvo), which
co-evolves agents' designs and tactics in confrontation. We build arenas
consisting of three animals and their evolved derivatives, placing agents with
different morphologies in direct competition with each other. The results
reveal that our method enables agents to evolve a more suitable design and
strategy for fighting compared to fixed-morph agents, allowing them to obtain
advantages in combat scenarios. Moreover, we demonstrate the amazing and
impressive behaviors that emerge when confrontations are conducted under
asymmetrical morphs.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 15:53:02 GMT"
}
] | 1,716,940,800,000 | [
[
"Huang",
"Kangyao",
""
],
[
"Guo",
"Di",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Ji",
"Xiangyang",
""
],
[
"Liu",
"Huaping",
""
]
] |
2405.18346 | Anjanava Biswas | Anjanava Biswas, Wrick Talukdar | Intelligent Clinical Documentation: Harnessing Generative AI for
Patient-Centric Clinical Note Generation | 15 pages, 7 figures | International Journal of Innovative Science and Research
Technology: Vol. 9 (2024): No. 5, 994-1008 | 10.38124/ijisrt/IJISRT24MAY1483 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Comprehensive clinical documentation is crucial for effective healthcare
delivery, yet it poses a significant burden on healthcare professionals,
leading to burnout, increased medical errors, and compromised patient safety.
This paper explores the potential of generative AI (Artificial Intelligence) to
streamline the clinical documentation process, specifically focusing on
generating SOAP (Subjective, Objective, Assessment, Plan) and BIRP (Behavior,
Intervention, Response, Plan) notes. We present a case study demonstrating the
application of natural language processing (NLP) and automatic speech
recognition (ASR) technologies to transcribe patient-clinician interactions,
coupled with advanced prompting techniques to generate draft clinical notes
using large language models (LLMs). The study highlights the benefits of this
approach, including time savings, improved documentation quality, and enhanced
patient-centered care. Additionally, we discuss ethical considerations, such as
maintaining patient confidentiality and addressing model biases, underscoring
the need for responsible deployment of generative AI in healthcare settings.
The findings suggest that generative AI has the potential to revolutionize
clinical documentation practices, alleviating administrative burdens and
enabling healthcare professionals to focus more on direct patient care.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 16:43:41 GMT"
}
] | 1,716,940,800,000 | [
[
"Biswas",
"Anjanava",
""
],
[
"Talukdar",
"Wrick",
""
]
] |
2405.18377 | Anthony Sarah | Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Sairam
Sundaresan | LLaMA-NAS: Efficient Neural Architecture Search for Large Language
Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The abilities of modern large language models (LLMs) in solving natural
language processing, complex reasoning, sentiment analysis and other tasks have
been extraordinary which has prompted their extensive adoption. Unfortunately,
these abilities come with very high memory and computational costs which
precludes the use of LLMs on most hardware platforms. To mitigate this, we
propose an effective method of finding Pareto-optimal network architectures
based on LLaMA2-7B using one-shot NAS. In particular, we fine-tune LLaMA2-7B
only once and then apply genetic algorithm-based search to find smaller, less
computationally complex network architectures. We show that, for certain
standard benchmark tasks, the pre-trained LLaMA2-7B network is unnecessarily
large and complex. More specifically, we demonstrate a 1.5x reduction in model
size and 1.3x speedup in throughput for certain tasks with negligible drop in
accuracy. In addition to finding smaller, higher-performing network
architectures, our method does so more effectively and efficiently than certain
pruning or sparsification techniques. Finally, we demonstrate how quantization
is complementary to our method and that the size and complexity of the networks
we find can be further decreased using quantization. We believe that our work
provides a way to automatically create LLMs which can be used on less expensive
and more readily available hardware platforms.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 17:20:44 GMT"
}
] | 1,716,940,800,000 | [
[
"Sarah",
"Anthony",
""
],
[
"Sridhar",
"Sharath Nittur",
""
],
[
"Szankin",
"Maciej",
""
],
[
"Sundaresan",
"Sairam",
""
]
] |
2405.18510 | Willem van der Maden | James Derek Lomas, Willem van der Maden, Sohhom Bandyopadhyay,
Giovanni Lion, Nirmal Patel, Gyanesh Jain, Yanna Litowsky, Haian Xue, Pieter
Desmet | Improved Emotional Alignment of AI and Humans: Human Ratings of Emotions
Expressed by Stable Diffusion v1, DALL-E 2, and DALL-E 3 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative AI systems are increasingly capable of expressing emotions via
text and imagery. Effective emotional expression will likely play a major role
in the efficacy of AI systems -- particularly those designed to support human
mental health and wellbeing. This motivates our present research to better
understand the alignment of AI expressed emotions with the human perception of
emotions. When AI tries to express a particular emotion, how might we assess
whether they are successful? To answer this question, we designed a survey to
measure the alignment between emotions expressed by generative AI and human
perceptions. Three generative image models (DALL-E 2, DALL-E 3 and Stable
Diffusion v1) were used to generate 240 examples of images, each of which was
based on a prompt designed to express five positive and five negative emotions
across both humans and robots. 24 participants recruited from the Prolific
website rated the alignment of AI-generated emotional expressions with a text
prompt used to generate the emotion (i.e., "A robot expressing the emotion
amusement"). The results of our evaluation suggest that generative AI models
are indeed capable of producing emotional expressions that are well-aligned
with a range of human emotions; however, we show that the alignment
significantly depends upon the AI model used and the emotion itself. We analyze
variations in the performance of these systems to identify gaps for future
improvement. We conclude with a discussion of the implications for future AI
systems designed to support mental health and wellbeing.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 18:26:57 GMT"
}
] | 1,717,027,200,000 | [
[
"Lomas",
"James Derek",
""
],
[
"van der Maden",
"Willem",
""
],
[
"Bandyopadhyay",
"Sohhom",
""
],
[
"Lion",
"Giovanni",
""
],
[
"Patel",
"Nirmal",
""
],
[
"Jain",
"Gyanesh",
""
],
[
"Litowsky",
"Yanna",
""
],
[
"Xue",
"Haian",
""
],
[
"Desmet",
"Pieter",
""
]
] |
2405.18553 | Stephen Obadinma | Stephen Obadinma, Alia Lachana, Maia Norman, Jocelyn Rankin, Joanna
Yu, Xiaodan Zhu, Darren Mastropaolo, Deval Pandya, Roxana Sultan, Elham
Dolatabadi | The FAIIR Tool: A Conversational AI Agent Assistant for Youth Mental
Health Service Provision | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The world's healthcare systems and mental health agencies face both a growing
demand for youth mental health services and the simultaneous challenge of
limited resources. Given these constraints, this work presents our experience
in the creation and evaluation of the FAIIR (Frontline Assistant: Issue
Identification and Recommendation) tool, an ensemble of domain-adapted and
fine-tuned transformer models, leveraging natural language processing to
identify issues that youth may be experiencing. We explore the technical
development, performance, and validation processes leveraged for the FAIIR tool
in application to situations of frontline crisis response via Kids Help Phone.
Frontline Crisis Responders assign an issue tag from a defined list following
each conversation. Assisting with the identification of issues of relevance
helps reduce the burden on CRs, ensuring that appropriate resources can be
provided and that active rescues and mandatory reporting can take place in
critical situations requiring immediate de-escalation.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 19:54:46 GMT"
}
] | 1,717,027,200,000 | [
[
"Obadinma",
"Stephen",
""
],
[
"Lachana",
"Alia",
""
],
[
"Norman",
"Maia",
""
],
[
"Rankin",
"Jocelyn",
""
],
[
"Yu",
"Joanna",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Mastropaolo",
"Darren",
""
],
[
"Pandya",
"Deval",
""
],
[
"Sultan",
"Roxana",
""
],
[
"Dolatabadi",
"Elham",
""
]
] |
2405.18581 | Hyunjin Seo | Hyunjin Seo, Taewon Kim, June Yong Yang, Eunho Yang | Unleashing the Potential of Text-attributed Graphs: Automatic Relation
Decomposition via Large Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in text-attributed graphs (TAGs) have significantly
improved the quality of node features by using the textual modeling
capabilities of language models. Despite this success, utilizing text
attributes to enhance the predefined graph structure remains largely
unexplored. Our extensive analysis reveals that conventional edges on TAGs,
treated as a single relation (e.g., hyperlinks) in previous literature,
actually encompass mixed semantics (e.g., "advised by" and "participates in").
This simplification hinders the representation learning process of Graph Neural
Networks (GNNs) on downstream tasks, even when integrated with advanced node
features. In contrast, we discover that decomposing these edges into distinct
semantic relations significantly enhances the performance of GNNs. Despite
this, manually identifying edges and labeling them with the corresponding semantic
relations is labor-intensive, often requiring domain expertise. To this end, we
introduce RoSE (Relation-oriented Semantic Edge-decomposition), a novel
framework that leverages the capability of Large Language Models (LLMs) to
decompose the graph structure by analyzing raw text attributes - in a fully
automated manner. RoSE operates in two stages: (1) identifying meaningful
relations using an LLM-based generator and discriminator, and (2) categorizing
each edge into corresponding relations by analyzing textual contents associated
with connected nodes via an LLM-based decomposer. Extensive experiments
demonstrate that our model-agnostic framework significantly enhances node
classification performance across various datasets, with improvements of up to
16% on the Wisconsin dataset.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 20:54:47 GMT"
}
] | 1,717,027,200,000 | [
[
"Seo",
"Hyunjin",
""
],
[
"Kim",
"Taewon",
""
],
[
"Yang",
"June Yong",
""
],
[
"Yang",
"Eunho",
""
]
] |
2405.18602 | Tae-Wook Kim | Tae-wook Kim, Han-jin Lee, Hyeon-Jin Jung, Ji-Woong Yang, Ellen J.
Hong | SST-GCN: The Sequential based Spatio-Temporal Graph Convolutional
networks for Minute-level and Road-level Traffic Accident Risk Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic accidents are recognized as a major social issue worldwide, causing
numerous injuries and significant costs annually. Consequently, methods for
predicting and preventing traffic accidents have been researched for many
years. With advancements in the field of artificial intelligence, various
studies have applied Machine Learning and Deep Learning techniques to traffic
accident prediction. Modern traffic conditions change rapidly by the minute,
and these changes vary significantly across different roads. In other words,
the risk of traffic accidents changes minute by minute in various patterns for
each road. Therefore, it is desirable to predict traffic accident risk at the
Minute-Level and Road-Level. However, because roads have close and complex
relationships with adjacent roads, research on predicting traffic accidents at
the Minute-Level and Road-Level is challenging. Thus, it is essential to build
a model that can reflect the spatial and temporal characteristics of roads for
traffic accident prediction. Consequently, recent attempts have been made to
use Graph Convolutional Networks to capture the spatial characteristics of
roads and Recurrent Neural Networks to capture their temporal characteristics
for predicting traffic accident risk. This paper proposes the Sequential based
Spatio-Temporal Graph Convolutional Networks (SST-GCN), which combines GCN and
LSTM, to predict traffic accidents at the Minute-Level and Road-Level using a
road dataset constructed in Seoul, the capital of South Korea. Experiments have
demonstrated that SST-GCN outperforms other state-of-the-art models in
Minute-Level predictions.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 21:33:18 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2024 08:44:05 GMT"
}
] | 1,717,459,200,000 | [
[
"Kim",
"Tae-wook",
""
],
[
"Lee",
"Han-jin",
""
],
[
"Jung",
"Hyeon-Jin",
""
],
[
"Yang",
"Ji-Woong",
""
],
[
"Hong",
"Ellen J.",
""
]
] |
2405.18663 | Lianlei Shan | Lianlei Shan, Wenzhang Zhou, Wei Li and Xingyu Ding | Lifelong Learning and Selective Forgetting via Contrastive Strategy | 10 pages, 5 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifelong learning aims to train a model with good performance for new tasks
while retaining the capacity of previous tasks. However, some practical
scenarios require the system to forget undesirable knowledge due to privacy
issues, which is called selective forgetting. The joint task of the two is
dubbed Learning with Selective Forgetting (LSF). In this paper, we propose a
new framework based on a contrastive strategy for LSF. Specifically, for the
preserved classes (tasks), we make features extracted from different samples
within the same class compact. For the deleted classes, we make the features
from different samples of the same class dispersed and irregular, i.e., the
network has no regular response to samples from a specific deleted class, as if
it had never been trained at all. Through maintaining or disturbing the feature
distribution, the forgetting and memory of different classes can be made
independent of each other. Experiments are conducted on four benchmark
datasets, and our method achieves new state-of-the-art results.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 23:57:48 GMT"
}
] | 1,717,027,200,000 | [
[
"Shan",
"Lianlei",
""
],
[
"Zhou",
"Wenzhang",
""
],
[
"Li",
"Wei",
""
],
[
"Ding",
"Xingyu",
""
]
] |
2405.18733 | Noah Adhikari | Noah Adhikari and Allen Gu | Efficient Learning in Chinese Checkers: Comparing Parameter Sharing in
Multi-Agent Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We show that multi-agent reinforcement learning (MARL) with full parameter
sharing outperforms independent and partially shared architectures in the
competitive perfect-information homogenous game of Chinese Checkers. To run our
experiments, we develop a new MARL environment: variable-size, six-player
Chinese Checkers. This custom environment was developed in PettingZoo and
supports all traditional rules of the game including chaining jumps. This is,
to the best of our knowledge, the first implementation of Chinese Checkers that
remains faithful to the true game.
Chinese Checkers is difficult to learn due to its large branching factor and
potentially infinite horizons. We borrow the concept of branching actions
(submoves) from complex action spaces in other RL domains, where a submove may
not end a player's turn immediately. This drastically reduces the
dimensionality of the action space. Our observation space is inspired by
AlphaGo with many binary game boards stacked in a 3D array to encode
information.
The PettingZoo environment, training and evaluation logic, and analysis
scripts can be found on
\href{https://github.com/noahadhikari/pettingzoo-chinese-checkers}{Github}.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 03:27:30 GMT"
}
] | 1,717,027,200,000 | [
[
"Adhikari",
"Noah",
""
],
[
"Gu",
"Allen",
""
]
] |
2405.18823 | Hallah Butt | Hallah Shahid Butt, Benjamin Sch\"afer | Why Reinforcement Learning in Energy Systems Needs Explanations | null | ExEn Workshop 2024 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With economic development, the complexity of infrastructure has increased
drastically. Similarly, with the shift from fossil fuels to renewable sources
of energy, there is a dire need for such systems that not only predict and
forecast with accuracy but also help in understanding the process of
predictions. Artificial intelligence and machine learning techniques have
helped in finding well-performing solutions to different problems in the
energy sector. However, the usage of state-of-the-art techniques like
reinforcement learning is not surprisingly convincing. This paper discusses the
application of reinforcement techniques in energy systems and how explanations
of these models can be helpful.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 07:09:00 GMT"
}
] | 1,717,027,200,000 | [
[
"Butt",
"Hallah Shahid",
""
],
[
"Schäfer",
"Benjamin",
""
]
] |
2405.18867 | Abdul Aziz Ahamed Bahrudeen | Abdul Aziz A.B, A.B Abdul Rahim | Topological Perspectives on Optimal Multimodal Embedding Spaces | 10 pages, 17 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent strides in multimodal model development have ignited a paradigm shift
in the realm of text-to-image generation. Among these advancements, CLIP stands
out as a remarkable achievement: a sophisticated autoencoder adept at
encoding both textual and visual information within a unified latent space.
This paper delves into a comparative analysis between CLIP and its recent
counterpart, CLOOB. To unravel the intricate distinctions within the embedding
spaces crafted by these models, we employ topological data analysis. Our
approach encompasses a comprehensive examination of the modality gap drivers,
the clustering structures existing across both high and low dimensions, and the
pivotal role that dimension collapse plays in shaping their respective
embedding spaces. Empirical experiments substantiate the implications of our
analyses on downstream performance across various contextual scenarios. Through
this investigation, we aim to shed light on the nuanced intricacies that
underlie the comparative efficacy of CLIP and CLOOB, offering insights into
their respective strengths and weaknesses, and providing a foundation for
further refinement and advancement in multimodal model research.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 08:28:23 GMT"
}
] | 1,717,027,200,000 | [
[
"B",
"Abdul Aziz A.",
""
],
[
"Rahim",
"A. B Abdul",
""
]
] |
2405.18875 | Tom Bewley | Tom Bewley, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni,
Manuela Veloso | Counterfactual Metarules for Local and Global Recourse | Accepted at ICML 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce T-CREx, a novel model-agnostic method for local and global
counterfactual explanation (CE), which summarises recourse options for both
individuals and groups in the form of human-readable rules. It leverages
tree-based surrogate models to learn the counterfactual rules, alongside
'metarules' denoting their regions of optimality, providing both a global
analysis of model behaviour and diverse recourse options for users. Experiments
indicate that T-CREx achieves superior aggregate performance over existing
rule-based baselines on a range of CE desiderata, while being orders of
magnitude faster to run.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 08:35:17 GMT"
}
] | 1,717,027,200,000 | [
[
"Bewley",
"Tom",
""
],
[
"Amoukou",
"Salim I.",
""
],
[
"Mishra",
"Saumitra",
""
],
[
"Magazzeni",
"Daniele",
""
],
[
"Veloso",
"Manuela",
""
]
] |
2405.18910 | Yuxuan Liang | Huaiwu Zhang, Yutong Xia, Siru Zhong, Kun Wang, Zekun Tong, Qingsong
Wen, Roger Zimmermann, Yuxuan Liang | Predicting Parking Availability in Singapore with Cross-Domain Data: A
New Dataset and A Data-Driven Approach | Accepted by IJCAI 2024 (Multi-Year Track On AI And Social Good with
~20% acceptance rate) | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | The increasing number of vehicles highlights the need for efficient parking
space management. Predicting real-time Parking Availability (PA) can help
mitigate traffic congestion and the corresponding social problems, which is a
pressing issue in densely populated cities like Singapore. In this study, we
aim to collectively predict future PA across Singapore with complex factors
from various domains. The contributions in this paper are listed as follows:
(1) A New Dataset: We introduce the \texttt{SINPA} dataset, containing a year's
worth of PA data from 1,687 parking lots in Singapore, enriched with various
spatial and temporal factors. (2) A Data-Driven Approach: We present DeepPA, a
novel deep-learning framework, to collectively and efficiently predict future
PA across thousands of parking lots. (3) Extensive Experiments and Deployment:
DeepPA demonstrates a 9.2% reduction in prediction error for up to 3-hour
forecasts compared to existing advanced models. Furthermore, we implement
DeepPA in a practical web-based platform to provide real-time PA predictions to
aid drivers and inform urban planning for the governors in Singapore. We
release the dataset and source code at https://github.com/yoshall/SINPA.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 09:11:51 GMT"
}
] | 1,717,027,200,000 | [
[
"Zhang",
"Huaiwu",
""
],
[
"Xia",
"Yutong",
""
],
[
"Zhong",
"Siru",
""
],
[
"Wang",
"Kun",
""
],
[
"Tong",
"Zekun",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Zimmermann",
"Roger",
""
],
[
"Liang",
"Yuxuan",
""
]
] |
2405.19012 | Rongyu Zhang | Gaole Dai, Cheng-Ching Tseng, Qingpo Wuwu, Rongyu Zhang, Shaokang
Wang, Ming Lu, Tiejun Huang, Yu Zhou, Ali Ata Tuz, Matthias Gunzer, Jianxu
Chen, Shanghang Zhang | Implicit Neural Image Field for Biological Microscopy Image Compression | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid pace of innovation in biological microscopy imaging has led to
large images, putting pressure on data storage and impeding efficient sharing,
management, and visualization. This necessitates the development of efficient
compression solutions. Traditional CODEC methods struggle to adapt to the
diverse bioimaging data and often suffer from sub-optimal compression. In this
study, we propose an adaptive compression workflow based on Implicit Neural
Representation (INR). This approach permits application-specific compression
objectives, capable of compressing images of any shape and arbitrary pixel-wise
decompression. We demonstrated on a wide range of microscopy images from real
applications that our workflow not only achieved high, controllable compression
ratios (e.g., 512x) but also preserved detailed information critical for
downstream analysis.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 11:51:33 GMT"
}
] | 1,717,027,200,000 | [
[
"Dai",
"Gaole",
""
],
[
"Tseng",
"Cheng-Ching",
""
],
[
"Wuwu",
"Qingpo",
""
],
[
"Zhang",
"Rongyu",
""
],
[
"Wang",
"Shaokang",
""
],
[
"Lu",
"Ming",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Zhou",
"Yu",
""
],
[
"Tuz",
"Ali Ata",
""
],
[
"Gunzer",
"Matthias",
""
],
[
"Chen",
"Jianxu",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
2405.19132 | Andreas Scholl | Andreas Scholl, Daniel Schiffner and Natalie Kiesler | Analyzing Chat Protocols of Novice Programmers Solving Introductory
Programming Tasks with ChatGPT | Accepted at DELFI 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) have taken the world by storm, and students are
assumed to use related tools at a great scale. In this research paper we aim to
gain an understanding of how introductory programming students chat with LLMs
and related tools, e.g., ChatGPT-3.5. To address this goal, computing students
at a large German university were motivated to solve programming exercises with
the assistance of ChatGPT as part of their weekly introductory course
exercises. Then students (n=213) submitted their chat protocols (with 2335
prompts in sum) as data basis for this analysis. The data was analyzed w.r.t.
the prompts, frequencies, the chats' progress, contents, and other use patterns,
which revealed a great variety of interactions, both potentially supportive and
concerning. Learning about students' interactions with ChatGPT will help inform
and align teaching practices and instructions for future introductory
programming courses in higher education.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 14:38:32 GMT"
}
] | 1,717,027,200,000 | [
[
"Scholl",
"Andreas",
""
],
[
"Schiffner",
"Daniel",
""
],
[
"Kiesler",
"Natalie",
""
]
] |
2405.19184 | Yufan Kang | Yufan Kang, Rongsheng Zhang, Wei Shao, Flora D. Salim, Jeffrey Chan | Promoting Two-sided Fairness in Dynamic Vehicle Routing Problem | null | null | 10.1145/3638529.3654207 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Vehicle Routing Problem (DVRP), is an extension of the classic
Vehicle Routing Problem (VRP), which is a fundamental problem in logistics and
transportation. Typically, DVRPs involve two stakeholders: service providers
that deliver services to customers and customers who raise requests from
different locations. Many real-world applications can be formulated as DVRP
such as ridesharing and non-compliance capture. Apart from original objectives
like optimising total utility or efficiency, DVRP should also consider fairness
for all parties. Unfairness can induce service providers and customers to give
up on the systems, leading to negative financial and social impacts. However,
most existing DVRP-related applications focus on improving fairness from a
single side, and there have been few works considering two-sided fairness and
utility optimisation concurrently. To this end, we propose a novel framework, a
Two-sided Fairness-aware Genetic Algorithm (named 2FairGA), which expands the
genetic algorithm from the original objective solely focusing on utility to
multi-objectives that incorporate two-sided fairness. Subsequently, the impact
of injecting two fairness definitions into the utility-focused model and the
correlation between any pair of the three objectives are explored. Extensive
experiments demonstrate the superiority of our proposed framework compared to
the state-of-the-art.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 15:24:28 GMT"
}
] | 1,717,027,200,000 | [
[
"Kang",
"Yufan",
""
],
[
"Zhang",
"Rongsheng",
""
],
[
"Shao",
"Wei",
""
],
[
"Salim",
"Flora D.",
""
],
[
"Chan",
"Jeffrey",
""
]
] |
2405.19229 | Stylianos Loukas Vasileiou | Stylianos Loukas Vasileiou, William Yeoh, Alessandro Previti, Tran Cao
Son | On Generating Monolithic and Model Reconciling Explanations in
Probabilistic Scenarios | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explanation generation frameworks aim to make AI systems' decisions
transparent and understandable to human users. However, generating explanations
in uncertain environments characterized by incomplete information and
probabilistic models remains a significant challenge. In this paper, we propose
a novel framework for generating probabilistic monolithic explanations and
model reconciling explanations. Monolithic explanations provide self-contained
reasons for an explanandum without considering the agent receiving the
explanation, while model reconciling explanations account for the knowledge of
the agent receiving the explanation. For monolithic explanations, our approach
integrates uncertainty by utilizing probabilistic logic to increase the
probability of the explanandum. For model reconciling explanations, we propose
a framework that extends the logic-based variant of the model reconciliation
problem to account for probabilistic human models, where the goal is to find
explanations that increase the probability of the explanandum while minimizing
conflicts between the explanation and the probabilistic human model. We
introduce explanatory gain and explanatory power as quantitative metrics to
assess the quality of these explanations. Further, we present algorithms that
exploit the duality between minimal correction sets and minimal unsatisfiable
sets to efficiently compute both types of explanations in probabilistic
contexts. Extensive experimental evaluations on various benchmarks demonstrate
the effectiveness and scalability of our approach in generating explanations
under uncertainty.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 16:07:31 GMT"
}
] | 1,717,027,200,000 | [
[
"Vasileiou",
"Stylianos Loukas",
""
],
[
"Yeoh",
"William",
""
],
[
"Previti",
"Alessandro",
""
],
[
"Son",
"Tran Cao",
""
]
] |
2405.19238 | Stylianos Loukas Vasileiou | Stylianos Loukas Vasileiou, William Yeoh | Explanation-based Belief Revision: Moving Beyond Minimalism to
Explanatory Understanding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In belief revision, agents typically modify their beliefs when they receive
some new piece of information that is in conflict with them. The guiding
principle behind most belief revision frameworks is that of minimalism, which
advocates minimal changes to existing beliefs. However, minimalism may not
necessarily capture the nuanced ways in which human agents reevaluate and
modify their beliefs. In contrast, the explanatory hypothesis indicates that
people are inherently driven to seek explanations for inconsistencies, thereby
striving for explanatory coherence rather than minimal changes when revising
beliefs. Our contribution in this paper is two-fold. Motivated by the
explanatory hypothesis, we first present a novel, yet simple belief revision
operator that, given a belief base and an explanation for an explanandum, it
revises the belief bases in a manner that preserves the explanandum and is not
necessarily minimal. We call this operator explanation-based belief revision.
Second, we conduct two human-subject studies to empirically validate our
approach and investigate belief revision behavior in real-world scenarios. Our
findings support the explanatory hypothesis and provide insights into the
strategies people employ when resolving inconsistencies.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 16:20:51 GMT"
}
] | 1,717,027,200,000 | [
[
"Vasileiou",
"Stylianos Loukas",
""
],
[
"Yeoh",
"William",
""
]
] |