id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2404.10646 | Niklas Strau{\ss} | Niklas Strau{\ss}, Lukas Rottkamp, Sebatian Schmoll, Matthias Schubert | Efficient Parking Search using Shared Fleet Data | Long Version; published at 2021 22nd IEEE International Conference on
Mobile Data Management (MDM) | 2021 22nd IEEE International Conference on Mobile Data Management
(MDM) | 10.1109/MDM52706.2021.00026 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding an available on-street parking spot is a relevant problem of
day-to-day life. In recent years, cities such as Melbourne and San Francisco
deployed sensors that provide real-time information about the occupation of
parking spots. Finding a free parking spot in such a smart environment can be
modeled and solved as a Markov decision process (MDP). The problem has to
consider uncertainty as available parking spots might not remain available
until arrival due to other vehicles also claiming spots in the meantime.
Knowing the parking intention of every vehicle in the environment would
eliminate this uncertainty. Unfortunately, it does not currently seem realistic
to have such data from all vehicles. In contrast, acquiring data from a subset
of vehicles or a vehicle fleet appears feasible and has the potential to reduce
uncertainty.
In this paper, we examine the question of how useful sharing data within a
vehicle fleet might be for the search times of particular drivers. We use fleet
data to better estimate the availability of parking spots at arrival. Since
optimal solutions for large scenarios are infeasible, we base our method on
approximate solutions, which have been shown to perform well in single-agent
settings. Our experiments are conducted on a simulation using real-world and
synthetic data from the city of Melbourne. The results indicate that fleet data
can significantly reduce search times for an available parking spot.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 15:20:28 GMT"
}
] | 1,713,312,000,000 | [
[
"Strauß",
"Niklas",
""
],
[
"Rottkamp",
"Lukas",
""
],
[
"Schmoll",
"Sebatian",
""
],
[
"Schubert",
"Matthias",
""
]
] |
2404.10683 | Niklas Strau{\ss} | David Winkel, Niklas Strau{\ss}, Matthias Schubert, Thomas Seidl | Simplex Decomposition for Portfolio Allocation Constraints in
Reinforcement Learning | null | ECAI 2023 - 26th European Conference on Artificial Intelligence,
September 30 - October 4, 2023, Krakow, Poland | 10.3233/FAIA230573 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Portfolio optimization tasks describe sequential decision problems in which
the investor's wealth is distributed across a set of assets. Allocation
constraints are used to enforce minimal or maximal investments into particular
subsets of assets to control for objectives such as limiting the portfolio's
exposure to a certain sector due to environmental concerns. Although methods
for constrained Reinforcement Learning (CRL) can optimize policies while
considering allocation constraints, it can be observed that these general
methods yield suboptimal results. In this paper, we propose a novel approach to
handle allocation constraints based on a decomposition of the constrained action
space into a set of unconstrained allocation problems. In particular, we
examine this approach for the case of two constraints. For example, an investor
may wish to invest at least a certain percentage of the portfolio into green
technologies while limiting the investment in the fossil energy sector. We show
that the action space of the task is equivalent to the decomposed action space,
and introduce a new reinforcement learning (RL) approach CAOSD, which is built
on top of the decomposition. The experimental evaluation on real-world
Nasdaq-100 data demonstrates that our approach consistently outperforms
state-of-the-art CRL benchmarks for portfolio optimization.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 16:00:59 GMT"
}
] | 1,713,312,000,000 | [
[
"Winkel",
"David",
""
],
[
"Strauß",
"Niklas",
""
],
[
"Schubert",
"Matthias",
""
],
[
"Seidl",
"Thomas",
""
]
] |
2404.10731 | Bowen Xu | Bowen Xu | What is Meant by AGI? On the Definition of Artificial General
Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper aims to establish a consensus on AGI's definition. General
intelligence refers to the adaptation to open environments according to certain
principles using limited resources. It emphasizes that adaptation or learning
is an indispensable property of intelligence, and places the controversial part
within the principles of intelligence, which can be described from different
perspectives.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 17:03:50 GMT"
}
] | 1,713,312,000,000 | [
[
"Xu",
"Bowen",
""
]
] |
2404.10740 | Caroline Wang | Caroline Wang, Arrasy Rahman, Ishan Durugkar, Elad Liebman, Peter
Stone | N-Agent Ad Hoc Teamwork | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current approaches to learning cooperative behaviors in multi-agent settings
assume relatively restrictive settings. In standard fully cooperative
multi-agent reinforcement learning, the learning algorithm controls
\textit{all} agents in the scenario, while in ad hoc teamwork, the learning
algorithm usually assumes control over only a \textit{single} agent in the
scenario. However, many cooperative settings in the real world are much less
restrictive. For example, in an autonomous driving scenario, a company might
train its cars with the same learning algorithm, yet once on the road, these
cars must cooperate with cars from another company. Towards generalizing the
class of scenarios that cooperative learning methods can address, we introduce
$N$-agent ad hoc teamwork, in which a set of autonomous agents must interact
and cooperate with dynamically varying numbers and types of teammates at
evaluation time. This paper formalizes the problem, and proposes the
\textit{Policy Optimization with Agent Modelling} (POAM) algorithm. POAM is a
policy gradient, multi-agent reinforcement learning approach to the NAHT
problem that enables adaptation to diverse teammate behaviors by learning
representations of teammate behaviors. Empirical evaluation on StarCraft II
tasks shows that POAM improves cooperative task returns compared to baseline
approaches, and enables out-of-distribution generalization to unseen teammates.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 17:13:08 GMT"
}
] | 1,713,312,000,000 | [
[
"Wang",
"Caroline",
""
],
[
"Rahman",
"Arrasy",
""
],
[
"Durugkar",
"Ishan",
""
],
[
"Liebman",
"Elad",
""
],
[
"Stone",
"Peter",
""
]
] |
2404.10889 | Erim Yanik | Erim Yanik and Xavier Intes and Suvranu De | Cognitive-Motor Integration in Assessing Bimanual Motor Skills | 12 pages, 3 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate assessment of bimanual motor skills is essential across various
professions, yet traditional methods often rely on subjective assessments or
focus solely on motor actions, overlooking the integral role of cognitive
processes. This study introduces a novel approach by leveraging deep neural
networks (DNNs) to analyze and integrate both cognitive decision-making and
motor execution. We tested this methodology by assessing laparoscopic surgery
skills within the Fundamentals of Laparoscopic Surgery program, which is a
prerequisite for general surgery certification. Utilizing video capture of
motor actions and non-invasive functional near-infrared spectroscopy (fNIRS)
for measuring neural activations, our approach precisely classifies subjects by
expertise level and predicts FLS behavioral performance scores, significantly
surpassing traditional single-modality assessments.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 20:20:23 GMT"
}
] | 1,713,398,400,000 | [
[
"Yanik",
"Erim",
""
],
[
"Intes",
"Xavier",
""
],
[
"De",
"Suvranu",
""
]
] |
2404.10901 | Ming Cheng | Ziyi Zhou, Ming Cheng, Yanjun Cui, Xingjian Diao, Zhaorui Ma | CrossGP: Cross-Day Glucose Prediction Excluding Physiological
Information | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing number of diabetic patients is a serious issue in society
today, with significant negative impacts on people's health and on the
country's financial expenditures. Because diabetes may develop into serious
complications, early glucose prediction for diabetic patients is
necessary for timely medical treatment. Existing glucose prediction methods
typically utilize patients' private data (e.g. age, gender, ethnicity) and
physiological parameters (e.g. blood pressure, heart rate) as reference
features for glucose prediction, which inevitably leads to privacy protection
concerns. Moreover, these models generally focus on either long-term
(monthly-based) or short-term (minute-based) predictions. Long-term prediction
methods are generally inaccurate because of the external uncertainties that can
greatly affect the glucose values, while short-term ones fail to provide timely
medical guidance. Based on the above issues, we propose CrossGP, a novel
machine-learning framework for cross-day glucose prediction solely based on the
patient's external activities without involving any physiological parameters.
Meanwhile, we implement three baseline models for comparison. Extensive
experiments on Anderson's dataset strongly demonstrate the superior performance
of CrossGP and prove its potential for future real-life applications.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 20:40:59 GMT"
}
] | 1,713,398,400,000 | [
[
"Zhou",
"Ziyi",
""
],
[
"Cheng",
"Ming",
""
],
[
"Cui",
"Yanjun",
""
],
[
"Diao",
"Xingjian",
""
],
[
"Ma",
"Zhaorui",
""
]
] |
2404.10907 | Abhishek Dalvi | Abhishek Dalvi, Neil Ashtekar, Vasant Honavar | Causal Effect Estimation Using Random Hyperplane Tessellations | At CLeaR 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matching is one of the simplest approaches for estimating causal effects from
observational data. Matching techniques compare the observed outcomes across
pairs of individuals with similar covariate values but different treatment
statuses in order to estimate causal effects. However, traditional matching
techniques are unreliable given high-dimensional covariates due to the infamous
curse of dimensionality. To overcome this challenge, we propose a simple, fast,
yet highly effective approach to matching using Random Hyperplane Tessellations
(RHPT). First, we prove that the RHPT representation is an approximate
balancing score -- thus maintaining the strong ignorability assumption -- and
provide empirical evidence for this claim. Second, we report results of
extensive experiments showing that matching using RHPT outperforms traditional
matching techniques and is competitive with state-of-the-art deep learning
methods for causal effect estimation. In addition, RHPT avoids the need for
computationally expensive training of deep neural networks.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 20:53:58 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Apr 2024 20:30:35 GMT"
}
] | 1,713,830,400,000 | [
[
"Dalvi",
"Abhishek",
""
],
[
"Ashtekar",
"Neil",
""
],
[
"Honavar",
"Vasant",
""
]
] |
2404.11027 | Guangran Cheng | Guangran Cheng, Chuheng Zhang, Wenzhe Cai, Li Zhao, Changyin Sun and
Jiang Bian | Empowering Large Language Models on Robotic Manipulation with Affordance
Prompting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While large language models (LLMs) are successful in completing various
language processing tasks, they often fail to interact with the physical world
because they cannot properly generate control sequences. We find that the main reason is that
LLMs are not grounded in the physical world. Existing LLM-based approaches
circumvent this problem by relying on additional pre-defined skills or
pre-trained sub-policies, making it hard to adapt to new tasks. In contrast, we
aim to address this problem and explore the possibility to prompt pre-trained
LLMs to accomplish a series of robotic manipulation tasks in a training-free
paradigm. Accordingly, we propose a framework called LLM+A(ffordance) where the
LLM serves as both the sub-task planner (that generates high-level plans) and
the motion controller (that generates low-level control sequences). To ground
these plans and control sequences on the physical world, we develop the
affordance prompting technique that stimulates the LLM to 1) predict the
consequences of generated plans and 2) generate affordance values for relevant
objects. Empirically, we evaluate the effectiveness of LLM+A in various
language-conditioned robotic manipulation tasks; the results show that our approach
substantially improves performance by enhancing the feasibility of generated
plans and control and can easily generalize to different environments.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 03:06:32 GMT"
}
] | 1,713,398,400,000 | [
[
"Cheng",
"Guangran",
""
],
[
"Zhang",
"Chuheng",
""
],
[
"Cai",
"Wenzhe",
""
],
[
"Zhao",
"Li",
""
],
[
"Sun",
"Changyin",
""
],
[
"Bian",
"Jiang",
""
]
] |
2404.11122 | Pierre Lepagnol | Pierre Lepagnol (LISN), Thomas Gerald (LISN), Sahar Ghannay (LISN),
Christophe Servan (STL, ILES), Sophie Rosset (LISN) | Small Language Models are Good Too: An Empirical Study of Zero-Shot
Classification | null | LREC-COLING 2024, May 2024, TURIN, Italy | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study is part of the debate on the efficiency of large versus small
language models for text classification by prompting. We assess the performance
of small language models in zero-shot text classification, challenging the
prevailing dominance of large models. Across 15 datasets, our investigation
benchmarks language models from 77M to 40B parameters using different
architectures and scoring functions. Our findings reveal that small models can
effectively classify texts, getting on par with or surpassing their larger
counterparts. We developed and shared a comprehensive open-source repository
that encapsulates our methodologies. This research underscores the notion that
bigger isn't always better, suggesting that resource-efficient small models may
offer viable solutions for specific data classification challenges.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 07:10:28 GMT"
}
] | 1,713,398,400,000 | [
[
"Lepagnol",
"Pierre",
"",
"LISN"
],
[
"Gerald",
"Thomas",
"",
"LISN"
],
[
"Ghannay",
"Sahar",
"",
"LISN"
],
[
"Servan",
"Christophe",
"",
"STL, ILES"
],
[
"Rosset",
"Sophie",
"",
"LISN"
]
] |
2404.11160 | El Hassane Ettifouri | Jessica L\'opez Espejel and Mahaman Sanoussi Yahaya Alassan and
Merieme Bouhandi and Walid Dahhane and El Hassane Ettifouri | Low-Cost Language Models: Survey and Performance Evaluation on Python
Code Generation | Under review at Elsevier's Engineering Applications of Artificial
Intelligence | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) have become the go-to solution for many Natural
Language Processing (NLP) tasks due to their ability to tackle various problems
and produce high-quality results. Specifically, they are increasingly used to
automatically generate code, easing the burden on developers by handling
repetitive tasks. However, this improvement in quality has led to high
computational and memory demands, making LLMs inaccessible to users with
limited resources. In this paper, we focus on Central Processing Unit
(CPU)-compatible models and conduct a thorough semi-manual evaluation of their
strengths and weaknesses in generating Python code. We enhance their
performance by introducing a Chain-of-Thought prompt that guides the model in
problem-solving. Additionally, we propose a dataset of 60 programming problems
with varying difficulty levels for evaluation purposes. Our assessment also
includes testing these models on two state-of-the-art datasets: HumanEval and
EvalPlus. We commit to sharing our dataset and experimental results publicly to
ensure transparency.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 08:16:48 GMT"
}
] | 1,713,398,400,000 | [
[
"Espejel",
"Jessica López",
""
],
[
"Alassan",
"Mahaman Sanoussi Yahaya",
""
],
[
"Bouhandi",
"Merieme",
""
],
[
"Dahhane",
"Walid",
""
],
[
"Ettifouri",
"El Hassane",
""
]
] |
2404.11208 | Nils Ole Breuer | Nils Ole Breuer, Andreas Sauter, Majid Mohammadi, and Erman Acar | CAGE: Causality-Aware Shapley Value for Global Explanations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As Artificial Intelligence (AI) is having more influence on our everyday
lives, it becomes important that AI-based decisions are transparent and
explainable. As a consequence, the field of eXplainable AI (or XAI) has become
popular in recent years. One way to explain AI models is to elucidate the
predictive importance of the input features for the AI model in general, also
referred to as global explanations. Inspired by cooperative game theory,
Shapley values offer a convenient way for quantifying the feature importance as
explanations. However, many methods based on Shapley values are built on the
assumption of feature independence and often overlook causal relations of the
features which could impact their importance for the ML model. Inspired by
studies of explanations at the local level, we propose CAGE (Causally-Aware
Shapley Values for Global Explanations). In particular, we introduce a novel
sampling procedure for out-coalition features that respects the causal
relations of the input features. We derive a practical approach that
incorporates causal knowledge into global explanation and offers the
possibility to interpret the predictive feature importance considering their
causal relation. We evaluate our method on synthetic data and real-world data.
The results suggest that the explanations from our approach are not only more
intuitive but also more faithful than those of previous global explanation
methods.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 09:43:54 GMT"
}
] | 1,713,398,400,000 | [
[
"Breuer",
"Nils Ole",
""
],
[
"Sauter",
"Andreas",
""
],
[
"Mohammadi",
"Majid",
""
],
[
"Acar",
"Erman",
""
]
] |
2404.11290 | Hong Qian | Shuo Liu, Junhao Shen, Hong Qian, Aimin Zhou | Inductive Cognitive Diagnosis for Fast Student Learning in Web-Based
Online Intelligent Education Systems | WWW 2024 | null | 10.1145/3589334.3645589 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cognitive diagnosis aims to gauge students' mastery levels based on their
response logs. Serving as a pivotal module in web-based online intelligent
education systems (WOIESs), it plays an upstream and fundamental role in
downstream tasks like learning item recommendation and computerized adaptive
testing. WOIESs are open learning environments where numerous new students
constantly register and complete exercises. In WOIESs, efficient cognitive
diagnosis is crucial to fast feedback and accelerating student learning.
However, the existing cognitive diagnosis methods always employ intrinsically
transductive student-specific embeddings, which become slow and costly due to
retraining when dealing with new students who are unseen during training. To
this end, this paper proposes an inductive cognitive diagnosis model (ICDM) for
fast new students' mastery levels inference in WOIESs. Specifically, in ICDM,
we propose a novel student-centered graph (SCG). Rather than inferring mastery
levels through updating student-specific embedding, we derive the inductive
mastery levels as the aggregated outcomes of students' neighbors in SCG.
Namely, SCG makes it possible to shift the task from finding the most suitable
student-specific embedding that fits the response logs to finding the most
suitable representations for different node types in SCG, and the latter is
more efficient since it no longer requires retraining. To obtain this
representation, ICDM consists of a
construction-aggregation-generation-transformation process to learn the final
representation of students, exercises and concepts. Extensive experiments
across real-world datasets show that, compared with the existing cognitive
diagnosis methods that are always transductive, ICDM is much faster while
maintaining competitive inference performance for new students.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 11:55:43 GMT"
}
] | 1,713,398,400,000 | [
[
"Liu",
"Shuo",
""
],
[
"Shen",
"Junhao",
""
],
[
"Qian",
"Hong",
""
],
[
"Zhou",
"Aimin",
""
]
] |
2404.11296 | Olivier Buffet | Salom\'e Lepers, Sophie Lemonnier, Vincent Thomas, Olivier Buffet | How to Exhibit More Predictable Behaviors | 10 pages, 7 figures, 2 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper looks at predictability problems, i.e., wherein an agent must
choose its strategy in order to optimize the predictions that an external
observer could make. We address these problems while taking into account
uncertainties in the environment dynamics and in the observed agent's policy.
To that end, we assume that the observer 1. seeks to predict the agent's future
action or state at each time step, and 2. models the agent using a stochastic
policy computed from a known underlying problem, and we leverage the
framework of observer-aware Markov decision processes (OAMDPs). We propose
action and state predictability performance criteria through reward functions
built on the observer's belief about the agent policy; show that these induced
predictable OAMDPs can be represented by goal-oriented or discounted MDPs; and
analyze the properties of the proposed reward functions both theoretically and
empirically on two types of grid-world problems.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 12:06:17 GMT"
}
] | 1,713,398,400,000 | [
[
"Lepers",
"Salomé",
""
],
[
"Lemonnier",
"Sophie",
""
],
[
"Thomas",
"Vincent",
""
],
[
"Buffet",
"Olivier",
""
]
] |
2404.11408 | James Weichert | James Weichert and Chinecherem Dimobi | DUPE: Detection Undermining via Prompt Engineering for Deepfake Text | 10 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) become increasingly commonplace, concern
about distinguishing between human and AI text increases as well. The growing
power of these models is of particular concern to teachers, who may worry that
students will use LLMs to write school assignments. Facing a technology with
which they are unfamiliar, teachers may turn to publicly-available AI text
detectors. Yet the accuracy of many of these detectors has not been thoroughly
verified, posing potential harm to students who are falsely accused of academic
dishonesty. In this paper, we evaluate three different AI text
detectors (Kirchenbauer et al. watermarks, ZeroGPT, and GPTZero) against human
and AI-generated essays. We find that watermarking results in a high false
positive rate, and that ZeroGPT has both high false positive and false negative
rates. Further, we are able to significantly increase the false negative rate
of all detectors by using ChatGPT 3.5 to paraphrase the original AI-generated
texts, thereby effectively bypassing the detectors.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 14:10:27 GMT"
}
] | 1,713,398,400,000 | [
[
"Weichert",
"James",
""
],
[
"Dimobi",
"Chinecherem",
""
]
] |
2404.11431 | Markus Ulbricht | Tuomo Lehtonen, Anna Rapberger, Francesca Toni, Markus Ulbricht,
Johannes P. Wallner | Instantiations and Computational Aspects of Non-Flat Assumption-based
Argumentation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Most existing computational tools for assumption-based argumentation (ABA)
focus on so-called flat frameworks, disregarding the more general case. In this
paper, we study an instantiation-based approach for reasoning in possibly
non-flat ABA. We make use of a semantics-preserving translation between ABA and
bipolar argumentation frameworks (BAFs). By utilizing compilability theory, we
establish that the constructed BAFs will in general be of exponential size. In
order to keep the number of arguments and computational cost low, we present
three ways of identifying redundant arguments. Moreover, we identify fragments
of ABA which admit a poly-sized instantiation. We propose two algorithmic
approaches for reasoning in possibly non-flat ABA. The first approach utilizes
the BAF instantiation while the second works directly without constructing
arguments. An empirical evaluation shows that the former outperforms the latter
on many instances, reflecting the lower complexity of BAF reasoning. This
result is in contrast to flat ABA, where direct approaches dominate
instantiation-based approaches.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 14:36:47 GMT"
},
{
"version": "v2",
"created": "Fri, 24 May 2024 13:42:44 GMT"
}
] | 1,716,768,000,000 | [
[
"Lehtonen",
"Tuomo",
""
],
[
"Rapberger",
"Anna",
""
],
[
"Toni",
"Francesca",
""
],
[
"Ulbricht",
"Markus",
""
],
[
"Wallner",
"Johannes P.",
""
]
] |
2404.11443 | Zhuoya Geng | Zhuoya Geng, Jianmei Chen, Wanqiang Zhu | Prediction of Unmanned Surface Vessel Motion Attitude Based on
CEEMDAN-PSO-SVM | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned boats, while navigating at sea, utilize active compensation systems
to mitigate wave disturbances experienced by onboard instruments and equipment.
However, there is a lag in measuring the unmanned boat's attitude, so motion
attitude prediction is introduced to compensate for the lag in the signal
acquisition process. This paper, based on the basic principles of
waves, derives the disturbance patterns of waves on unmanned boats from the
wave energy spectrum. Through simulation analysis of unmanned boat motion
attitudes, motion attitude data is obtained, providing experimental data for
subsequent work. A combined prediction model based on Complete Ensemble
Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Particle Swarm
Optimization (PSO), and Support Vector Machine (SVM) is designed to predict the
motion attitude of unmanned boats. Simulation results validate its superior
prediction accuracy compared to traditional prediction models. For example, in
terms of mean absolute error, it improves by 17% compared to the EMD-PSO-SVM
model.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 14:53:03 GMT"
}
] | 1,713,398,400,000 | [
[
"Geng",
"Zhuoya",
""
],
[
"Chen",
"Jianmei",
""
],
[
"Zhu",
"Wanqiang",
""
]
] |
2404.11458 | Xu Chen | Bowen Fang, Xu Chen, Xuan Di | Learn to Tour: Operator Design For Solution Feasibility Mapping in
Pickup-and-delivery Traveling Salesman Problem | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper aims to develop a learning method for a special class of traveling
salesman problems (TSP), namely, the pickup-and-delivery TSP (PDTSP), which
finds the shortest tour along a sequence of one-to-one pickup-and-delivery
nodes. One-to-one here means that the transported people or goods are
associated with designated pairs of pickup and delivery nodes, in contrast to
settings where indistinguishable goods can be delivered to any node. In PDTSP,
precedence constraints must be satisfied: each pickup node has to be
visited before its corresponding delivery node. Classic operations research
(OR) algorithms for PDTSP are difficult to scale to large-sized problems.
Recently, reinforcement learning (RL) has been applied to TSPs. The basic idea
is to explore and evaluate visiting sequences in a solution space. However,
this approach could be less computationally efficient, as it has to potentially
evaluate many infeasible solutions whose precedence constraints are
violated. To restrict solution search within a feasible space, we utilize
operators that always map one feasible solution to another, without spending
time exploring the infeasible solution space. Such operators are evaluated and
selected as policies to solve PDTSPs in an RL framework. We make a comparison
of our method and baselines, including classic OR algorithms and existing
learning methods. Results show that our approach can find tours shorter than
baselines.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 15:05:51 GMT"
}
] | 1,713,398,400,000 | [
[
"Fang",
"Bowen",
""
],
[
"Chen",
"Xu",
""
],
[
"Di",
"Xuan",
""
]
] |
2404.11585 | Carlos Pe\~narrubia | Carlos Penarrubia, Carlos Garrido-Munoz, Jose J. Valero-Mas, Jorge
Calvo-Zaragoza | Spatial Context-based Self-Supervised Learning for Handwritten Text
Recognition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Handwritten Text Recognition (HTR) is a relevant problem in computer vision,
and poses unique challenges owing to its inherent variability and the rich
contextualization required for its interpretation. Despite the success of
Self-Supervised Learning (SSL) in computer vision, its application to HTR has
been rather scattered, leaving key SSL methodologies unexplored. This work
focuses on one of them, namely Spatial Context-based SSL. We investigate how
this family of approaches can be adapted and optimized for HTR and propose new
workflows that leverage the unique features of handwritten text. Our
experiments demonstrate that the methods considered lead to advancements in the
state-of-the-art of SSL for HTR in a number of benchmark cases.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 17:33:32 GMT"
}
] | 1,713,398,400,000 | [
[
"Penarrubia",
"Carlos",
""
],
[
"Garrido-Munoz",
"Carlos",
""
],
[
"Valero-Mas",
"Jose J.",
""
],
[
"Calvo-Zaragoza",
"Jorge",
""
]
] |
2404.11677 | Zhuoyi Lin | Zhuoyi Lin, Yaoxin Wu, Bangjian Zhou, Zhiguang Cao, Wen Song, Yingqian
Zhang and Senthilnath Jayavelu | Cross-Problem Learning for Solving Vehicle Routing Problems | Accepted by IJCAI'24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing neural heuristics often train a deep architecture from scratch for
each specific vehicle routing problem (VRP), ignoring the transferable
knowledge across different VRP variants. This paper proposes the cross-problem
learning to assist heuristics training for different downstream VRP variants.
Particularly, we modularize neural architectures for complex VRPs into 1) the
backbone Transformer for tackling the travelling salesman problem (TSP), and 2)
the additional lightweight modules for processing problem-specific features in
complex VRPs. Accordingly, we propose to pre-train the backbone Transformer for
TSP, and then apply it in the process of fine-tuning the Transformer models for
each target VRP variant. On the one hand, we fully fine-tune the trained
backbone Transformer and problem-specific modules simultaneously. On the other
hand, we only fine-tune small adapter networks along with the modules, keeping
the backbone Transformer frozen. Extensive experiments on typical VRPs
substantiate that 1) the full fine-tuning achieves significantly better
performance than the one trained from scratch, and 2) the adapter-based
fine-tuning also delivers comparable performance while being notably
parameter-efficient. Furthermore, we empirically demonstrate the favorable
effect of our method in terms of cross-distribution application and
versatility.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 18:17:50 GMT"
},
{
"version": "v2",
"created": "Tue, 14 May 2024 11:59:55 GMT"
}
] | 1,715,731,200,000 | [
[
"Lin",
"Zhuoyi",
""
],
[
"Wu",
"Yaoxin",
""
],
[
"Zhou",
"Bangjian",
""
],
[
"Cao",
"Zhiguang",
""
],
[
"Song",
"Wen",
""
],
[
"Zhang",
"Yingqian",
""
],
[
"Jayavelu",
"Senthilnath",
""
]
] |
2404.11706 | Aristeidis Tsaris | Aristeidis Tsaris, Philipe Ambrozio Dias, Abhishek Potnis, Junqi Yin,
Feiyi Wang, Dalton Lunga | Pretraining Billion-scale Geospatial Foundational Models on Frontier | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As AI workloads increase in scope, generalization capability becomes
challenging for small task-specific models, and their demand for large amounts
of labeled training samples increases. On the contrary, Foundation Models (FMs)
are trained with internet-scale unlabeled data via self-supervised learning and
have been shown to adapt to various tasks with minimal fine-tuning. Although
large FMs have demonstrated significant impact in natural language processing
and computer vision, efforts toward FMs for geospatial applications have been
restricted to smaller size models, as pretraining larger models requires very
large computing resources equipped with state-of-the-art hardware accelerators.
Current satellite constellations collect 100+TBs of data a day, resulting in
images that are billions of pixels and multimodal in nature. Such geospatial
data poses unique challenges opening up new opportunities to develop FMs. We
investigate billion scale FMs and HPC training profiles for geospatial
applications by pretraining on publicly available data. We studied end-to-end
the performance and impact on the solution of scaling the model size. Our
larger 3B-parameter model achieves up to 30% improvement in top-1 scene
classification accuracy compared to a 100M-parameter model. Moreover,
we detail performance experiments on the Frontier supercomputer, America's
first exascale system, where we study different model and data parallel
approaches using PyTorch's Fully Sharded Data Parallel library. Specifically,
we study variants of the Vision Transformer architecture (ViT), conducting
performance analysis for ViT models with size up to 15B parameters. By
discussing throughput and performance bottlenecks under different parallelism
configurations, we offer insights on how to leverage such leadership-class HPC
resources when developing large models for geospatial imagery applications.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 19:16:32 GMT"
}
] | 1,713,484,800,000 | [
[
"Tsaris",
"Aristeidis",
""
],
[
"Dias",
"Philipe Ambrozio",
""
],
[
"Potnis",
"Abhishek",
""
],
[
"Yin",
"Junqi",
""
],
[
"Wang",
"Feiyi",
""
],
[
"Lunga",
"Dalton",
""
]
] |
2404.11714 | Jeremy Straub | Jordan Milbrath, Jonathan Rivard, Jeremy Straub | Implementation and Evaluation of a Gradient Descent-Trained Defensible
Blackboard Architecture System | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A variety of forms of artificial intelligence systems have been developed.
Two well-known techniques are neural networks and rule-fact expert systems. The
former can be trained from presented data while the latter is typically
developed by human domain experts. A combined implementation that uses gradient
descent to train a rule-fact expert system has been previously proposed. A
related system type, the Blackboard Architecture, adds an actualization
capability to expert systems. This paper proposes and evaluates the
incorporation of a defensible-style gradient descent training capability into
the Blackboard Architecture. It also introduces the use of activation functions
for defensible artificial intelligence systems and implements and evaluates a
new best path-based training algorithm.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 19:55:58 GMT"
}
] | 1,713,484,800,000 | [
[
"Milbrath",
"Jordan",
""
],
[
"Rivard",
"Jonathan",
""
],
[
"Straub",
"Jeremy",
""
]
] |
2404.11716 | Vinicius V. Cogo | Miracle Aniakor, Vinicius V. Cogo, Pedro M. Ferreira | A Survey on Semantic Modeling for Building Energy Management | 29 pages, 6 figures, 2 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Buildings account for a substantial portion of global energy consumption.
Reducing buildings' energy usage primarily involves obtaining data from
building systems and environment, which are instrumental in assessing and
optimizing the building's performance. However, as devices from various
manufacturers represent their data in unique ways, this disparity introduces
challenges for semantic interoperability and creates obstacles in developing
scalable building applications. This survey explores the leading semantic
modeling techniques deployed for energy management in buildings. Furthermore,
it aims to offer tangible use cases for applying semantic models, shedding
light on the pivotal concepts and limitations intrinsic to each model. Our
findings will assist researchers in discerning the appropriate circumstances
and methodologies for employing these models in various use cases.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 20:10:43 GMT"
}
] | 1,713,484,800,000 | [
[
"Aniakor",
"Miracle",
""
],
[
"Cogo",
"Vinicius V.",
""
],
[
"Ferreira",
"Pedro M.",
""
]
] |
2404.11720 | Aayush Dhakal | Aayush Dhakal, Subash Khanal, Srikumar Sastry, Adeel Ahmad, Nathan
Jacobs | GEOBIND: Binding Text, Image, and Audio through Satellite Images | 2024 IEEE International Geoscience and Remote Sensing Symposium | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In remote sensing, we are interested in modeling various modalities for some
geographic location. Several works have focused on learning the relationship
between a location and type of landscape, habitability, audio, textual
descriptions, etc. Recently, a common way to approach these problems is to
train a deep-learning model that uses satellite images to infer some unique
characteristics of the location. In this work, we present a deep-learning
model, GeoBind, that can infer about multiple modalities, specifically text,
image, and audio, from satellite imagery of a location. To do this, we use
satellite images as the binding element and contrastively align all other
modalities to the satellite image data. Our training results in a joint
embedding space with multiple types of data: satellite image, ground-level
image, audio, and text. Furthermore, our approach does not require a single
complex dataset that contains all the modalities mentioned above. Rather, it
only requires multiple datasets paired with satellite images. While we only align three
modalities in this paper, we present a general framework that can be used to
create an embedding space with any number of modalities by using satellite
images as the binding element. Our results show that, unlike traditional
unimodal models, GeoBind is versatile and can reason about multiple modalities
for a given satellite image input.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 20:13:37 GMT"
}
] | 1,713,484,800,000 | [
[
"Dhakal",
"Aayush",
""
],
[
"Khanal",
"Subash",
""
],
[
"Sastry",
"Srikumar",
""
],
[
"Ahmad",
"Adeel",
""
],
[
"Jacobs",
"Nathan",
""
]
] |
2404.11742 | Aomar Osmani Dr | Seyed M.R. Modaresi, Aomar Osmani, Mohammadreza Razzazi, Abdelghani
Chibani | Meta-Decomposition: Dynamic Segmentation Approach Selection in IoT-based
Activity Recognition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Internet of Things (IoT) devices generate heterogeneous data over time, and
relying solely on individual data points is inadequate for accurate analysis.
Segmentation is a common preprocessing step in many IoT applications,
including IoT-based activity recognition, aiming to address the limitations of
individual events and streamline the process. However, this step introduces at
least two families of uncontrollable biases. The first is caused by the changes
made by the segmentation process on the initial problem space, such as dividing
the input data into 60 seconds windows. The second category of biases results
from the segmentation process itself, including the fixation of the
segmentation method and its parameters.
To address these biases, we propose to redefine the segmentation problem as a
special case of a decomposition problem, including three key components: a
decomposer, resolutions, and a composer.
The inclusion of the composer task in the segmentation process facilitates an
assessment of the relationship between the original problem and the problem
after the segmentation. Therefore, it leads to an improvement in the evaluation
process and, consequently, in the selection of the appropriate segmentation
method.
Then, we formally introduce our novel meta-decomposition or
learning-to-decompose approach. It reduces the segmentation biases by
considering the segmentation as a hyperparameter to be optimized by the outer
learning problem. Therefore, meta-decomposition improves the overall system
performance by dynamically selecting the appropriate segmentation method
without including the mentioned biases. Extensive experiments on four
real-world datasets demonstrate the effectiveness of our proposal.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 20:50:28 GMT"
}
] | 1,713,484,800,000 | [
[
"Modaresi",
"Seyed M. R.",
""
],
[
"Osmani",
"Aomar",
""
],
[
"Razzazi",
"Mohammadreza",
""
],
[
"Chibani",
"Abdelghani",
""
]
] |
2404.11792 | Zooey Nguyen | Zooey Nguyen, Anthony Annunziata, Vinh Luong, Sang Dinh, Quynh Le, Anh
Hai Ha, Chanh Le, Hong An Phan, Shruti Raghavan, Christopher Nguyen | Enhancing Q&A with Domain-Specific Fine-Tuning and Iterative Reasoning:
A Comparative Study | Fixed typo of OODA's score on harder-question set in Table 2 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper investigates the impact of domain-specific model fine-tuning and
of reasoning mechanisms on the performance of question-answering (Q&A) systems
powered by large language models (LLMs) and Retrieval-Augmented Generation
(RAG). Using the FinanceBench SEC financial filings dataset, we observe that,
for RAG, combining a fine-tuned embedding model with a fine-tuned LLM achieves
better accuracy than generic models, with relatively greater gains attributable
to fine-tuned embedding models. Additionally, employing reasoning iterations on
top of RAG delivers an even bigger jump in performance, enabling the Q&A
systems to get closer to human-expert quality. We discuss the implications of
such findings, propose a structured technical design space capturing major
technical components of Q&A AI, and provide recommendations for making
high-impact technical choices for such components. We plan to follow up on this
work with actionable guides for AI teams and further investigations into the
impact of domain-specific augmentation in RAG and into agentic AI capabilities
such as advanced planning and reasoning.
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 23:00:03 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Apr 2024 20:28:16 GMT"
}
] | 1,713,830,400,000 | [
[
"Nguyen",
"Zooey",
""
],
[
"Annunziata",
"Anthony",
""
],
[
"Luong",
"Vinh",
""
],
[
"Dinh",
"Sang",
""
],
[
"Le",
"Quynh",
""
],
[
"Ha",
"Anh Hai",
""
],
[
"Le",
"Chanh",
""
],
[
"Phan",
"Hong An",
""
],
[
"Raghavan",
"Shruti",
""
],
[
"Nguyen",
"Christopher",
""
]
] |
2404.11833 | Michael Katz | Michael Katz, Harsha Kokel, Kavitha Srinivas, Shirin Sohrabi | Thought of Search: Planning with Language Models Through The Lens of
Efficiency | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Among the most important properties of algorithms investigated in computer
science are soundness, completeness, and complexity. These properties, however,
are rarely analyzed for the vast collection of recently proposed methods for
planning with large language models. In this work, we alleviate this gap. We
analyse these properties of using LLMs for planning and highlight that recent
trends abandon both soundness and completeness for the sake of inefficiency. We
propose a significantly more efficient approach that can, at the same time,
maintain both soundness and completeness. We exemplify on four representative
search problems, comparing to the LLM-based solutions from the literature that
attempt to solve these problems. We show that by using LLMs to produce the code
for the search components we can solve the entire datasets with 100% accuracy
with only a few calls to the LLM. We argue for a responsible use of compute
resources, urging the research community to investigate sound and complete
LLM-based approaches that uphold efficiency.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 01:27:29 GMT"
},
{
"version": "v2",
"created": "Tue, 21 May 2024 18:44:54 GMT"
}
] | 1,716,508,800,000 | [
[
"Katz",
"Michael",
""
],
[
"Kokel",
"Harsha",
""
],
[
"Srinivas",
"Kavitha",
""
],
[
"Sohrabi",
"Shirin",
""
]
] |
2404.11835 | Minjung Shin | Minjung Shin, Donghyun Kim, Jeh-Kwang Ryu | CAUS: A Dataset for Question Generation based on Human Cognition
Leveraging Large Language Models | 8 pages, 4 figures and 3 tables. This work has been accepted for
presentation as a poster with full paper publication at CogSci 2024. This is
the final submission | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the Curious About Uncertain Scene (CAUS) dataset, designed to
enable Large Language Models, specifically GPT-4, to emulate human cognitive
processes for resolving uncertainties. Leveraging this dataset, we investigate
the potential of LLMs to engage in questioning effectively. Our approach
involves providing scene descriptions embedded with uncertainties to stimulate
the generation of reasoning and queries. The queries are then classified
according to multi-dimensional criteria. All procedures are facilitated by a
collaborative system involving both LLMs and human researchers. Our results
demonstrate that GPT-4 can effectively generate pertinent questions and grasp
their nuances, particularly when given appropriate context and instructions.
The study suggests that incorporating human-like questioning into AI models
improves their ability to manage uncertainties, paving the way for future
advancements in Artificial Intelligence (AI).
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 01:31:19 GMT"
},
{
"version": "v2",
"created": "Sun, 19 May 2024 04:57:47 GMT"
}
] | 1,716,249,600,000 | [
[
"Shin",
"Minjung",
""
],
[
"Kim",
"Donghyun",
""
],
[
"Ryu",
"Jeh-Kwang",
""
]
] |
2404.11854 | Wenfeng Zhang | Wenfeng Zhang, Xin Li, Anqi Li, Xiaoting Huang, Ti Wang, Honglei Gao | SGRU: A High-Performance Structured Gated Recurrent Unit for Traffic
Flow Prediction | 7 pages, 6 figures, conference | null | 10.1109/ICPADS60453.2023 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic flow prediction is an essential task in constructing smart cities and
is a typical Multivariate Time Series (MTS) Problem. Recent research has
abandoned Gated Recurrent Units (GRU) and utilized dilated convolutions or
temporal slicing for feature extraction, and they have the following drawbacks:
(1) Dilated convolutions fail to capture the features of adjacent time steps,
resulting in the loss of crucial transitional data. (2) The connections within
the same temporal slice are strong, while the connections between different
temporal slices are too loose. In light of these limitations, we emphasize the
importance of analyzing a complete time series repeatedly and the crucial role
of GRU in MTS. Therefore, we propose SGRU: Structured Gated Recurrent Units,
which involve structured GRU layers and non-linear units, along with multiple
layers of time embedding to enhance the model's fitting performance. We
evaluate our approach on four publicly available California traffic datasets:
PeMS03, PeMS04, PeMS07, and PeMS08 for regression prediction. Experimental
results demonstrate that our model outperforms baseline models with average
improvements of 11.7%, 18.6%, 18.5%, and 12.0% respectively.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 02:15:40 GMT"
}
] | 1,713,484,800,000 | [
[
"Zhang",
"Wenfeng",
""
],
[
"Li",
"Xin",
""
],
[
"Li",
"Anqi",
""
],
[
"Huang",
"Xiaoting",
""
],
[
"Wang",
"Ti",
""
],
[
"Gao",
"Honglei",
""
]
] |
2404.11875 | Adrita Barua | Adrita Barua, Cara Widmer, Pascal Hitzler | Concept Induction using LLMs: a user experiment for assessment | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Explainable Artificial Intelligence (XAI) poses a significant challenge in
providing transparent and understandable insights into complex AI models.
Traditional post-hoc algorithms, while useful, often struggle to deliver
interpretable explanations. Concept-based models offer a promising avenue by
incorporating explicit representations of concepts to enhance interpretability.
However, existing research on automatic concept discovery methods is often
limited by lower-level concepts, costly human annotation requirements, and a
restricted domain of background knowledge. In this study, we explore the
potential of a Large Language Model (LLM), specifically GPT-4, by leveraging
its domain knowledge and common-sense capability to generate high-level
concepts that are meaningful as explanations for humans, for a specific setting
of image classification. We use minimal textual object information available in
the data via prompting to facilitate this process. To evaluate the output, we
compare the concepts generated by the LLM with two other methods: concepts
generated by humans and the ECII heuristic concept induction system. Since
there is no established metric to determine the human understandability of
concepts, we conducted a human study to assess the effectiveness of the
LLM-generated concepts. Our findings indicate that while human-generated
explanations remain superior, concepts derived from GPT-4 are more
comprehensible to humans compared to those generated by ECII.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 03:22:02 GMT"
}
] | 1,713,484,800,000 | [
[
"Barua",
"Adrita",
""
],
[
"Widmer",
"Cara",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
2404.11898 | Luke Lee Mr. | Luke Lee | Enhancing Financial Inclusion and Regulatory Challenges: A Critical
Analysis of Digital Banks and Alternative Lenders Through Digital Platforms,
Machine Learning, and Large Language Models Integration | 17 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper explores the dual impact of digital banks and alternative lenders
on financial inclusion and the regulatory challenges posed by their business
models. It discusses the integration of digital platforms, machine learning
(ML), and Large Language Models (LLMs) in enhancing financial services
accessibility for underserved populations. Through a detailed analysis of
operational frameworks and technological infrastructures, this research
identifies key mechanisms that facilitate broader financial access and mitigate
traditional barriers. Additionally, the paper addresses significant regulatory
concerns involving data privacy, algorithmic bias, financial stability, and
consumer protection. Employing a mixed-methods approach, which combines
quantitative financial data analysis with qualitative insights from industry
experts, this paper elucidates the complexities of leveraging digital
technology to foster financial inclusivity. The findings underscore the
necessity of evolving regulatory frameworks that harmonize innovation with
comprehensive risk management. This paper concludes with policy recommendations
for regulators, financial institutions, and technology providers, aiming to
cultivate a more inclusive and stable financial ecosystem through prudent
digital technology integration.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 05:00:53 GMT"
}
] | 1,713,484,800,000 | [
[
"Lee",
"Luke",
""
]
] |
2404.11907 | Xiankun Yan | Xiankun Yan, Aneta Neumann, Frank Neumann | Sampling-based Pareto Optimization for Chance-constrained Monotone
Submodular Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently surrogate functions based on the tail inequalities were developed to
evaluate the chance constraints in the context of evolutionary computation and
several Pareto optimization algorithms using these surrogates were successfully
applied in optimizing chance-constrained monotone submodular problems. However,
the difference in performance between algorithms using the surrogates and those
employing the direct sampling-based evaluation remains unclear. Within the
paper, a sampling-based method is proposed to directly evaluate the chance
constraint. Furthermore, to address the problems with more challenging
settings, an enhanced GSEMO algorithm integrated with an adaptive sliding
window, called ASW-GSEMO, is introduced. In the experiments, the ASW-GSEMO
employing the sampling-based approach is tested on the chance-constrained
version of the maximum coverage problem with different settings. Its results
are compared with those from other algorithms using different surrogate
functions. The experimental findings indicate that the ASW-GSEMO with the
sampling-based evaluation approach outperforms other algorithms, highlighting
that the performances of algorithms using different evaluation methods are
comparable. Additionally, the behaviors of ASW-GSEMO are visualized to explain
the distinctions between it and the algorithms utilizing the surrogate
functions.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 05:15:20 GMT"
}
] | 1,713,484,800,000 | [
[
"Yan",
"Xiankun",
""
],
[
"Neumann",
"Aneta",
""
],
[
"Neumann",
"Frank",
""
]
] |
2404.11924 | Ming Cheng | Ming Cheng, Xingjian Diao, Ziyi Zhou, Yanjun Cui, Wenjun Liu, Shitong
Cheng | Toward Short-Term Glucose Prediction Solely Based on CGM Time Series | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The global diabetes epidemic highlights the importance of maintaining good
glycemic control. Glucose prediction is a fundamental aspect of diabetes
management, facilitating real-time decision-making. Recent research has
introduced models focusing on long-term glucose trend prediction, which are
unsuitable for real-time decision-making and result in delayed responses.
Conversely, models designed to respond to immediate glucose level changes
cannot analyze glucose variability comprehensively. Moreover, contemporary
research generally integrates various physiological parameters (e.g. insulin
doses, food intake, etc.), which inevitably raises data privacy concerns. To
bridge such a research gap, we propose TimeGlu -- an end-to-end pipeline for
short-term glucose prediction solely based on CGM time series data. We
implement four baseline methods to conduct a comprehensive comparative analysis
of the model's performance. Through extensive experiments on two contrasting
datasets (CGM Glucose and Colas dataset), TimeGlu achieves state-of-the-art
performance without the need for additional personal data from patients,
providing effective guidance for real-world diabetic glucose management.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 06:02:12 GMT"
}
] | 1,713,484,800,000 | [
[
"Cheng",
"Ming",
""
],
[
"Diao",
"Xingjian",
""
],
[
"Zhou",
"Ziyi",
""
],
[
"Cui",
"Yanjun",
""
],
[
"Liu",
"Wenjun",
""
],
[
"Cheng",
"Shitong",
""
]
] |
2404.11964 | Alex Sheng | Alex Sheng | From Language Models to Practical Self-Improving Computer Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a simple and straightforward methodology to create AI computer
agents that can carry out diverse computer tasks and self-improve by developing
tools and augmentations to enable themselves to solve increasingly complex
tasks. As large language models (LLMs) have been shown to benefit from
non-parametric augmentations, a significant body of recent work has focused on
developing software that augments LLMs with various capabilities. Rather than
manually developing static software to augment LLMs through human engineering
effort, we propose that an LLM agent can systematically generate software to
augment itself. We show, through a few case studies, that a minimal querying
loop with appropriate prompt engineering allows an LLM to generate and use
various augmentations, freely extending its own capabilities to carry out
real-world computer tasks. Starting with only terminal access, we prompt an LLM
agent to augment itself with retrieval, internet search, web navigation, and
text editor capabilities. The agent effectively uses these various tools to
solve problems including automated software development and web-based tasks.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 07:50:10 GMT"
}
] | 1,713,484,800,000 | [
[
"Sheng",
"Alex",
""
]
] |
2404.11973 | Milad Moradi | Milad Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari | Exploring the landscape of large language models: Foundations,
techniques, and challenges | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this review paper, we delve into the realm of Large Language Models
(LLMs), covering their foundational principles, diverse applications, and
nuanced training processes. The article sheds light on the mechanics of
in-context learning and a spectrum of fine-tuning approaches, with a special
focus on methods that optimize efficiency in parameter usage. Additionally, it
explores how LLMs can be more closely aligned with human preferences through
innovative reinforcement learning frameworks and other novel methods that
incorporate human feedback. The article also examines the emerging technique of
retrieval augmented generation, integrating external knowledge into LLMs. The
ethical dimensions of LLM deployment are discussed, underscoring the need for
mindful and responsible application. Concluding with a perspective on future
research trajectories, this review offers a succinct yet comprehensive overview
of the current state and emerging trends in the evolving landscape of LLMs,
serving as an insightful guide for both researchers and practitioners in
artificial intelligence.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 08:01:20 GMT"
}
] | 1,713,484,800,000 | [
[
"Moradi",
"Milad",
""
],
[
"Yan",
"Ke",
""
],
[
"Colwell",
"David",
""
],
[
"Samwald",
"Matthias",
""
],
[
"Asgari",
"Rhona",
""
]
] |
2404.11996 | Songtao Huang | Songtao Huang, Hongjin Song, Tianqi Jiang, Akbar Telikani, Jun Shen,
Qingguo Zhou, Binbin Yong, Qiang Wu | DST-GTN: Dynamic Spatio-Temporal Graph Transformer Network for Traffic
Forecasting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate traffic forecasting is essential for effective urban planning and
congestion management. Deep learning (DL) approaches have gained colossal
success in traffic forecasting but still face challenges in capturing the
intricacies of traffic dynamics. In this paper, we identify and address these
challenges by emphasizing that spatial features are inherently dynamic and
change over time. A novel in-depth feature representation, called Dynamic
Spatio-Temporal (Dyn-ST) features, is introduced, which encapsulates spatial
characteristics across varying times. Moreover, a Dynamic Spatio-Temporal Graph
Transformer Network (DST-GTN) is proposed by capturing Dyn-ST features and
other dynamic adjacency relations between intersections. The DST-GTN can model
dynamic ST relationships between nodes accurately and refine the representation
of global and local ST characteristics by adopting adaptive weights in low-pass
and all-pass filters, enabling the extraction of Dyn-ST features from traffic
time-series data. Through numerical experiments on public datasets, the DST-GTN
achieves state-of-the-art performance for a range of traffic forecasting tasks
and demonstrates enhanced stability.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 08:44:52 GMT"
}
] | 1,713,484,800,000 | [
[
"Huang",
"Songtao",
""
],
[
"Song",
"Hongjin",
""
],
[
"Jiang",
"Tianqi",
""
],
[
"Telikani",
"Akbar",
""
],
[
"Shen",
"Jun",
""
],
[
"Zhou",
"Qingguo",
""
],
[
"Yong",
"Binbin",
""
],
[
"Wu",
"Qiang",
""
]
] |
2404.12090 | Haoyuan Jiang | Haoyuan Jiang, Ziyue Li, Hua Wei, Xuantang Xiong, Jingqing Ruan,
Jiaming Lu, Hangyu Mao and Rui Zhao | X-Light: Cross-City Traffic Signal Control Using Transformer on
Transformer as Meta Multi-Agent Reinforcement Learner | Accepted by IJCAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effectiveness of traffic light control has been significantly improved by
current reinforcement learning-based approaches via better cooperation among
multiple traffic lights. However, a persisting issue remains: how to obtain a
multi-agent traffic signal control algorithm with remarkable transferability
across diverse cities? In this paper, we propose a Transformer on Transformer
(TonT) model for cross-city meta multi-agent traffic signal control, named
X-Light: We input the full Markov Decision Process trajectories, and the Lower
Transformer aggregates the states, actions, rewards among the target
intersection and its neighbors within a city, and the Upper Transformer learns
the general decision trajectories across different cities. This dual-level
approach bolsters the model's robust generalization and transferability.
Notably, when directly transferring to unseen scenarios, ours surpasses all
baseline methods with +7.91% on average, and even +16.3% in some cases,
yielding the best results.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 11:17:58 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 14:57:31 GMT"
}
] | 1,716,940,800,000 | [
[
"Jiang",
"Haoyuan",
""
],
[
"Li",
"Ziyue",
""
],
[
"Wei",
"Hua",
""
],
[
"Xiong",
"Xuantang",
""
],
[
"Ruan",
"Jingqing",
""
],
[
"Lu",
"Jiaming",
""
],
[
"Mao",
"Hangyu",
""
],
[
"Zhao",
"Rui",
""
]
] |
2404.12127 | Ying Hu | Shanshan Wang, Ying Hu, Xun Yang, Zhongzhou Zhang, Keyang Wang, Xingyi
Zhang | Personalized Forgetting Mechanism with Concept-Driven Knowledge Tracing | under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Tracing (KT) aims to trace changes in students' knowledge states
throughout their entire learning process by analyzing their historical learning
data and predicting their future learning performance. Existing forgetting
curve theory based knowledge tracing models only consider the general
forgetting caused by time intervals, ignoring the individualization of students
and the causal relationship of the forgetting process. To address these
problems, we propose a Concept-driven Personalized Forgetting knowledge tracing
model (CPF) which integrates hierarchical relationships between knowledge
concepts and incorporates students' personalized cognitive abilities. First, we
integrate the students' personalized capabilities into both the learning and
forgetting processes to explicitly distinguish students' individual learning
gains and forgetting rates according to their cognitive abilities. Second, we
take into account the hierarchical relationships between knowledge points and
design a precursor-successor knowledge concept matrix to simulate the causal
relationship in the forgetting process, while also integrating the potential
impact of forgetting prior knowledge points on subsequent ones. The proposed
personalized forgetting mechanism can not only be applied to the learning of
specific knowledge concepts but also the life-long learning process. Extensive
experimental results on three public datasets show that our CPF outperforms
current forgetting curve theory based methods in predicting student
performance, demonstrating CPF can better simulate changes in students'
knowledge status through the personalized forgetting mechanism.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 12:28:50 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Apr 2024 13:03:44 GMT"
}
] | 1,714,089,600,000 | [
[
"Wang",
"Shanshan",
""
],
[
"Hu",
"Ying",
""
],
[
"Yang",
"Xun",
""
],
[
"Zhang",
"Zhongzhou",
""
],
[
"Wang",
"Keyang",
""
],
[
"Zhang",
"Xingyi",
""
]
] |
2404.12138 | Rui Xu | Rui Xu, Xintao Wang, Jiangjie Chen, Siyu Yuan, Xinfeng Yuan, Jiaqing
Liang, Zulong Chen, Xiaoqing Dong, Yanghua Xiao | Character is Destiny: Can Large Language Models Simulate Persona-Driven
Decisions in Role-Playing? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can Large Language Models substitute humans in making important decisions?
Recent research has unveiled the potential of LLMs to role-play assigned
personas, mimicking their knowledge and linguistic habits. However, imitative
decision-making requires a more nuanced understanding of personas. In this
paper, we benchmark the ability of LLMs in persona-driven decision-making.
Specifically, we investigate whether LLMs can predict characters' decisions
provided with the preceding stories in high-quality novels. Leveraging
character analyses written by literary experts, we construct a dataset
LIFECHOICE comprising 1,401 character decision points from 395 books. Then, we
conduct comprehensive experiments on LIFECHOICE, with various LLMs and methods
for LLM role-playing. The results demonstrate that state-of-the-art LLMs
exhibit promising capabilities in this task, yet there is substantial room for
improvement. Hence, we further propose the CHARMAP method, which achieves a
6.01% increase in accuracy via persona-based memory retrieval. We will make our
datasets and code publicly available.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 12:40:59 GMT"
}
] | 1,713,484,800,000 | [
[
"Xu",
"Rui",
""
],
[
"Wang",
"Xintao",
""
],
[
"Chen",
"Jiangjie",
""
],
[
"Yuan",
"Siyu",
""
],
[
"Yuan",
"Xinfeng",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Chen",
"Zulong",
""
],
[
"Dong",
"Xiaoqing",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
2404.12149 | Yihua Shao | Yihua Shao, Hongyi Cai, Xinwei Long, Weiyi Lang, Zhe Wang, Haoran Wu,
Yan Wang, Jiayi Yin, Yang Yang, Yisheng Lv and Zhen Lei | AccidentBlip2: Accident Detection With Multi-View MotionBlip2 | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent vehicles have demonstrated excellent capabilities in many
transportation scenarios. The inference capabilities of neural networks using
cameras limit the accuracy of accident detection in complex transportation
systems. This paper presents AccidentBlip2, a pure vision-based multi-modal
large model Blip2 for accident detection. Our method first processes the
multi-view images through ViT-14g and sends the multi-view features into the
cross-attention layer of Q-Former. Different from Blip2's Q-Former, our Motion
Q-Former extends the self-attention layer with the temporal-attention layer. In
the inference process, the queries generated from previous frames are input
into Motion Q-Former to aggregate temporal information. Queries are updated
with an auto-regressive strategy and are sent to an MLP to detect whether there
is an accident in the surrounding environment. Our AccidentBlip2 can be
extended to a multi-vehicle cooperative system by deploying Motion Q-Former on
each vehicle and simultaneously fusing the generated queries into the MLP for
auto-regressive inference. Our approach outperforms existing video large
language models in detection accuracy in both single-vehicle and multi-vehicle
systems.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 12:54:25 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Apr 2024 04:13:51 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Apr 2024 17:07:07 GMT"
},
{
"version": "v4",
"created": "Tue, 7 May 2024 11:21:57 GMT"
}
] | 1,715,126,400,000 | [
[
"Shao",
"Yihua",
""
],
[
"Cai",
"Hongyi",
""
],
[
"Long",
"Xinwei",
""
],
[
"Lang",
"Weiyi",
""
],
[
"Wang",
"Zhe",
""
],
[
"Wu",
"Haoran",
""
],
[
"Wang",
"Yan",
""
],
[
"Yin",
"Jiayi",
""
],
[
"Yang",
"Yang",
""
],
[
"Lv",
"Yisheng",
""
],
[
"Lei",
"Zhen",
""
]
] |
2404.12185 | Bestoun Ahmed Dr. | Bestoun S. Ahmed | An Adaptive Metaheuristic Framework for Changing Environments | Accepted in 2024 IEEE Congress on Evolutionary Computation (CEC) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapidly changing landscapes of modern optimization problems require
algorithms that can be adapted in real-time. This paper introduces an Adaptive
Metaheuristic Framework (AMF) designed for dynamic environments. It is capable
of intelligently adapting to changes in the problem parameters. The AMF
combines a dynamic representation of problems, a real-time sensing system, and
adaptive techniques to navigate continuously changing optimization
environments. Through a simulated dynamic optimization problem, the AMF's
capability is demonstrated to detect environmental changes and proactively
adjust its search strategy. This framework utilizes a differential evolution
algorithm that is improved with an adaptation module that adjusts solutions in
response to detected changes. The capability of the AMF to adjust is tested
through a series of iterations, demonstrating its resilience and robustness in
sustaining solution quality despite the problem's development. The
effectiveness of AMF is demonstrated through a series of simulations on a
dynamic optimization problem. Robustness and agility characterize the
algorithm's performance, as evidenced by the presented fitness evolution and
solution path visualizations. The findings show that AMF is a practical
solution to dynamic optimization and a major step forward in the creation of
algorithms that can handle the unpredictability of real-world problems.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 13:47:53 GMT"
}
] | 1,713,484,800,000 | [
[
"Ahmed",
"Bestoun S.",
""
]
] |
2404.12240 | Lukas Rottkamp | Lukas Rottkamp, Matthias Schubert | A Time-Inhomogeneous Markov Model for Resource Availability under Sparse
Observations | 11 pages, long version of a paper published at 26th ACM SIGSPATIAL
International Conference on Advances in Geographic Information Systems
(SIGSPATIAL 2018) | Proceedings of the 26th ACM SIGSPATIAL International Conference on
Advances in Geographic Information Systems (pp. 460-463) 2018 | 10.1145/3274895.3274945 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate spatio-temporal information about the current situation is crucial
for smart city applications such as modern routing algorithms. Often, this
information describes the state of stationary resources, e.g. the availability
of parking bays, charging stations or the amount of people waiting for a
vehicle to pick them up near a given location. To exploit this kind of
information, predicting future states of the monitored resources is often
mandatory because a resource might change its state within the time until it is
needed. To train an accurate predictive model, it is often not possible to
obtain a continuous time series on the state of the resource. For example, the
information might be collected from traveling agents visiting the resource with
an irregular frequency. Thus, it is necessary to develop methods which work on
sparse observations for training and prediction. In this paper, we propose
time-inhomogeneous discrete Markov models to allow accurate prediction even
when the frequency of observation is very rare. Our new model is able to blend
recent observations with historic data and also provide useful probabilistic
estimates for future states. Since resource availability in a city is
typically time-dependent, our Markov model is time-inhomogeneous and cyclic
within a predefined time interval. To train our model, we propose a modified
Baum-Welch algorithm. Evaluations on real-world datasets of parking bay
availability show that our new method indeed yields good results compared to
methods trained on complete data and to non-cyclic variants.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 15:00:59 GMT"
}
] | 1,713,484,800,000 | [
[
"Rottkamp",
"Lukas",
""
],
[
"Schubert",
"Matthias",
""
]
] |
2404.12278 | David Restrepo | David Restrepo, Chenwei Wu, Constanza V\'asquez-Venegas, Luis Filipe
Nakayama, Leo Anthony Celi, Diego M L\'opez | DF-DM: A foundational process model for multimodal data fusion in the
artificial intelligence era | 6 figures, 5 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the big data era, integrating diverse data modalities poses significant
challenges, particularly in complex fields like healthcare. This paper
introduces a new process model for multimodal Data Fusion for Data Mining,
integrating embeddings and the Cross-Industry Standard Process for Data Mining
with the existing Data Fusion Information Group model. Our model aims to
decrease computational costs, complexity, and bias while improving efficiency
and reliability. We also propose "disentangled dense fusion", a novel embedding
fusion method designed to optimize mutual information and facilitate dense
inter-modality feature interaction, thereby minimizing redundant information.
We demonstrate the model's efficacy through three use cases: predicting
diabetic retinopathy using retinal images and patient metadata, domestic
violence prediction employing satellite imagery, internet, and census data, and
identifying clinical and demographic features from radiography images and
clinical notes. The model achieved a Macro F1 score of 0.92 in diabetic
retinopathy prediction, an R-squared of 0.854 and sMAPE of 24.868 in domestic
violence prediction, and a macro AUC of 0.92 and 0.99 for disease prediction
and sex classification, respectively, in radiological analysis.
These results underscore the Data Fusion for Data Mining model's potential to
significantly impact multimodal data processing, promoting its adoption in
diverse, resource-constrained settings.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 15:52:42 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jun 2024 16:51:46 GMT"
}
] | 1,717,459,200,000 | [
[
"Restrepo",
"David",
""
],
[
"Wu",
"Chenwei",
""
],
[
"Vásquez-Venegas",
"Constanza",
""
],
[
"Nakayama",
"Luis Filipe",
""
],
[
"Celi",
"Leo Anthony",
""
],
[
"López",
"Diego M",
""
]
] |
2404.12458 | Rongqian Ma | Meredith Dedema, Rongqian Ma | The collective use and evaluation of generative AI tools in digital
humanities research: Survey-based results | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The advent of generative artificial intelligence (GenAI) technologies has
revolutionized research, with significant implications for Digital Humanities
(DH), a field inherently intertwined with technological progress. This article
investigates how digital humanities scholars adopt, practice, as well as
critically evaluate, GenAI technologies such as ChatGPT in the research
process. Drawing on 76 responses collected from an international survey study,
we explored digital humanities scholars' rationale for GenAI adoption in
research, identified specific use cases and practices of using GenAI to support
various DH research tasks, and analyzed scholars' collective perceptions of
GenAI's benefits, risks, and impact on DH research. The survey results suggest
that DH research communities hold divided sentiments towards the value of
GenAI in DH scholarship, whereas actual usage varies among individuals
and across research tasks. Our survey-based analysis has the potential to serve
as a basis for further empirical research on the impact of GenAI on the
evolution of DH scholarship.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 18:33:00 GMT"
}
] | 1,713,744,000,000 | [
[
"Dedema",
"Meredith",
""
],
[
"Ma",
"Rongqian",
""
]
] |
2404.12520 | Amin Shojaeighadikolaei | Amin Shojaeighadikolaei, Zsolt Talata, Morteza Hashemi | Centralized vs. Decentralized Multi-Agent Reinforcement Learning for
Enhanced Control of Electric Vehicle Charging Networks | 12 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread adoption of electric vehicles (EVs) poses several challenges
to power distribution networks and smart grid infrastructure due to the
possibility of significantly increasing electricity demands, especially during
peak hours. Furthermore, when EVs participate in demand-side management
programs, charging expenses can be reduced by using optimal charging control
policies that fully utilize real-time pricing schemes. However, devising
optimal charging methods and control strategies for EVs is challenging due to
various stochastic and uncertain environmental factors. Currently, most EV
charging controllers operate based on a centralized model. In this paper, we
introduce a novel approach for distributed and cooperative charging strategy
using a Multi-Agent Reinforcement Learning (MARL) framework. Our method is
built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group
of EVs in a residential community, where all EVs are connected to a shared
transformer. This method, referred to as CTDE-DDPG, adopts a Centralized
Training Decentralized Execution (CTDE) approach to establish cooperation
between agents during the training phase, while ensuring a distributed and
privacy-preserving operation during execution. We theoretically examine the
performance of centralized and decentralized critics for the DDPG-based MARL
implementation and demonstrate their trade-offs. Furthermore, we numerically
explore the efficiency, scalability, and performance of centralized and
decentralized critics. Our theoretical and numerical results indicate that,
despite higher policy gradient variances and training complexity, the CTDE-DDPG
framework significantly improves charging efficiency by reducing total
variation by approximately 36% and charging cost by around 9.1% on average...
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 21:50:03 GMT"
}
] | 1,713,744,000,000 | [
[
"Shojaeighadikolaei",
"Amin",
""
],
[
"Talata",
"Zsolt",
""
],
[
"Hashemi",
"Morteza",
""
]
] |
2404.12587 | Ngoc Quach | Ngoc Quach, Qi Wang, Zijun Gao, Qifeng Sun, Bo Guan and Lillian Floyd | Reinforcement Learning Approach for Integrating Compressed Contexts into
Knowledge Graphs | This paper has been accepted by the 2024 International Conference on
Machine Learning and Neural Networks (MLNN 2024) | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | The widespread use of knowledge graphs in various fields has brought about a
challenge in effectively integrating and updating information within them. When
it comes to incorporating contexts, conventional methods often rely on rules or
basic machine learning models, which may not fully grasp the complexity and
fluidity of context information. This research suggests an approach based on
reinforcement learning (RL), specifically utilizing Deep Q Networks (DQN) to
enhance the process of integrating contexts into knowledge graphs. By
considering the state of the knowledge graph as environment states defining
actions as operations for integrating contexts and using a reward function to
gauge the improvement in knowledge graph quality post-integration, this method
aims to automatically develop strategies for optimal context integration. Our
DQN model utilizes neural networks as function approximators, continually
updating Q-values to estimate the action-value function, thus enabling effective
integration of intricate and dynamic context information. Initial experimental
findings show that our RL method outperforms existing techniques in achieving precise
context integration across various standard knowledge graph datasets,
highlighting the potential and effectiveness of reinforcement learning in
enhancing and managing knowledge graphs.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 02:32:43 GMT"
}
] | 1,713,744,000,000 | [
[
"Quach",
"Ngoc",
""
],
[
"Wang",
"Qi",
""
],
[
"Gao",
"Zijun",
""
],
[
"Sun",
"Qifeng",
""
],
[
"Guan",
"Bo",
""
],
[
"Floyd",
"Lillian",
""
]
] |
2404.12605 | Ming Cheng | Ziyi Zhou, Ming Cheng, Xingjian Diao, Yanjun Cui, Xiangling Li | GluMarker: A Novel Predictive Modeling of Glycemic Control Through
Digital Biomarkers | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The escalating prevalence of diabetes globally underscores the need for
diabetes management. Recent research highlights the growing focus on digital
biomarkers in diabetes management, with innovations in computational frameworks
and noninvasive monitoring techniques using personalized glucose metrics.
However, they predominantly focus on insulin dosing and specific glucose
values, with limited attention given to overall glycemic control. This
leaves a gap in expanding the scope of digital biomarkers for overall glycemic
control in diabetes management. To address such a research gap, we propose
GluMarker -- an end-to-end framework for modeling digital biomarkers using
broader factor sources to predict glycemic control. Through the assessment and
refinement of various machine learning baselines, GluMarker achieves
state-of-the-art performance on Anderson's dataset in predicting next-day glycemic control.
Moreover, our research identifies key digital biomarkers for the next day's
glycemic control prediction. These identified biomarkers are instrumental in
illuminating the daily factors that influence glycemic management, offering
vital insights for diabetes care.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 03:15:50 GMT"
}
] | 1,713,744,000,000 | [
[
"Zhou",
"Ziyi",
""
],
[
"Cheng",
"Ming",
""
],
[
"Diao",
"Xingjian",
""
],
[
"Cui",
"Yanjun",
""
],
[
"Li",
"Xiangling",
""
]
] |
2404.12638 | Zhihai Wang | Jie Wang, Zhihai Wang, Xijun Li, Yufei Kuang, Zhihao Shi, Fangzhou
Zhu, Mingxuan Yuan, Jia Zeng, Yongdong Zhang, Feng Wu | Learning to Cut via Hierarchical Sequence/Set Model for Efficient
Mixed-Integer Programming | arXiv admin note: substantial text overlap with arXiv:2302.00244 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cutting planes (cuts) play an important role in solving mixed-integer linear
programs (MILPs), which formulate many important real-world applications. Cut
selection heavily depends on (P1) which cuts to prefer and (P2) how many cuts
to select. Although modern MILP solvers tackle (P1)-(P2) by human-designed
heuristics, machine learning carries the potential to learn more effective
heuristics. However, many existing learning-based methods learn which cuts to
prefer, neglecting the importance of learning how many cuts to select.
Moreover, we observe that (P3) what order of selected cuts to prefer
significantly impacts the efficiency of MILP solvers as well. To address these
challenges, we propose a novel hierarchical sequence/set model (HEM) to learn
cut selection policies. Specifically, HEM is a bi-level model: (1) a
higher-level module that learns how many cuts to select, and (2) a lower-level
module -- that formulates the cut selection as a sequence/set to sequence
learning problem -- to learn policies selecting an ordered subset with the
cardinality determined by the higher-level module. To the best of our
knowledge, HEM is the first data-driven methodology that well tackles (P1)-(P3)
simultaneously. Experiments demonstrate that HEM significantly improves the
efficiency of solving MILPs on eleven challenging MILP benchmarks, including
two Huawei's real problems.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 05:40:25 GMT"
}
] | 1,713,744,000,000 | [
[
"Wang",
"Jie",
""
],
[
"Wang",
"Zhihai",
""
],
[
"Li",
"Xijun",
""
],
[
"Kuang",
"Yufei",
""
],
[
"Shi",
"Zhihao",
""
],
[
"Zhu",
"Fangzhou",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Zeng",
"Jia",
""
],
[
"Zhang",
"Yongdong",
""
],
[
"Wu",
"Feng",
""
]
] |
2404.12653 | Dren Fazlija | Dren Fazlija, Arkadij Orlov, Johanna Schrader, Monty-Maximilian
Z\"uhlke, Michael Rohs and Daniel Kudenko | How Real Is Real? A Human Evaluation Framework for Unrestricted
Adversarial Examples | 3 pages, 3 figures, AAAI 2024 Spring Symposium on User-Aligned
Assessment of Adaptive AI Systems | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With an ever-increasing reliance on machine learning (ML) models in the real
world, adversarial examples threaten the safety of AI-based systems such as
autonomous vehicles. In the image domain, they represent maliciously perturbed
data points that look benign to humans (i.e., the image modification is not
noticeable) but greatly mislead state-of-the-art ML models. Previously,
researchers ensured the imperceptibility of their altered data points by
restricting perturbations via $\ell_p$ norms. However, recent publications
claim that creating natural-looking adversarial examples without such
restrictions is also possible. With much more freedom to instill malicious
information into data, these unrestricted adversarial examples can potentially
overcome traditional defense strategies as they are not constrained by the
limitations or patterns these defenses typically recognize and mitigate. This
allows attackers to operate outside of expected threat models. However,
surveying existing image-based methods, we noticed a need for more human
evaluations of the proposed image modifications. Based on existing
human-assessment frameworks for image generation quality, we propose SCOOTER -
an evaluation framework for unrestricted image-based attacks. It provides
researchers with guidelines for conducting statistically significant human
experiments, standardized questions, and a ready-to-use implementation. We
propose a framework that allows researchers to analyze how imperceptible their
unrestricted attacks truly are.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 06:42:01 GMT"
}
] | 1,713,744,000,000 | [
[
"Fazlija",
"Dren",
""
],
[
"Orlov",
"Arkadij",
""
],
[
"Schrader",
"Johanna",
""
],
[
"Zühlke",
"Monty-Maximilian",
""
],
[
"Rohs",
"Michael",
""
],
[
"Kudenko",
"Daniel",
""
]
] |
2404.12704 | Haoyu Sun | Jiazhu Dai, Haoyu Sun | A Clean-graph Backdoor Attack against Graph Convolutional Networks with
Poisoned Label Only | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Convolutional Networks (GCNs) have shown excellent performance in
dealing with various graph structures such as node classification, graph
classification, and other tasks. However, recent studies have shown that GCNs are
vulnerable to a novel threat known as backdoor attacks. Yet all existing
backdoor attacks in the graph domain require modifying the training samples to
accomplish the backdoor injection, which may not be practical in many realistic
scenarios where adversaries have no access to modify the training samples and
may lead to the backdoor attack being detected easily. In order to explore the
backdoor vulnerability of GCNs and create a more practical and stealthy
backdoor attack method, this paper proposes a clean-graph backdoor attack
against GCNs (CBAG) in the node classification task, which only poisons the
training labels without any modification to the training samples, revealing
that GCNs have this security vulnerability. Specifically, CBAG designs a new
trigger exploration method to find important feature dimensions as the trigger
patterns to improve the attack performance. By poisoning the training labels, a
hidden backdoor is injected into the GCNs model. Experimental results show that
our clean-graph backdoor can achieve a 99% attack success rate while maintaining
the functionality of the GCNs model on benign samples.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 08:21:54 GMT"
}
] | 1,713,744,000,000 | [
[
"Dai",
"Jiazhu",
""
],
[
"Sun",
"Haoyu",
""
]
] |
2404.12926 | Janak Kapuriya | Avinash Anand, Janak Kapuriya, Chhavi Kirtani, Apoorv Singh, Jay
Saraf, Naman Lal, Jatin Kumar, Adarsh Raj Shivam, Astha Verma, Rajiv Ratn
Shah, Roger Zimmermann | MM-PhyRLHF: Reinforcement Learning Framework for Multimodal Physics
Question-Answering | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in LLMs have shown their significant potential in tasks
like text summarization and generation. Yet, they often encounter difficulty
while solving complex physics problems that require arithmetic calculation and
a good understanding of concepts. Moreover, many physics problems include
images that contain important details required to understand the problem's
context. We propose an LMM-based chatbot to answer multimodal physics MCQs. For
domain adaptation, we utilize the MM-PhyQA dataset comprising Indian high
school-level multimodal physics problems. To improve the LMM's performance, we
experiment with two techniques, RLHF (Reinforcement Learning from Human
Feedback) and Image Captioning. In image captioning, we add a detailed
explanation of the diagram in each image, minimizing hallucinations and image
processing errors. We further explore the integration of Reinforcement Learning
from Human Feedback (RLHF) methodology inspired by the ranking approach in RLHF
to enhance the human-like problem-solving abilities of the models. The RLHF
approach incorporates human feedback into the learning process of LLMs,
improving the model's problem-solving skills, truthfulness, and reasoning
capabilities, minimizing the hallucinations in the answers, and improving the
quality instead of using vanilla-supervised fine-tuned models. We employ the
LLaVA open-source model to answer multimodal physics MCQs and compare the
performance with and without using RLHF.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2024 14:52:57 GMT"
}
] | 1,713,744,000,000 | [
[
"Anand",
"Avinash",
""
],
[
"Kapuriya",
"Janak",
""
],
[
"Kirtani",
"Chhavi",
""
],
[
"Singh",
"Apoorv",
""
],
[
"Saraf",
"Jay",
""
],
[
"Lal",
"Naman",
""
],
[
"Kumar",
"Jatin",
""
],
[
"Shivam",
"Adarsh Raj",
""
],
[
"Verma",
"Astha",
""
],
[
"Shah",
"Rajiv Ratn",
""
],
[
"Zimmermann",
"Roger",
""
]
] |
2404.13501 | Zeyu Zhang | Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming
Zhu, Zhenhua Dong, Ji-Rong Wen | A Survey on the Memory Mechanism of Large Language Model based Agents | 39 pages, 5 figures, 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language model (LLM) based agents have recently attracted much
attention from the research and industry communities. Compared with original
LLMs, LLM-based agents are featured in their self-evolving capability, which is
the basis for solving real-world problems that need long-term and complex
agent-environment interactions. The key component to support agent-environment
interactions is the memory of the agents. While previous studies have proposed
many promising memory mechanisms, they are scattered in different papers, and
there lacks a systematical review to summarize and compare these works from a
holistic perspective, failing to abstract common and effective designing
patterns for inspiring future studies. To bridge this gap, in this paper, we
propose a comprehensive survey on the memory mechanism of LLM-based agents. In
specific, we first discuss ''what is'' and ''why do we need'' the memory in
LLM-based agents. Then, we systematically review previous studies on how to
design and evaluate the memory module. In addition, we also present many agent
applications, where the memory module plays an important role. At last, we
analyze the limitations of existing work and show important future directions.
To keep up with the latest advances in this field, we create a repository at
\url{https://github.com/nuster1128/LLM_Agent_Memory_Survey}.
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2024 01:49:46 GMT"
}
] | 1,713,830,400,000 | [
[
"Zhang",
"Zeyu",
""
],
[
"Bo",
"Xiaohe",
""
],
[
"Ma",
"Chen",
""
],
[
"Li",
"Rui",
""
],
[
"Chen",
"Xu",
""
],
[
"Dai",
"Quanyu",
""
],
[
"Zhu",
"Jieming",
""
],
[
"Dong",
"Zhenhua",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
2404.13567 | Rushrukh Rayan | Abhilekha Dalal, Rushrukh Rayan, Adrita Barua, Eugene Y. Vasserman, Md
Kamruzzaman Sarker, Pascal Hitzler | On the Value of Labeled Data and Symbolic Methods for Hidden Neuron
Activation Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in Explainable AI is in correctly interpreting activations
of hidden neurons: accurate interpretations would help answer the question of
what a deep learning system internally detects as relevant in the input,
demystifying the otherwise black-box nature of deep learning systems. The state
of the art indicates that hidden node activations can, in some cases, be
interpretable in a way that makes sense to humans, but systematic automated
methods that would be able to hypothesize and verify interpretations of hidden
neuron activations are underexplored. This is particularly the case for
approaches that can both draw explanations from substantial background
knowledge, and that are based on inherently explainable (symbolic) methods.
In this paper, we introduce a novel model-agnostic post-hoc Explainable AI
method demonstrating that it provides meaningful interpretations. Our approach
is based on using a Wikipedia-derived concept hierarchy with approximately 2
million classes as background knowledge, and utilizes OWL-reasoning-based
Concept Induction for explanation generation. Additionally, we explore and
compare the capabilities of off-the-shelf pre-trained multimodal-based
explainable methods.
Our results indicate that our approach can automatically attach meaningful
class expressions as explanations to individual neurons in the dense layer of a
Convolutional Neural Network. Evaluation through statistical analysis and
degree of concept activation in the hidden layer show that our method provides
a competitive edge in both quantitative and qualitative aspects compared to
prior work.
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2024 07:57:45 GMT"
}
] | 1,713,830,400,000 | [
[
"Dalal",
"Abhilekha",
""
],
[
"Rayan",
"Rushrukh",
""
],
[
"Barua",
"Adrita",
""
],
[
"Vasserman",
"Eugene Y.",
""
],
[
"Sarker",
"Md Kamruzzaman",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
2404.13778 | Pakizar Shamoi Dr | Adilet Yerkin, Elnara Kadyrgali, Yerdauit Torekhan, Pakizar Shamoi | Multi-channel Emotion Analysis for Consensus Reaching in Group Movie
Recommendation Systems | the paper has been submitted for consideration to IEEE | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Watching movies is one of the social activities typically done in groups.
Emotion is the most vital factor that affects movie viewers' preferences. So,
the emotional aspect of the movie needs to be determined and analyzed for
further recommendations. It can be challenging to choose a movie that appeals
to the emotions of a diverse group. Reaching an agreement for a group can be
difficult due to the various genres and choices. This paper proposes a novel
approach to group movie suggestions by examining emotions from three different
channels: movie descriptions (text), soundtracks (audio), and posters (image).
We employ the Jaccard similarity index to match each participant's emotional
preferences to prospective movie choices, followed by a fuzzy inference
technique to determine group consensus. We use a weighted integration process
for the fusion of emotion scores from diverse data types. Then, group movie
recommendation is based on prevailing emotions and viewers' best-loved movies.
After determining the recommendations, the group's consensus level is
calculated using a fuzzy inference system, taking participants' feedback as
input. Participants (n=130) in the survey were provided with different emotion
categories and asked to select the emotions best suited for particular movies
(n=12). Comparison results between predicted and actual scores demonstrate the
efficiency of using emotion detection for this problem (Jaccard similarity
index = 0.76). We explored the relationship between induced emotions and movie
popularity as an additional experiment, analyzing emotion distribution in 100
popular movies from the TMDB database. Such systems can potentially improve the
accuracy of movie recommendation systems and achieve a high level of consensus
among participants with diverse preferences.
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2024 21:19:31 GMT"
}
] | 1,713,830,400,000 | [
[
"Yerkin",
"Adilet",
""
],
[
"Kadyrgali",
"Elnara",
""
],
[
"Torekhan",
"Yerdauit",
""
],
[
"Shamoi",
"Pakizar",
""
]
] |
2404.14082 | Leonard Bereska | Leonard Bereska and Efstratios Gavves | Mechanistic Interpretability for AI Safety -- A Review | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Understanding AI systems' inner workings is critical for ensuring value
alignment and safety. This review explores mechanistic interpretability:
reverse-engineering the computational mechanisms and representations learned by
neural networks into human-understandable algorithms and concepts to provide a
granular, causal understanding. We establish foundational concepts such as
features encoding knowledge within neural activations and hypotheses about
their representation and computation. We survey methodologies for causally
dissecting model behaviors and assess the relevance of mechanistic
interpretability to AI safety. We investigate challenges surrounding
scalability, automation, and comprehensive interpretation. We advocate for
clarifying concepts, setting standards, and scaling techniques to handle
complex models and behaviors and expand to domains such as vision and
reinforcement learning. Mechanistic interpretability could help prevent
catastrophic outcomes as AI systems become more powerful and inscrutable.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2024 11:01:51 GMT"
}
] | 1,713,830,400,000 | [
[
"Bereska",
"Leonard",
""
],
[
"Gavves",
"Efstratios",
""
]
] |
2404.14304 | Xiang Yin | Xiang Yin, Nico Potyka, Francesca Toni | Explaining Arguments' Strength: Unveiling the Role of Attacks and
Supports (Technical Report) | This paper has been accepted at IJCAI 2024 (the 33rd International
Joint Conference on Artificial Intelligence) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Quantitatively explaining the strength of arguments under gradual semantics
has recently received increasing attention. Specifically, several works in the
literature provide quantitative explanations by computing the attribution
scores of arguments. These works disregard the importance of attacks and
supports, even though they play an essential role when explaining arguments'
strength. In this paper, we propose a novel theory of Relation Attribution
Explanations (RAEs), adapting Shapley values from game theory to offer
fine-grained insights into the role of attacks and supports in quantitative
bipolar argumentation towards obtaining the arguments' strength. We show that
RAEs satisfy several desirable properties. We also propose a probabilistic
algorithm to approximate RAEs efficiently. Finally, we show the application
value of RAEs in fraud detection and large language models case studies.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2024 16:02:48 GMT"
},
{
"version": "v2",
"created": "Fri, 10 May 2024 17:37:43 GMT"
}
] | 1,715,558,400,000 | [
[
"Yin",
"Xiang",
""
],
[
    "Potyka",
    "Nico",
""
],
[
"Toni",
"Francesca",
""
]
] |
2404.14450 | Sefika Efeoglu | Sefika Efeoglu | GraphMatcher: A Graph Representation Learning Approach for Ontology
Matching | The 17th International Workshop on Ontology Matching, The 21st
International Semantic Web Conference (ISWC) 2022, 23 October 2022, Hangzhou,
China | The 17th International Workshop on Ontology Matching, The 21st
International Semantic Web Conference (ISWC) 2022, 23 October 2022, Hangzhou,
China | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontology matching is defined as finding a relationship or correspondence
between two or more entities in two or more ontologies. To solve the
interoperability problem of the domain ontologies, semantically similar
entities in these ontologies must be found and aligned before merging them.
GraphMatcher, developed in this study, is an ontology matching system using a
graph attention approach to compute higher-level representation of a class
together with its surrounding terms. GraphMatcher has obtained remarkable
results in the Ontology Alignment Evaluation Initiative (OAEI) 2022
conference track. Its code is available at
\url{https://github.com/sefeoglu/gat_ontology_matching}.
| [
{
"version": "v1",
"created": "Sat, 20 Apr 2024 18:30:17 GMT"
}
] | 1,713,916,800,000 | [
[
"Efeoglu",
"Sefika",
""
]
] |
2404.15184 | Kelsey Sikes | Kelsey Sikes, Sarah Keren, Sarath Sreedharan | Reducing Human-Robot Goal State Divergence with Environment Design | 8 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most difficult challenges in creating successful human-AI
collaborations is aligning a robot's behavior with a human user's expectations.
When this fails to occur, a robot may misinterpret their specified goals,
prompting it to perform actions with unanticipated, potentially dangerous side
effects. To avoid this, we propose a new metric we call Goal State Divergence
$\mathcal{(GSD)}$, which represents the difference between a robot's final goal
state and the one a human user expected. In cases where $\mathcal{GSD}$ cannot
be directly calculated, we show how it can be approximated using maximal and
minimal bounds. We then input the $\mathcal{GSD}$ value into our novel
human-robot goal alignment (HRGA) design problem, which identifies a minimal
set of environment modifications that can prevent mismatches like this. To show
the effectiveness of $\mathcal{GSD}$ for reducing differences between
human-robot goal states, we empirically evaluate our approach on several
standard benchmarks.
| [
{
"version": "v1",
"created": "Wed, 10 Apr 2024 20:36:04 GMT"
}
] | 1,713,916,800,000 | [
[
"Sikes",
"Kelsey",
""
],
[
"Keren",
"Sarah",
""
],
[
"Sreedharan",
"Sarath",
""
]
] |
2404.15189 | Xiaoyun Chang | Xiaoyun Chang and Yi Sun | Text2Grasp: Grasp synthesis by text prompts of object grasping parts | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hand plays a pivotal role in human ability to grasp and manipulate
objects and controllable grasp synthesis is the key for successfully performing
downstream tasks. Existing methods that use human intention or task-level
language as control signals for grasping inherently face ambiguity. To address
this challenge, we propose a grasp synthesis method guided by text prompts of
object grasping parts, Text2Grasp, which provides more precise control.
Specifically, we present a two-stage method that includes a text-guided
diffusion model TextGraspDiff to first generate a coarse grasp pose, then apply
a hand-object contact optimization process to ensure both plausibility and
diversity. Furthermore, by leveraging Large Language Model, our method
facilitates grasp synthesis guided by task-level and personalized text
descriptions without additional manual annotations. Extensive experiments
demonstrate that our method achieves not only accurate part-level grasp control
but also comparable performance in grasp quality.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2024 10:57:27 GMT"
}
] | 1,713,916,800,000 | [
[
"Chang",
"Xiaoyun",
""
],
[
"Sun",
"Yi",
""
]
] |
2404.15192 | Yuchen Li | Yuchen Li, Ziqi Wang, Qingquan Zhang, Jialin Liu | Measuring Diversity of Game Scenarios | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This survey comprehensively reviews the multi-dimensionality of game scenario
diversity, spotlighting the innovative use of procedural content generation and
other fields as cornerstones for enriching player experiences through diverse
game scenarios. By traversing a wide array of disciplines, from affective
modeling and multi-agent systems to psychological studies, our research
underscores the importance of diverse game scenarios in gameplay and education.
Through a taxonomy of diversity metrics and evaluation methods, we aim to
bridge the current gaps in literature and practice, offering insights into
effective strategies for measuring and integrating diversity in game scenarios.
Our analysis highlights the necessity for a unified taxonomy to aid developers
and researchers in crafting more engaging and varied game worlds. This survey
not only charts a path for future research in diverse game scenarios but also
serves as a handbook for industry practitioners seeking to leverage diversity
as a key component of game design and development.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 07:59:52 GMT"
}
] | 1,713,916,800,000 | [
[
"Li",
"Yuchen",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Zhang",
"Qingquan",
""
],
[
"Liu",
"Jialin",
""
]
] |
2404.15492 | Ioannis Kavouras A | Ioannis Kavouras, Ioannis Rallis, Emmanuel Sardis, Eftychios
Protopapadakis, Anastasios Doulamis and Nikolaos Doulamis | Multi-scale Intervention Planning based on Generative Design | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The scarcity of green spaces in urban environments constitutes a critical
challenge, with multiple adverse effects on the health and well-being of
citizens. Small-scale interventions, e.g. pocket parks, are a viable
solution, but come with multiple constraints involving the design and
implementation over a specific area. In this study, we harness the capabilities
of generative AI for multi-scale intervention planning, focusing on
nature-based solutions (NBS). By leveraging image-to-image and image
inpainting algorithms,
we propose a methodology to address the green space deficit in urban areas.
Focusing on two alleys in Thessaloniki, where greenery is lacking, we
demonstrate the efficacy of our approach in visualizing NBS interventions. Our
findings underscore the transformative potential of emerging technologies in
shaping the future of urban intervention planning processes.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 20:06:56 GMT"
}
] | 1,714,003,200,000 | [
[
"Kavouras",
"Ioannis",
""
],
[
"Rallis",
"Ioannis",
""
],
[
"Sardis",
"Emmanuel",
""
],
[
"Protopapadakis",
"Eftychios",
""
],
[
"Doulamis",
"Anastasios",
""
],
[
"Doulamis",
"Nikolaos",
""
]
] |
2404.15583 | Sarah Keren | Sarah Keren and Chaimaa Essayeh and Stefano V. Albrecht and Thomas
Morstyn | Multi-Agent Reinforcement Learning for Energy Networks: Computational
Challenges, Progress and Open Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapidly changing architecture and functionality of electrical networks
and the increasing penetration of renewable and distributed energy resources
have resulted in various technological and managerial challenges. These have
rendered traditional centralized energy-market paradigms insufficient due to
their inability to support the dynamic and evolving nature of the network. This
survey explores how multi-agent reinforcement learning (MARL) can support the
decentralization and decarbonization of energy networks and mitigate the
associated challenges. This is achieved by specifying key computational
challenges in managing energy networks, reviewing recent research progress on
addressing them, and highlighting open challenges that may be addressed using
MARL.
| [
{
"version": "v1",
"created": "Wed, 24 Apr 2024 01:35:27 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Apr 2024 11:39:54 GMT"
},
{
"version": "v3",
"created": "Sat, 25 May 2024 05:10:30 GMT"
}
] | 1,716,854,400,000 | [
[
"Keren",
"Sarah",
""
],
[
"Essayeh",
"Chaimaa",
""
],
[
"Albrecht",
"Stefano V.",
""
],
[
"Morstyn",
"Thomas",
""
]
] |
2404.16055 | Fabio Caraffini PhD | Juan F. P\'erez-P\'erez, Pablo Isaza G\'omez, Isis Bonet, Mar\'ia
Solange S\'anchez-Pinz\'on, Fabio Caraffini, Christian Lochmuller | Assessing Climate Transition Risks in the Colombian Processed Food
Sector: A Fuzzy Logic and Multicriteria Decision-Making Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Climate risk assessment is becoming increasingly important. For
organisations, identifying and assessing climate-related risks is challenging,
as they can come from multiple sources. This study identifies and assesses the
main climate transition risks in the Colombian processed food sector. As
transition risks are vague, our approach uses Fuzzy Logic and compares it to
various multi-criteria decision-making methods to classify the different
climate transition risks an organisation may be exposed to. This approach
allows us to use linguistic expressions for risk analysis and to better
describe risks and their consequences. The results show that the risks ranked
as most critical for this organisation were, in order, price volatility
and raw materials availability, the change to less carbon-intensive production
or consumption patterns, the increase in carbon taxes and technological change,
and the associated development or implementation costs. These risks show a
critical risk level, which implies that they are the most significant risks for
the organisation in the case study. These results highlight the importance of
investments needed to meet regulatory requirements, which are the main drivers
for organisations at the financial level.
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2024 21:49:49 GMT"
}
] | 1,714,089,600,000 | [
[
"Pérez-Pérez",
"Juan F.",
""
],
[
"Gómez",
"Pablo Isaza",
""
],
[
"Bonet",
"Isis",
""
],
[
"Sánchez-Pinzón",
"María Solange",
""
],
[
"Caraffini",
"Fabio",
""
],
[
"Lochmuller",
"Christian",
""
]
] |
2404.16364 | Chunyu Xuan | Chunyu Xuan, Yazhe Niu, Yuan Pu, Shuai Hu, Yu Liu, Jing Yang | ReZero: Boosting MCTS-based Algorithms by Backward-view and
Entire-buffer Reanalyze | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Monte Carlo Tree Search (MCTS)-based algorithms, such as MuZero and its
derivatives, have achieved widespread success in various decision-making
domains. These algorithms employ the reanalyze process to enhance sample
efficiency from stale data, albeit at the expense of significant wall-clock
time consumption. To address this issue, we propose a general approach named
ReZero to boost tree search operations for MCTS-based algorithms. Specifically,
drawing inspiration from the one-armed bandit model, we reanalyze training
samples through a backward-view reuse technique which obtains the value
estimation of a certain child node in advance. To further adapt to this design,
we periodically reanalyze the entire buffer instead of frequently reanalyzing
the mini-batch. The synergy of these two designs can significantly reduce the
search cost and meanwhile guarantee or even improve performance, simplifying
both data collecting and reanalyzing. Experiments conducted on Atari
environments and board games demonstrate that ReZero substantially improves
training speed while maintaining high sample efficiency. The code is available
as part of the LightZero benchmark at https://github.com/opendilab/LightZero.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 07:02:07 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Apr 2024 06:21:04 GMT"
},
{
"version": "v3",
"created": "Tue, 28 May 2024 05:49:18 GMT"
}
] | 1,716,940,800,000 | [
[
"Xuan",
"Chunyu",
""
],
[
"Niu",
"Yazhe",
""
],
[
"Pu",
"Yuan",
""
],
[
"Hu",
"Shuai",
""
],
[
"Liu",
"Yu",
""
],
[
"Yang",
"Jing",
""
]
] |
2404.16411 | Wenchuan Mu | Wenchuan Mu and Kwan Hui Lim | Label-Free Topic-Focused Summarization Using Query Augmentation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In today's data and information-rich world, summarization techniques are
essential in harnessing vast text to extract key information and enhance
decision-making and efficiency. In particular, topic-focused summarization is
important due to its ability to tailor content to specific aspects of an
extended text. However, this usually requires extensive labelled datasets and
considerable computational power. This study introduces a novel method,
Augmented-Query Summarization (AQS), for topic-focused summarization without
the need for extensive labelled datasets, leveraging query augmentation and
hierarchical clustering. This approach facilitates the transferability of
machine learning models to the task of summarization, circumventing the need
for topic-specific training. Through real-world tests, our method demonstrates
the ability to generate relevant and accurate summaries, showing its potential
as a cost-effective solution in data-rich environments. This innovation paves
the way for broader application and accessibility in the field of topic-focused
summarization technology, offering a scalable, efficient method for
personalized content extraction.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 08:39:10 GMT"
}
] | 1,714,089,600,000 | [
[
"Mu",
"Wenchuan",
""
],
[
"Lim",
"Kwan Hui",
""
]
] |
2404.16689 | Martin Schmid | Radovan Haluska, Martin Schmid | Learning to Beat ByteRL: Exploitability of Collectible Card Game Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Poker, as a family of games, has been studied extensively in the last
decades, collectible card games have seen relatively little attention. Only
recently have we seen an agent that can compete with professional human players
in Hearthstone, one of the most popular collectible card games. Although
artificial agents must be able to work with imperfect information in both of
these genres, collectible card games pose another set of distinct challenges.
Unlike in many poker variants, agents must deal with state space so vast that
even enumerating all states consistent with the agent's beliefs is intractable,
rendering the current search methods unusable and requiring the agents to opt
for other techniques. In this paper, we investigate the strength of such
techniques for this class of games. Namely, we present preliminary analysis
results of ByteRL, the state-of-the-art agent in Legends of Code and Magic and
Hearthstone. Although ByteRL beat a top-10 Hearthstone player from China, we
show that its play in Legends of Code and Magic is highly exploitable.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 15:48:40 GMT"
}
] | 1,714,089,600,000 | [
[
"Haluska",
"Radovan",
""
],
[
"Schmid",
"Martin",
""
]
] |
2404.17129 | Juan Colonna | Juan G. Colonna, Ahmed A. Fares, M\'arcio Duarte, Ricardo Sousa | Process Mining Embeddings: Learning Vector Representations for Petri
Nets | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Process mining offers powerful techniques for discovering, analyzing, and
enhancing real-world business processes. In this context, Petri nets provide an
expressive means of modeling process behavior. However, directly analyzing and
comparing intricate Petri nets presents challenges. This study introduces
PetriNet2Vec, a novel unsupervised methodology based on Natural Language
Processing concepts inspired by Doc2Vec and designed to facilitate the
effective comparison, clustering, and classification of process models
represented as embedding vectors. These embedding vectors allow us to quantify
similarities and relationships between different process models. Our
methodology was experimentally validated using the PDC Dataset, featuring 96
diverse Petri net models. We performed cluster analysis, created UMAP
visualizations, and trained a decision tree to provide compelling evidence for
the capability of PetriNet2Vec to discern meaningful patterns and relationships
among process models and their constituent tasks. Through a series of
experiments, we demonstrated that PetriNet2Vec was capable of learning the
structure of Petri nets, as well as the main properties used to simulate the
process models of our dataset. Furthermore, our results showcase the utility of
the learned embeddings in two crucial downstream tasks within process mining
enhancement: process classification and process retrieval.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2024 03:07:32 GMT"
},
{
"version": "v2",
"created": "Fri, 3 May 2024 13:33:59 GMT"
}
] | 1,714,953,600,000 | [
[
"Colonna",
"Juan G.",
""
],
[
"Fares",
"Ahmed A.",
""
],
[
"Duarte",
"Márcio",
""
],
[
"Sousa",
"Ricardo",
""
]
] |
2404.17316 | Hannes Ihalainen | Hannes Ihalainen, Andy Oertel, Yong Kiam Tan, Jeremias Berg, Matti
J\"arvisalo, Jakob Nordstr\"om | Certified MaxSAT Preprocessing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building on the progress in Boolean satisfiability (SAT) solving over the
last decades, maximum satisfiability (MaxSAT) has become a viable approach for
solving NP-hard optimization problems, but ensuring correctness of MaxSAT
solvers has remained an important concern. For SAT, this is largely a solved
problem thanks to the use of proof logging, meaning that solvers emit
machine-verifiable proofs of (un)satisfiability to certify correctness.
However, for MaxSAT, proof logging solvers have started being developed only
very recently. Moreover, these nascent efforts have only targeted the core
solving process, ignoring the preprocessing phase where input problem instances
can be substantially reformulated before being passed on to the solver proper.
In this work, we demonstrate how pseudo-Boolean proof logging can be used to
certify the correctness of a wide range of modern MaxSAT preprocessing
techniques. By combining and extending the VeriPB and CakePB tools, we provide
formally verified, end-to-end proof checking that the input and preprocessed
output MaxSAT problem instances have the same optimal value. An extensive
evaluation on applied MaxSAT benchmarks shows that our approach is feasible in
practice.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2024 10:55:06 GMT"
}
] | 1,714,348,800,000 | [
[
"Ihalainen",
"Hannes",
""
],
[
"Oertel",
"Andy",
""
],
[
"Tan",
"Yong Kiam",
""
],
[
"Berg",
"Jeremias",
""
],
[
"Järvisalo",
"Matti",
""
],
[
"Nordström",
"Jakob",
""
]
] |
2404.17716 | Andre Beckus | Adis Delanovic, Carmen Chiu, Andre Beckus | Airlift Challenge: A Competition for Optimizing Cargo Delivery | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Airlift operations require the timely distribution of various cargo, much of
which is time sensitive and valuable. However, these operations have to contend
with sudden disruptions from weather and malfunctions, requiring immediate
rescheduling. The Airlift Challenge competition seeks possible solutions via a
simulator that provides a simplified abstraction of the airlift problem. The
simulator uses an OpenAI gym interface that allows participants to create an
algorithm for planning agent actions. The algorithm is scored using a remote
evaluator against scenarios of ever-increasing difficulty. The second iteration
of the competition was underway from November 2023 to April 2024. In this
paper, we describe the competition and simulation environment. As a step
towards applying generalized planning techniques to the problem, we present a
temporal PDDL domain for the Pickup and Delivery Problem, a model which lies at
the core of the Airlift Challenge.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2024 22:30:10 GMT"
}
] | 1,714,435,200,000 | [
[
"Delanovic",
"Adis",
""
],
[
"Chiu",
"Carmen",
""
],
[
"Beckus",
"Andre",
""
]
] |
2404.18262 | Atharva Naik | Atharva Naik, Jessica Ruhan Yin, Anusha Kamath, Qianou Ma, Sherry
Tongshuang Wu, Charles Murray, Christopher Bogart, Majd Sakr, Carolyn P. Rose | Generating Situated Reflection Triggers about Alternative Solution
Paths: A Case Study of Generative AI for Computer-Supported Collaborative
Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An advantage of Large Language Models (LLMs) is their contextualization
capability - providing different responses based on student inputs like
solution strategy or prior discussion, to potentially better engage students
than standard feedback. We present a design and evaluation of a
proof-of-concept LLM application to offer students dynamic and contextualized
feedback. Specifically, we augment an Online Programming Exercise bot for a
college-level Cloud Computing course with ChatGPT, which offers students
contextualized reflection triggers during a collaborative query optimization
task in database design. We demonstrate that LLMs can be used to generate
highly situated reflection triggers that incorporate details of the
collaborative discussion happening in context. We discuss in depth the
exploration of the design space of the triggers and their correspondence with
the learning objectives as well as the impact on student learning in a pilot
study with 34 students.
| [
{
"version": "v1",
"created": "Sun, 28 Apr 2024 17:56:14 GMT"
}
] | 1,714,435,200,000 | [
[
"Naik",
"Atharva",
""
],
[
"Yin",
"Jessica Ruhan",
""
],
[
"Kamath",
"Anusha",
""
],
[
"Ma",
"Qianou",
""
],
[
"Wu",
"Sherry Tongshuang",
""
],
[
"Murray",
"Charles",
""
],
[
"Bogart",
"Christopher",
""
],
[
"Sakr",
"Majd",
""
],
[
"Rose",
"Carolyn P.",
""
]
] |
2404.18672 | Jean-Guy Mailly | Paul Cibier and Jean-Guy Mailly | Graph Convolutional Networks and Graph Attention Networks for
Approximating Arguments Acceptability -- Technical Report | 15 pages, 2 figures. Submitted to the 10th International Conference
on Computational Models of Argument (COMMA 2024) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Various approaches have been proposed for providing efficient computational
approaches for abstract argumentation. Among them, neural networks have
permitted to solve various decision problems, notably related to arguments
(credulous or skeptical) acceptability. In this work, we push further this
study in various ways. First, relying on the state-of-the-art approach AFGCN,
we show how we can improve the performances of the Graph Convolutional Networks
(GCNs) regarding both runtime and accuracy. Then, we show that it is possible
to improve even more the efficiency of the approach by modifying the
architecture of the network, using Graph Attention Networks (GATs) instead.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 13:12:08 GMT"
}
] | 1,714,435,200,000 | [
[
"Cibier",
"Paul",
""
],
[
"Mailly",
"Jean-Guy",
""
]
] |
2404.18766 | Patrick Haller | Patrick Haller, Jonas Golde, Alan Akbik | PECC: Problem Extraction and Coding Challenges | This paper got accepted at LREC-COLING 2024 (long) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in large language models (LLMs) have showcased their
exceptional abilities across various tasks, such as code generation,
problem-solving and reasoning. Existing benchmarks evaluate tasks in isolation,
yet the extent to which LLMs can understand prose-style tasks, identify the
underlying problems, and then generate appropriate code solutions is still
unexplored. Addressing this gap, we introduce PECC, a novel benchmark derived
from Advent Of Code (AoC) challenges and Project Euler, including 2396
problems. Unlike conventional benchmarks, PECC requires LLMs to interpret
narrative-embedded problems, extract requirements, and generate executable
code. A key feature of our dataset is the complexity added by natural language
prompting in chat-based evaluations, mirroring real-world instruction
ambiguities. Results show varying model performance between narrative and
neutral problems, with specific challenges in the Euler math-based subset with
GPT-3.5-Turbo passing 50% of the AoC challenges and only 8% on the Euler
problems. By probing the limits of LLMs' capabilities, our benchmark provides a
framework to monitor and assess the subsequent progress of LLMs as a universal
problem solver.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 15:02:14 GMT"
}
] | 1,714,435,200,000 | [
[
"Haller",
"Patrick",
""
],
[
"Golde",
"Jonas",
""
],
[
"Akbik",
"Alan",
""
]
] |
2404.18982 | Paul Thagard | Paul Thagard | Can ChatGPT Make Explanatory Inferences? Benchmarks for Abductive
Reasoning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Explanatory inference is the creation and evaluation of hypotheses that
provide explanations, and is sometimes known as abduction or abductive
inference. Generative AI is a new set of artificial intelligence models based
on novel algorithms for generating text, images, and sounds. This paper
proposes a set of benchmarks for assessing the ability of AI programs to
perform explanatory inference, and uses them to determine the extent to which
ChatGPT, a leading generative AI model, is capable of making explanatory
inferences. Tests on the benchmarks reveal that ChatGPT performs creative and
evaluative inferences in many domains, although it is limited to verbal and
visual modalities. Claims that ChatGPT and similar models are incapable of
explanation, understanding, causal reasoning, meaning, and creativity are
rebutted.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 15:19:05 GMT"
}
] | 1,714,521,600,000 | [
[
"Thagard",
"Paul",
""
]
] |
2404.19454 | Adam Kypriadis | Adam D. Kypriadis, Isaac E. Lagaris, Aristidis Likas, Konstantinos E.
Parsopoulos | Optimized neural forms for solving ordinary differential equations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A critical issue in approximating solutions of ordinary differential
equations using neural networks is the exact satisfaction of the boundary or
initial conditions. For this purpose, neural forms have been introduced, i.e.,
functional expressions that depend on neural networks which, by design, satisfy
the prescribed conditions exactly. Expanding upon prior progress, the present
work makes three distinct contributions. First, it presents a novel
formalism for crafting optimized neural forms. Second, it outlines a method for
establishing an upper bound on the absolute deviation from the exact solution.
Third, it introduces a technique for converting problems with Neumann or Robin
conditions into equivalent problems with parametric Dirichlet conditions. The
proposed optimized neural forms were numerically tested on a set of diverse
problems, encompassing first-order and second-order ordinary differential
equations, as well as first-order systems. Stiff and delay differential
equations were also considered. The obtained solutions were compared against
solutions obtained via Runge-Kutta methods and exact solutions wherever
available. The reported results and analysis verify that in addition to the
exact satisfaction of the boundary or initial conditions, optimized neural
forms provide closed-form solutions of superior interpolation capability and
controllable overall accuracy.
| [
{
"version": "v1",
"created": "Tue, 30 Apr 2024 11:10:34 GMT"
}
] | 1,714,521,600,000 | [
[
"Kypriadis",
"Adam D.",
""
],
[
"Lagaris",
"Isaac E.",
""
],
[
"Likas",
"Aristidis",
""
],
[
"Parsopoulos",
"Konstantinos E.",
""
]
] |
2404.19485 | Maarten Stol | Maarten C. Stol and Alessandra Mileo | IID Relaxation by Logical Expressivity: A Research Agenda for Fitting
Logics to Neurosymbolic Requirements | 12 pages, 2 figures, submitted to NeSy 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neurosymbolic background knowledge and the expressivity required of its logic
can break Machine Learning assumptions about data Independence and Identical
Distribution. In this position paper we propose to analyze IID relaxation in a
hierarchy of logics that fit different use case requirements. We discuss the
benefits of exploiting known data dependencies and distribution constraints for
Neurosymbolic use cases and argue that the expressivity required for this
knowledge has implications for the design of underlying ML routines. This opens
a new research agenda with general questions about Neurosymbolic background
knowledge and the expressivity required of its logic.
| [
{
"version": "v1",
"created": "Tue, 30 Apr 2024 12:09:53 GMT"
}
] | 1,714,521,600,000 | [
[
"Stol",
"Maarten C.",
""
],
[
"Mileo",
"Alessandra",
""
]
] |
2405.00352 | Zhiyu Fang | Zhiyu Fang, Shuai-Long Lei, Xiaobin Zhu, Chun Yang, Shi-Xue Zhang,
Xu-Cheng Yin, Jingyan Qin | Transformer-based Reasoning for Learning Evolutionary Chain of Events on
Temporal Knowledge Graph | Accepted by SIGIR 2024 (the Full paper track, camera ready version) | null | 10.1145/3626772.3657706 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal Knowledge Graph (TKG) reasoning often involves completing missing
factual elements along the timeline. Although existing methods can learn good
embeddings for each factual element in quadruples by integrating temporal
information, they often fail to infer the evolution of temporal facts. This is
mainly because of (1) insufficiently exploring the internal structure and
semantic relationships within individual quadruples and (2) inadequately
learning a unified representation of the contextual and temporal correlations
among different quadruples. To overcome these limitations, we propose a novel
Transformer-based reasoning model (dubbed ECEformer) for TKG to learn the
Evolutionary Chain of Events (ECE). Specifically, we unfold the neighborhood
subgraph of an entity node in chronological order, forming an evolutionary
chain of events as the input for our model. Subsequently, we utilize a
Transformer encoder to learn the embeddings of intra-quadruples for ECE. We
then craft a mixed-context reasoning module based on the multi-layer perceptron
(MLP) to learn the unified representations of inter-quadruples for ECE while
accomplishing temporal knowledge reasoning. In addition, to enhance the
timeliness of the events, we devise an additional time prediction task to
complete effective temporal information within the learned unified
representation. Extensive experiments on six benchmark datasets verify the
state-of-the-art performance and the effectiveness of our method.
| [
{
"version": "v1",
"created": "Wed, 1 May 2024 07:12:16 GMT"
}
] | 1,714,608,000,000 | [
[
"Fang",
"Zhiyu",
""
],
[
"Lei",
"Shuai-Long",
""
],
[
"Zhu",
"Xiaobin",
""
],
[
"Yang",
"Chun",
""
],
[
"Zhang",
"Shi-Xue",
""
],
[
"Yin",
"Xu-Cheng",
""
],
[
"Qin",
"Jingyan",
""
]
] |
2405.00644 | Robert Moss | Robert J. Moss, Arec Jamgochian, Johannes Fischer, Anthony Corso, and
Mykel J. Kochenderfer | ConstrainedZero: Chance-Constrained POMDP Planning using Learned
Probabilistic Failure Surrogates and Adaptive Safety Constraints | In Proceedings of the 2024 International Joint Conference on
Artificial Intelligence (IJCAI) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | To plan safely in uncertain environments, agents must balance utility with
safety constraints. Safe planning problems can be modeled as a
chance-constrained partially observable Markov decision process (CC-POMDP) and
solutions often use expensive rollouts or heuristics to estimate the optimal
value and action-selection policy. This work introduces the ConstrainedZero
policy iteration algorithm that solves CC-POMDPs in belief space by learning
neural network approximations of the optimal value and policy with an
additional network head that estimates the failure probability given a belief.
This failure probability guides safe action selection during online Monte Carlo
tree search (MCTS). To avoid overemphasizing search based on the failure
estimates, we introduce $\Delta$-MCTS, which uses adaptive conformal inference
to update the failure threshold during planning. The approach is tested on a
safety-critical POMDP benchmark, an aircraft collision avoidance system, and
the sustainability problem of safe CO$_2$ storage. Results show that by
separating safety constraints from the objective we can achieve a target level
of safety without optimizing the balance between rewards and costs.
| [
{
"version": "v1",
"created": "Wed, 1 May 2024 17:17:22 GMT"
}
] | 1,714,608,000,000 | [
[
"Moss",
"Robert J.",
""
],
[
"Jamgochian",
"Arec",
""
],
[
"Fischer",
"Johannes",
""
],
[
"Corso",
"Anthony",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2405.00843 | Sowmya S. Sundaram | Sowmya S Sundaram, Balaji Alwar | Can a Hallucinating Model help in Reducing Human "Hallucination"? | Under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The prevalence of unwarranted beliefs, spanning pseudoscience, logical
fallacies, and conspiracy theories, presents substantial societal hurdles and
the risk of disseminating misinformation. Utilizing established psychometric
assessments, this study explores the capabilities of large language models
(LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls. We
undertake a philosophical inquiry, juxtaposing the rationality of humans
against that of LLMs. Furthermore, we propose methodologies for harnessing LLMs
to counter misconceptions, drawing upon psychological models of persuasion such
as cognitive dissonance theory and elaboration likelihood theory. Through this
endeavor, we highlight the potential of LLMs as personalized misinformation
debunking agents.
| [
{
"version": "v1",
"created": "Wed, 1 May 2024 20:10:44 GMT"
}
] | 1,714,694,400,000 | [
[
"Sundaram",
"Sowmya S",
""
],
[
"Alwar",
"Balaji",
""
]
] |
2405.01394 | Weize Zhang | Weize Zhang, Mohammed Elmahgiubi, Kasra Rezaee, Behzad Khamidehi,
Hamidreza Mirkhani, Fazel Arasteh, Chunlin Li, Muhammad Ahsan Kaleem, Eduardo
R. Corral-Soto, Dhruv Sharma, and Tongtong Cao | Analysis of a Modular Autonomous Driving Architecture: The Top
Submission to CARLA Leaderboard 2.0 Challenge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the architecture of the Kyber-E2E submission to the
map track of CARLA Leaderboard 2.0 Autonomous Driving (AD) challenge 2023,
which achieved first place. We employed a modular architecture for our solution,
consisting of five main components: sensing, localization, perception,
tracking/prediction, and planning/control. Our solution leverages
state-of-the-art language-assisted perception models to help our planner
perform more reliably in highly challenging traffic scenarios. We use
open-source driving datasets in conjunction with Inverse Reinforcement Learning
(IRL) to enhance the performance of our motion planner. We provide insight into
our design choices and trade-offs made to achieve this solution. We also
explore the impact of each component in the overall performance of our
solution, with the intent of providing a guideline where allocation of
resources can have the greatest impact.
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 23:44:19 GMT"
}
] | 1,714,694,400,000 | [
[
"Zhang",
"Weize",
""
],
[
"Elmahgiubi",
"Mohammed",
""
],
[
"Rezaee",
"Kasra",
""
],
[
"Khamidehi",
"Behzad",
""
],
[
"Mirkhani",
"Hamidreza",
""
],
[
"Arasteh",
"Fazel",
""
],
[
"Li",
"Chunlin",
""
],
[
"Kaleem",
"Muhammad Ahsan",
""
],
[
"Corral-Soto",
"Eduardo R.",
""
],
[
"Sharma",
"Dhruv",
""
],
[
"Cao",
"Tongtong",
""
]
] |
2405.01398 | Brandon Colelough | Brandon Curtis Colelough | Advancing Frontiers in SLAM: A Survey of Symbolic Representation and
Human-Machine Teaming in Environmental Mapping | 8 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This survey paper presents a comprehensive overview of the latest
advancements in the field of Simultaneous Localization and Mapping (SLAM) with
a focus on the integration of symbolic representation of environment features.
The paper synthesizes research trends in multi-agent systems (MAS) and
human-machine teaming, highlighting their applications in both symbolic and
sub-symbolic SLAM tasks. The survey emphasizes the evolution and significance
of ontological designs and symbolic reasoning in creating sophisticated 2D and
3D maps of various environments. Central to this review is the exploration of
different architectural approaches in SLAM, with a particular interest in the
functionalities and applications of edge and control agent architectures in MAS
settings. This study acknowledges the growing demand for enhanced human-machine
collaboration in mapping tasks and examines how these collaborative efforts
improve the accuracy and efficiency of environmental mapping.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 00:48:48 GMT"
}
] | 1,714,694,400,000 | [
[
"Colelough",
"Brandon Curtis",
""
]
] |
2405.01797 | Tian Xie | Tian Xie, Zhiqun Zuo, Mohammad Mahdi Khalili, Xueru Zhang | Learning under Imitative Strategic Behavior with Unforeseeable Outcomes | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning systems have been widely used to make decisions about
individuals who may best respond and behave strategically to receive favorable
outcomes, e.g., they may genuinely improve the true labels or manipulate
observable features directly to game the system without changing labels.
Although both behaviors have been studied (often as two separate problems) in
the literature, most works assume individuals can (i) perfectly foresee the
outcomes of their behaviors when they best respond; (ii) change their features
arbitrarily as long as it is affordable, and the costs they need to pay are
deterministic functions of feature changes. In this paper, we consider a
different setting and focus on imitative strategic behaviors with unforeseeable
outcomes, i.e., individuals manipulate/improve by imitating the features of
those with positive labels, but the induced feature changes are unforeseeable.
We first propose a Stackelberg game to model the interplay between individuals
and the decision-maker, under which we examine how the decision-maker's ability
to anticipate individual behavior affects its objective function and the
individual's best response. We show that the objective difference between the
two can be decomposed into three interpretable terms, with each representing
the decision-maker's preference for a certain behavior. By exploring the roles
of each term, we further illustrate how a decision-maker with adjusted
preferences can simultaneously disincentivize manipulation, incentivize
improvement, and promote fairness.
| [
{
"version": "v1",
"created": "Fri, 3 May 2024 00:53:58 GMT"
}
] | 1,714,953,600,000 | [
[
"Xie",
"Tian",
""
],
[
"Zuo",
"Zhiqun",
""
],
[
"Khalili",
"Mohammad Mahdi",
""
],
[
"Zhang",
"Xueru",
""
]
] |
2405.01840 | Herbert Roitblat | Herbert L. Roitblat | An Essay concerning machine understanding | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence systems exhibit many useful capabilities, but they
appear to lack understanding. This essay describes how we could go about
constructing a machine capable of understanding. As John Locke (1689) pointed
out, words are signs for ideas, which we can paraphrase as thoughts and
concepts. To understand a word is to know and be able to work with the
underlying concepts for which it is an indicator. Understanding between a
speaker and a listener occurs when the speaker casts his or her concepts into
words and the listener recovers approximately those same concepts. Current
models rely on the listener to construct any potential meaning. The diminution
of behaviorism as a psychological paradigm and the rise of cognitivism provide
examples of many experimental methods that can be used to determine whether and
to what extent a machine might understand and to make suggestions about how
that understanding might be instantiated.
| [
{
"version": "v1",
"created": "Fri, 3 May 2024 04:12:43 GMT"
}
] | 1,714,953,600,000 | [
[
"Roitblat",
"Herbert L.",
""
]
] |
2405.02324 | Rolin Gabriel RASOANAIVO | R\^olin Gabriel Rasoanaivo (IRIT, UT Capitole, IRIT-ADRIA), Morteza
Yazdani (UIV), Pascale Zarat\'e (IRIT, UT Capitole, IRIT-ADRIA), Amirhossein
Fateh (UPV) | Combined Compromise for Ideal Solution (CoCoFISo): a multi-criteria
decision-making based on the CoCoSo method algorithm | Expert Systems with Applications, 2024 | null | 10.1016/j.eswa.2024.124079 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Each decision-making tool should be tested and validated in real case studies
to be practical and fit to global problems. The application of multi-criteria
decision-making methods (MCDM) is currently a trend to rank alternatives. In
the literature, there are several multi-criteria decision-making methods
according to their classification. During our experimentation on the Combined
Compromise Solution (CoCoSo) method, we encountered its limits for real cases.
The authors examined the applicability of the CoCoFISo method (an improved
version of the combined compromise solution) through a real case study on a university campus
and compared the obtained results to other MCDMs such as Preference Ranking
Organisation Method for Enrichment Evaluations (PROMETHEE), Weighted Sum Method
(WSM) and Technique for Order Preference by Similarity to the Ideal Solution
(TOPSIS). Our research finding indicates that CoCoSo is an applied method that
has been developed to solve complex multi-variable assessment problems, while
CoCoFISo can improve the shortages observed in CoCoSo and deliver stable
outcomes compared to other developed tools. The findings imply that application
of CoCoFISo is suggested to decision makers, experts and researchers while they
are facing practical challenges and sensitive questions regarding the
utilization of a reliable decision-making method. Unlike many prior studies,
the current version of CoCoSo is unique, original, and presented for the
first time. Its performance was validated using several strategies and
examinations.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2024 09:19:33 GMT"
}
] | 1,715,040,000,000 | [
[
"Rasoanaivo",
"Rôlin Gabriel",
"",
"IRIT, UT Capitole, IRIT-ADRIA"
],
[
"Yazdani",
"Morteza",
"",
"UIV"
],
[
"Zaraté",
"Pascale",
"",
"IRIT, UT Capitole, IRIT-ADRIA"
],
[
"Fateh",
"Amirhossein",
"",
"UPV"
]
] |
2405.02325 | Michael Timothy Bennett | Michael Timothy Bennett | Multiscale Causal Learning | Definitions shared with arXiv:2404.07227, arXiv:2302.00843 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Biological intelligence is more sample-efficient than artificial intelligence
(AI), learning from fewer examples. Here we answer why. Given data, there can
be many policies which seem "correct" because they perfectly fit the data.
However, only one correct policy could have actually caused the data.
Sample-efficiency requires a means of discerning which. Previous work showed
sample efficiency is maximised by weak-policy-optimisation (WPO); preferring
policies that more weakly constrain what is considered to be correct, given
finite resources. Biology's sample-efficiency demonstrates it is better at WPO.
To understand how, we formalise the "multiscale-competency-architecture" (MCA)
observed in biological systems, as a sequence of nested
"agentic-abstraction-layers". We show that WPO at low levels enables synthesis
of weaker policies at high. We call this "multiscale-causal-learning", and
argue this is how we might construct more scale-able, sample-efficient and
reliable AI. Furthermore, a sufficiently weak policy at low levels is a
precondition of collective policy at higher levels. The higher level "identity"
of the collective is lost if lower levels use an insufficiently weak policy
(e.g. cells may become isolated from the collective informational structure and
revert to primitive behaviour). This has implications for biology, machine
learning, AI-safety, and philosophy.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 00:13:14 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2024 14:38:08 GMT"
}
] | 1,717,459,200,000 | [
[
"Bennett",
"Michael Timothy",
""
]
] |
2405.02327 | Utkarshani Jaimini | Utkarshani Jaimini, Cory Henson, Amit P. Sheth | CausalDisco: Causal discovery using knowledge graph link prediction | 9 pages, 8 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Causal discovery is a process of discovering new causal relations from
observational data. Traditional causal discovery methods often suffer from
issues related to missing data. To address these issues, this paper presents a
novel approach called CausalDisco that formulates causal discovery as a
knowledge graph completion problem. More specifically, the task of discovering
causal relations is mapped to the task of knowledge graph link prediction.
CausalDisco supports two types of discovery: causal explanation and causal
prediction. The causal relations have weights representing the strength of the
causal association between entities in the knowledge graph. An evaluation of
this approach uses a benchmark dataset of simulated videos for causal
reasoning, CLEVRER-Humans, and compares the performance of multiple knowledge
graph embedding algorithms. In addition, two distinct dataset splitting
approaches are utilized within the evaluation: (1) random-based split, which is
the method typically used to evaluate link prediction algorithms, and (2)
Markov-based split, a novel data split technique for evaluating link prediction
that utilizes the Markovian property of the causal relation. Results show that
using weighted causal relations improves causal discovery over the baseline
without weighted relations.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 20:50:06 GMT"
}
] | 1,715,040,000,000 | [
[
"Jaimini",
"Utkarshani",
""
],
[
"Henson",
"Cory",
""
],
[
"Sheth",
"Amit P.",
""
]
] |
2405.02458 | Lorenzo Marconi | Gianluca Cima, Domenico Lembo, Lorenzo Marconi, Riccardo Rosati,
Domenico Fabio Savo | Controlled Query Evaluation through Epistemic Dependencies | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In this paper, we propose the use of epistemic dependencies to express data
protection policies in Controlled Query Evaluation (CQE), which is a form of
confidentiality-preserving query answering over ontologies and databases. The
resulting policy language goes significantly beyond those proposed in the
literature on CQE so far, allowing for very rich and practically interesting
forms of data protection rules. We show the expressive abilities of our
framework and study the data complexity of CQE for (unions of) conjunctive
queries when ontologies are specified in the Description Logic DL-Lite_R.
Interestingly, while we show that the problem is in general intractable, we
prove tractability for the case of acyclic epistemic dependencies by providing
a suitable query rewriting algorithm. The latter result paves the way towards
the implementation and practical application of this new approach to CQE.
| [
{
"version": "v1",
"created": "Fri, 3 May 2024 19:48:07 GMT"
}
] | 1,715,040,000,000 | [
[
"Cima",
"Gianluca",
""
],
[
"Lembo",
"Domenico",
""
],
[
"Marconi",
"Lorenzo",
""
],
[
"Rosati",
"Riccardo",
""
],
[
"Savo",
"Domenico Fabio",
""
]
] |
2405.02463 | Daqian Shi | Daqian Shi | Knowledge Graph Extension by Entity Type Recognition | PhD thesis | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs have emerged as a sophisticated advancement and refinement
of semantic networks, and their deployment is one of the critical methodologies
in contemporary artificial intelligence. The construction of knowledge graphs
is a multifaceted process involving various techniques, where researchers aim
to extract knowledge from existing resources for the construction, since
building from scratch entails significant labor and time costs. However, due to
the pervasive issue of heterogeneity, the description diversity across
different knowledge graphs can lead to mismatches between concepts, thereby
impacting the efficacy of knowledge extraction. This Ph.D. study focuses on
automatic knowledge graph extension, i.e., properly extending the reference
knowledge graph by extracting and integrating concepts from one or more
candidate knowledge graphs. We propose a novel knowledge graph extension
framework based on entity type recognition. The framework aims to achieve
high-quality knowledge extraction by aligning the schemas and entities across
different knowledge graphs, thereby enhancing the performance of the extension.
This paper elucidates three major contributions: (i) we propose an entity type
recognition method exploiting machine learning and property-based similarities
to enhance knowledge extraction; (ii) we introduce a set of assessment metrics
to validate the quality of the extended knowledge graphs; (iii) we develop a
platform for knowledge graph acquisition, management, and extension to benefit
knowledge engineers practically. Our evaluation comprehensively demonstrated
the feasibility and effectiveness of the proposed extension framework and its
functionalities through quantitative experiments and case studies.
| [
{
"version": "v1",
"created": "Fri, 3 May 2024 19:55:03 GMT"
}
] | 1,715,040,000,000 | [
[
"Shi",
"Daqian",
""
]
] |
2405.02583 | Xiangqi Kong | Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo,
Adolfo Perrusquia, Andreas Wikander | Explainable Interface for Human-Autonomy Teaming: A Survey | 45 pages, 9 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Nowadays, large-scale foundation models are being increasingly integrated
into numerous safety-critical applications, including human-autonomy teaming
(HAT) within transportation, medical, and defence domains. Consequently, the
inherent 'black-box' nature of these sophisticated deep neural networks
heightens the significance of fostering mutual understanding and trust between
humans and autonomous systems. To tackle the transparency challenges in HAT,
this paper conducts a thoughtful study on the underexplored domain of
Explainable Interface (EI) in HAT systems from a human-centric perspective,
thereby enriching the existing body of research in Explainable Artificial
Intelligence (XAI). We explore the design, development, and evaluation of EI
within XAI-enhanced HAT systems. To do so, we first clarify the distinctions
between these concepts: EI, explanations and model explainability, aiming to
provide researchers and practitioners with a structured understanding. Second,
we contribute to a novel framework for EI, addressing the unique challenges in
HAT. Last, our summarized evaluation framework for ongoing EI offers a holistic
perspective, encompassing model performance, human-centered factors, and group
task objectives. Based on extensive surveys across XAI, HAT, psychology, and
Human-Computer Interaction (HCI), this review offers multiple novel insights
into incorporating XAI into HAT systems and outlines future directions.
| [
{
"version": "v1",
"created": "Sat, 4 May 2024 06:35:38 GMT"
}
] | 1,715,040,000,000 | [
[
"Kong",
"Xiangqi",
""
],
[
"Xing",
"Yang",
""
],
[
"Tsourdos",
"Antonios",
""
],
[
"Wang",
"Ziyue",
""
],
[
"Guo",
"Weisi",
""
],
[
"Perrusquia",
"Adolfo",
""
],
[
"Wikander",
"Andreas",
""
]
] |
2405.02653 | Qianli Zhou | Qianli Zhou and Tianxiang Zhan and Yong Deng | Isopignistic Canonical Decomposition via Belief Evolution Network | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing a general information processing model in uncertain environments
is fundamental for the advancement of explainable artificial intelligence.
Dempster-Shafer theory of evidence is a well-known and effective reasoning
method for representing epistemic uncertainty, which is closely related to
subjective probability theory and possibility theory. Although they can be
transformed to each other under some particular belief structures, there
remains a lack of a clear and interpretable transformation process, as well as
a unified approach for information processing. In this paper, we aim to address
these issues from the perspectives of isopignistic belief functions and the
hyper-cautious transferable belief model. Firstly, we propose an isopignistic
transformation based on the belief evolution network. This transformation
allows for the adjustment of the information granule while retaining the
potential decision outcome. The isopignistic transformation is integrated with
a hyper-cautious transferable belief model to establish a new canonical
decomposition. This decomposition offers a reverse path between the possibility
distribution and its isopignistic mass functions. The result of the canonical
decomposition, called isopignistic function, is an identical information
content distribution to reflect the propensity and relative commitment degree
of the BPA. Furthermore, this paper introduces a method to reconstruct the
basic belief assignment by adjusting the isopignistic function. It explores the
advantages of this approach in modeling and handling uncertainty within the
hyper-cautious transferable belief model. More generally, this paper establishes
a theoretical basis for building general models of artificial intelligence
based on probability theory, Dempster-Shafer theory, and possibility theory.
| [
{
"version": "v1",
"created": "Sat, 4 May 2024 12:39:15 GMT"
}
] | 1,715,040,000,000 | [
[
"Zhou",
"Qianli",
""
],
[
"Zhan",
"Tianxiang",
""
],
[
"Deng",
"Yong",
""
]
] |
2405.02846 | Mengjia Wu | Yi Zhang, Mengjia Wu, Guangquan Zhang, Jie Lu | Responsible AI: Portraits with Intelligent Bibliometrics | 14 pages, 9 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Shifting the focus from principles to practical implementation, responsible
artificial intelligence (AI) has garnered considerable attention across
academia, industry, and society at large. Despite being in its nascent stages,
this emerging field grapples with nebulous concepts and intricate knowledge
frameworks. By analyzing three prevailing concepts - explainable AI,
trustworthy AI, and ethical AI, this study defined responsible AI and
identified its core principles. Methodologically, this study successfully
demonstrated how AI capabilities can be leveraged in
bibliometrics for enhanced knowledge discovery and the cross-validation of
experimentally examined models with domain insights. Empirically, this study
investigated 17,799 research articles contributed by the AI community since
2015. This involves recognizing key technological players and their
relationships, unveiling the topical landscape and hierarchy of responsible AI,
charting its evolution, and elucidating the interplay between the
responsibility principles and primary AI techniques. An analysis of a core
cohort comprising 380 articles from multiple disciplines captures the most
recent advancements in responsible AI. As one of the pioneering bibliometric
studies dedicated to exploring responsible AI, this study will provide
comprehensive macro-level insights, enhancing the understanding of responsible
AI while furnishing valuable knowledge support for AI regulation and governance
initiatives.
| [
{
"version": "v1",
"created": "Sun, 5 May 2024 08:40:22 GMT"
}
] | 1,715,040,000,000 | [
[
"Zhang",
"Yi",
""
],
[
"Wu",
"Mengjia",
""
],
[
"Zhang",
"Guangquan",
""
],
[
"Lu",
"Jie",
""
]
] |
2405.02957 | WeiTao Li | Junkai Li, Siyu Wang, Meng Zhang, Weitao Li, Yunghwei Lai, Xinhui
Kang, Weizhi Ma, Yang Liu | Agent Hospital: A Simulacrum of Hospital with Evolvable Medical Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce a simulacrum of hospital called Agent Hospital
that simulates the entire process of treating illness. All patients, nurses,
and doctors are autonomous agents powered by large language models (LLMs). Our
central goal is to enable a doctor agent to learn how to treat illness within
the simulacrum. To do so, we propose a method called MedAgent-Zero. As the
simulacrum can simulate disease onset and progression based on knowledge bases
and LLMs, doctor agents can keep accumulating experience from both successful
and unsuccessful cases. Simulation experiments show that the treatment
performance of doctor agents consistently improves on various tasks. More
interestingly, the knowledge the doctor agents have acquired in Agent Hospital
is applicable to real-world medical benchmarks. After treating around ten
thousand patients (real-world doctors may take over two years), the evolved
doctor agent achieves a state-of-the-art accuracy of 93.06% on a subset of the
MedQA dataset that covers major respiratory diseases. This work paves the way
for advancing the applications of LLM-powered agent techniques in medical
scenarios.
| [
{
"version": "v1",
"created": "Sun, 5 May 2024 14:53:51 GMT"
}
] | 1,715,040,000,000 | [
[
"Li",
"Junkai",
""
],
[
"Wang",
"Siyu",
""
],
[
"Zhang",
"Meng",
""
],
[
"Li",
"Weitao",
""
],
[
"Lai",
"Yunghwei",
""
],
[
"Kang",
"Xinhui",
""
],
[
"Ma",
"Weizhi",
""
],
[
"Liu",
"Yang",
""
]
] |
2405.03010 | Manjiang Yu | Manjiang Yu, Xue Li | High Order Reasoning for Time Critical Recommendation in Evidence-based
Medicine | 13 pages, 15 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In time-critical decisions, human decision-makers can interact with
AI-enabled situation-aware software to evaluate many imminent and possible
scenarios, retrieve billions of facts, and estimate different outcomes based on
trillions of parameters in a fraction of a second. In high-order reasoning,
"what-if" questions can be used to challenge the assumptions or pre-conditions
of the reasoning, "why-not" questions can be used to challenge the method
applied in the reasoning, "so-what" questions can be used to challenge the
purpose of the decision, and "how-about" questions can be used to challenge the
applicability of the method. When the above high-order reasoning questions are
applied to assist human decision-making, it can help humans to make
time-critical decisions and avoid false-negative or false-positive types of
errors. In this paper, we present a model of high-order reasoning to offer
recommendations in evidence-based medicine in a time-critical fashion for
applications in the ICU. A Large Language Model (LLM) is used in our system. The
experiments demonstrated the LLM exhibited optimal performance in the "What-if"
scenario, achieving a similarity of 88.52% with the treatment plans of human
doctors. In the "Why-not" scenario, the best-performing model tended to opt for
alternative treatment plans in 70% of cases for patients who died after being
discharged from the ICU. In the "So-what" scenario, the optimal model provided
a detailed analysis of the motivation and significance of treatment plans for
ICU patients, with its reasoning achieving a similarity of 55.6% with actual
diagnostic information. In the "How-about" scenario, the top-performing LLM
demonstrated a content similarity of 66.5% in designing treatment plans
transferable to similar diseases. Meanwhile, LLMs managed to predict the life
status of patients after their discharge from the ICU with an accuracy of 70%.
| [
{
"version": "v1",
"created": "Sun, 5 May 2024 17:36:22 GMT"
}
] | 1,715,040,000,000 | [
[
"Yu",
"Manjiang",
""
],
[
"Li",
"Xue",
""
]
] |
2405.03340 | Robert Johansson | Robert Johansson, Patrick Hammer, Tony Lofthouse | Functional Equivalence with NARS | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study explores the concept of functional equivalence within the
framework of the Non-Axiomatic Reasoning System (NARS), specifically through
OpenNARS for Applications (ONA). Functional equivalence allows organisms to
categorize and respond to varied stimuli based on their utility rather than
perceptual similarity, thus enhancing cognitive efficiency and adaptability. In
this study, ONA was modified to allow the derivation of functional equivalence.
This paper provides practical examples of the capability of ONA to apply
learned knowledge across different functional situations, demonstrating its
utility in complex problem-solving and decision-making. An extended example is
included, where training of ONA aimed to learn basic human-like language
abilities, using a systematic procedure in relating spoken words, objects and
written words. The research carried out as part of this study extends the
understanding of functional equivalence in AGI systems, and argues that it is
necessary for the level of flexibility in learning and adaptation required for
human-level AGI.
| [
{
"version": "v1",
"created": "Mon, 6 May 2024 10:40:34 GMT"
}
] | 1,715,040,000,000 | [
[
"Johansson",
"Robert",
""
],
[
"Hammer",
"Patrick",
""
],
[
"Lofthouse",
"Tony",
""
]
] |
2405.03406 | Malte Luttermann | Malte Luttermann, Edgar Baake, Juljan Bouchagiar, Benjamin Gebel,
Philipp Gr\"uning, Dilini Manikwadura, Franziska Schollemann, Elisa Teifke,
Philipp Rostalski, Ralf M\"oller | Automated Computation of Therapies Using Failure Mode and Effects
Analysis in the Medical Domain | Accepted to the German Journal of Artificial Intelligence | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Failure mode and effects analysis (FMEA) is a systematic approach to identify
and analyse potential failures and their effects in a system or process. The
FMEA approach, however, requires domain experts to manually analyse the FMEA
model to derive risk-reducing actions that should be applied. In this paper, we
provide a formal framework to allow for automatic planning and acting in FMEA
models. More specifically, we cast the FMEA model into a Markov decision
process which can then be solved by existing solvers. We show that the FMEA
approach can not only be used to support medical experts during the modelling
process but also to automatically derive optimal therapies for the treatment of
patients.
| [
{
"version": "v1",
"created": "Mon, 6 May 2024 12:16:53 GMT"
}
] | 1,715,040,000,000 | [
[
"Luttermann",
"Malte",
""
],
[
"Baake",
"Edgar",
""
],
[
"Bouchagiar",
"Juljan",
""
],
[
"Gebel",
"Benjamin",
""
],
[
"Grüning",
"Philipp",
""
],
[
"Manikwadura",
"Dilini",
""
],
[
"Schollemann",
"Franziska",
""
],
[
"Teifke",
"Elisa",
""
],
[
"Rostalski",
"Philipp",
""
],
[
"Möller",
"Ralf",
""
]
] |
2405.03524 | Shenzhe Zhu | Shenzhe Zhu, Shengxiang Sun | Exploring knowledge graph-based neural-symbolic system from application
perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancements in Artificial Intelligence (AI) and deep neural networks have
driven significant progress in vision and text processing. However, achieving
human-like reasoning and interpretability in AI systems remains a substantial
challenge. The Neural-Symbolic paradigm, which integrates neural networks with
symbolic systems, presents a promising pathway toward more interpretable AI.
Within this paradigm, Knowledge Graphs (KG) are crucial, offering a structured
and dynamic method for representing knowledge through interconnected entities
and relationships, typically as triples (subject, predicate, object). This
paper explores recent advancements in neural-symbolic integration based on KG,
examining how it supports integration in three categories: enhancing the
reasoning and interpretability of neural networks with symbolic knowledge
(Symbol for Neural), refining the completeness and accuracy of symbolic systems
via neural network methodologies (Neural for Symbol), and facilitating their
combined application in Hybrid Neural-Symbolic Integration. It highlights
current trends and proposes future research directions in Neural-Symbolic AI.
| [
{
"version": "v1",
"created": "Mon, 6 May 2024 14:40:50 GMT"
},
{
"version": "v2",
"created": "Wed, 8 May 2024 19:54:59 GMT"
},
{
"version": "v3",
"created": "Sat, 18 May 2024 20:38:45 GMT"
},
{
"version": "v4",
"created": "Wed, 29 May 2024 22:37:08 GMT"
}
] | 1,717,113,600,000 | [
[
"Zhu",
"Shenzhe",
""
],
[
"Sun",
"Shengxiang",
""
]
] |
2405.03809 | Zixu Wang | Zixu Wang, Zhigang Sun, Juergen Luettin, Lavdim Halilaj | SocialFormer: Social Interaction Modeling with Edge-enhanced
Heterogeneous Graph Transformers for Trajectory Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate trajectory prediction is crucial for ensuring safe and efficient
autonomous driving. However, most existing methods overlook complex
interactions between traffic participants that often govern their future
trajectories. In this paper, we propose SocialFormer, an agent
interaction-aware trajectory prediction method that leverages the semantic
relationship between the target vehicle and surrounding vehicles by making use
of the road topology. We also introduce an edge-enhanced heterogeneous graph
transformer (EHGT) as the aggregator in a graph neural network (GNN) to encode
the semantic and spatial agent interaction information. Additionally, we
introduce a temporal encoder based on gated recurrent units (GRU) to model the
temporal social behavior of agent movements. Finally, we present an information
fusion framework that integrates agent encoding, lane encoding, and agent
interaction encoding for a holistic representation of the traffic scene. We
evaluate SocialFormer for the trajectory prediction task on the popular
nuScenes benchmark and achieve state-of-the-art performance.
| [
{
"version": "v1",
"created": "Mon, 6 May 2024 19:47:23 GMT"
}
] | 1,715,126,400,000 | [
[
"Wang",
"Zixu",
""
],
[
"Sun",
"Zhigang",
""
],
[
"Luettin",
"Juergen",
""
],
[
"Halilaj",
"Lavdim",
""
]
] |
2405.03825 | Silvan Ferreira da Silva Junior | Silvan Ferreira, Ivanovitch Silva, Allan Martins | Organizing a Society of Language Models: Structures and Mechanisms for
Enhanced Collective Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent developments in Large Language Models (LLMs) have significantly
expanded their applications across various domains. However, the effectiveness
of LLMs is often constrained when operating individually in complex
environments. This paper introduces a transformative approach by organizing
LLMs into community-based structures, aimed at enhancing their collective
intelligence and problem-solving capabilities. We investigate different
organizational models-hierarchical, flat, dynamic, and federated-each
presenting unique benefits and challenges for collaborative AI systems. Within
these structured communities, LLMs are designed to specialize in distinct
cognitive tasks, employ advanced interaction mechanisms such as direct
communication, voting systems, and market-based approaches, and dynamically
adjust their governance structures to meet changing demands. The implementation
of such communities holds substantial promise for improving problem-solving
capabilities in AI, prompting an in-depth examination of their ethical
considerations, management strategies, and scalability potential. This position
paper seeks to lay the groundwork for future research, advocating a paradigm
shift from isolated to synergistic operational frameworks in AI research and
application.
| [
{
"version": "v1",
"created": "Mon, 6 May 2024 20:15:45 GMT"
}
] | 1,715,126,400,000 | [
[
"Ferreira",
"Silvan",
""
],
[
"Silva",
"Ivanovitch",
""
],
[
"Martins",
"Allan",
""
]
] |
2405.04064 | BingBing Wang | Yanli Yuan and Bingbing Wang and Chuan Zhang and Jingyi Xu and Ximeng
Liu and Liehuang Zhu | MFA-Net: Multi-Scale feature fusion attention network for liver tumor
segmentation | Paper accepted in Human-Centric Representation Learning workshop at
AAAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmentation of organs of interest in medical CT images is beneficial for
diagnosis of diseases. Though recent methods based on Fully Convolutional
Neural Networks (F-CNNs) have shown success in many segmentation tasks, fusing
features from images with different scales is still a challenge: (1) Due to the
lack of spatial awareness, F-CNNs share the same weights at different spatial
locations. (2) F-CNNs can only obtain surrounding information through local
receptive fields. To address the above challenge, we propose a new segmentation
framework based on attention mechanisms, named MFA-Net (Multi-Scale Feature
Fusion Attention Network). The proposed framework can learn more meaningful
feature maps among multiple scales and result in more accurate automatic
segmentation. We compare our proposed MFA-Net with SOTA methods on two 2D liver
CT datasets. The experimental results show that our MFA-Net produces more
precise segmentation on images with different scales.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 07:10:44 GMT"
},
{
"version": "v2",
"created": "Thu, 9 May 2024 12:26:45 GMT"
}
] | 1,715,299,200,000 | [
[
"Yuan",
"Yanli",
""
],
[
"Wang",
"Bingbing",
""
],
[
"Zhang",
"Chuan",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Liu",
"Ximeng",
""
],
[
"Zhu",
"Liehuang",
""
]
] |
2405.04081 | Gianvincenzo Alfano | Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna | Counterfactual and Semifactual Explanations in Abstract Argumentation:
Formal Foundations, Complexity and Computation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable Artificial Intelligence and Formal Argumentation have received
significant attention in recent years. Argumentation-based systems often lack
explainability while supporting decision-making processes. Counterfactual and
semifactual explanations are interpretability techniques that provide insights
into the outcome of a model by generating alternative hypothetical instances.
While there has been important work on counterfactual and semifactual
explanations for Machine Learning models, less attention has been devoted to
these kinds of problems in argumentation. In this paper, we explore
counterfactual and semifactual reasoning in abstract Argumentation Frameworks.
We investigate the computational complexity of counterfactual- and
semifactual-based reasoning problems, showing that they are generally harder
than classical argumentation problems such as credulous and skeptical
acceptance. Finally, we show that counterfactual and semifactual queries can be
encoded in weak-constrained Argumentation Frameworks, and provide a
computational strategy through ASP solvers.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 07:27:27 GMT"
}
] | 1,715,126,400,000 | [
[
"Alfano",
"Gianvincenzo",
""
],
[
"Greco",
"Sergio",
""
],
[
"Parisi",
"Francesco",
""
],
[
"Trubitsyna",
"Irina",
""
]
] |
2405.04135 | Jingyuan Zhang | Ziqi Zhou, Jingyue Zhang, Jingyuan Zhang, Boyue Wang, Tianyu Shi, Alaa
Khamis | In-context Learning for Automated Driving Scenarios | 7 pages, 6 figures, 35 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the key challenges in current Reinforcement Learning (RL)-based
Automated Driving (AD) agents is achieving flexible, precise, and human-like
behavior cost-effectively. This paper introduces an innovative approach
utilizing Large Language Models (LLMs) to intuitively and effectively optimize
RL reward functions in a human-centric way. We developed a framework where
instructions and dynamic environment descriptions are input into the LLM. The
LLM then utilizes this information to assist in generating rewards, thereby
steering the behavior of RL agents towards patterns that more closely resemble
human driving. The experimental results demonstrate that this approach not only
makes RL agents more anthropomorphic but also reaches better performance.
Additionally, various strategies for reward-proxy and reward-shaping are
investigated, revealing the significant impact of prompt design on shaping an
AD vehicle's behavior. These findings offer a promising direction for the
development of more advanced and human-like automated driving systems. Our
experimental data and source code can be found here.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 09:04:52 GMT"
}
] | 1,715,126,400,000 | [
[
"Zhou",
"Ziqi",
""
],
[
"Zhang",
"Jingyue",
""
],
[
"Zhang",
"Jingyuan",
""
],
[
"Wang",
"Boyue",
""
],
[
"Shi",
"Tianyu",
""
],
[
"Khamis",
"Alaa",
""
]
] |
2405.04215 | Elliot Gestrin | Elliot Gestrin, Marco Kuhlmann, Jendrik Seipp | NL2Plan: Robust LLM-Driven Planning from Minimal Text Descriptions | Accepted for the ICAPS 2024 Workshop on Human-Aware and Explainable
Planning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's classical planners are powerful, but modeling input tasks in formats
such as PDDL is tedious and error-prone. In contrast, planning with Large
Language Models (LLMs) allows for almost any input text, but offers no
guarantees on plan quality or even soundness. In an attempt to merge the best
of these two approaches, some work has begun to use LLMs to automate parts of
the PDDL creation process. However, these methods still require various degrees
of expert input. We present NL2Plan, the first domain-agnostic offline
LLM-driven planning system. NL2Plan uses an LLM to incrementally extract the
necessary information from a short text prompt before creating a complete PDDL
description of both the domain and the problem, which is finally solved by a
classical planner. We evaluate NL2Plan on four planning domains and find that
it solves 10 out of 15 tasks - a clear improvement over a plain
chain-of-thought reasoning LLM approach, which only solves 2 tasks. Moreover,
in two out of the five failure cases, instead of returning an invalid plan,
NL2Plan reports that it failed to solve the task. In addition to using NL2Plan
in end-to-end mode, users can inspect and correct all of its intermediate
results, such as the PDDL representation, increasing explainability and making
it an assistive tool for PDDL creation.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 11:27:13 GMT"
}
] | 1,715,126,400,000 | [
[
"Gestrin",
"Elliot",
""
],
[
"Kuhlmann",
"Marco",
""
],
[
"Seipp",
"Jendrik",
""
]
] |