id
stringlengths 9
10
| submitter
stringlengths 5
47
⌀ | authors
stringlengths 5
1.72k
| title
stringlengths 11
234
| comments
stringlengths 1
491
⌀ | journal-ref
stringlengths 4
396
⌀ | doi
stringlengths 13
97
⌀ | report-no
stringlengths 4
138
⌀ | categories
stringclasses 1
value | license
stringclasses 9
values | abstract
stringlengths 29
3.66k
| versions
listlengths 1
21
| update_date
int64 1,180B
1,718B
| authors_parsed
listlengths 1
98
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.01350 | Francis Rhys Ward | Francis Rhys Ward, Francesco Belardinelli, Francesca Toni, Tom Everitt | Honesty Is the Best Policy: Defining and Mitigating AI Deception | Accepted as a spotlight at the 37th Conference on Neural Information
Processing Systems (NeurIPS 2023) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deceptive agents are a challenge for the safety, trustworthiness, and
cooperation of AI systems. We focus on the problem that agents might deceive in
order to achieve their goals (for instance, in our experiments with language
models, the goal of being evaluated as truthful). There are a number of
existing definitions of deception in the literature on game theory and symbolic
AI, but there is no overarching theory of deception for learning agents in
games. We introduce a formal definition of deception in structural causal
games, grounded in the philosophy literature, and applicable to real-world
machine learning systems. Several examples and results illustrate that our
formal definition aligns with the philosophical and commonsense meaning of
deception. Our main technical result is to provide graphical criteria for
deception. We show, experimentally, that these results can be used to mitigate
deception in reinforcement learning agents and language models.
| [
{
"version": "v1",
"created": "Sun, 3 Dec 2023 11:11:57 GMT"
}
] | 1,701,734,400,000 | [
[
"Ward",
"Francis Rhys",
""
],
[
"Belardinelli",
"Francesco",
""
],
[
"Toni",
"Francesca",
""
],
[
"Everitt",
"Tom",
""
]
] |
2312.01601 | Wei Chen | Wei Chen, Huaiyu Wan, Yuting Wu, Shuyuan Zhao, Jiayaqi Cheng, Yuxin Li
and Youfang Lin | Local-Global History-aware Contrastive Learning for Temporal Knowledge
Graph Reasoning | 14 pages, Accept ICDE2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Temporal knowledge graphs (TKGs) have been identified as a promising approach
to represent the dynamics of facts along the timeline. The extrapolation of TKG
is to predict unknowable facts happening in the future, holding significant
practical value across diverse fields. Most extrapolation studies in TKGs focus
on modeling global historical fact repeating and cyclic patterns, as well as
local historical adjacent fact evolution patterns, showing promising
performance in predicting future unknown facts. Yet, existing methods still
face two major challenges: (1) They usually neglect the importance of
historical information in KG snapshots related to the queries when encoding the
local and global historical information; (2) They exhibit weak anti-noise
capabilities, which hinders their performance when the inputs are contaminated
with noise.To this end, we propose a novel \blue{Lo}cal-\blue{g}lobal
history-aware \blue{C}ontrastive \blue{L}earning model (\blue{LogCL}) for TKG
reasoning, which adopts contrastive learning to better guide the fusion of
local and global historical information and enhance the ability to resist
interference. Specifically, for the first challenge, LogCL proposes an
entity-aware attention mechanism applied to the local and global historical
facts encoder, which captures the key historical information related to
queries. For the latter issue, LogCL designs four historical query contrast
patterns, effectively improving the robustness of the model. The experimental
results on four benchmark datasets demonstrate that LogCL delivers better and
more robust performance than the state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Mon, 4 Dec 2023 03:27:01 GMT"
}
] | 1,701,734,400,000 | [
[
"Chen",
"Wei",
""
],
[
"Wan",
"Huaiyu",
""
],
[
"Wu",
"Yuting",
""
],
[
"Zhao",
"Shuyuan",
""
],
[
"Cheng",
"Jiayaqi",
""
],
[
"Li",
"Yuxin",
""
],
[
"Lin",
"Youfang",
""
]
] |
2312.02405 | Anssi Kanervisto | Stephanie Milani, Anssi Kanervisto, Karolis Ramanauskas, Sander
Schulhoff, Brandon Houghton, Rohin Shah | BEDD: The MineRL BASALT Evaluation and Demonstrations Dataset for
Training and Benchmarking Agents that Solve Fuzzy Tasks | NeurIPS 2023 Datasets and Benchmarks Oral. Dataset links are
available on Github: https://github.com/minerllabs/basalt-benchmark | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The MineRL BASALT competition has served to catalyze advances in learning
from human feedback through four hard-to-specify tasks in Minecraft, such as
create and photograph a waterfall. Given the completion of two years of BASALT
competitions, we offer to the community a formalized benchmark through the
BASALT Evaluation and Demonstrations Dataset (BEDD), which serves as a resource
for algorithm development and performance assessment. BEDD consists of a
collection of 26 million image-action pairs from nearly 14,000 videos of human
players completing the BASALT tasks in Minecraft. It also includes over 3,000
dense pairwise human evaluations of human and algorithmic agents. These
comparisons serve as a fixed, preliminary leaderboard for evaluating
newly-developed algorithms. To enable this comparison, we present a streamlined
codebase for benchmarking new algorithms against the leaderboard. In addition
to presenting these datasets, we conduct a detailed analysis of the data from
both datasets to guide algorithm development and evaluation. The released code
and data are available at https://github.com/minerllabs/basalt-benchmark .
| [
{
"version": "v1",
"created": "Tue, 5 Dec 2023 00:29:44 GMT"
}
] | 1,701,820,800,000 | [
[
"Milani",
"Stephanie",
""
],
[
"Kanervisto",
"Anssi",
""
],
[
"Ramanauskas",
"Karolis",
""
],
[
"Schulhoff",
"Sander",
""
],
[
"Houghton",
"Brandon",
""
],
[
"Shah",
"Rohin",
""
]
] |
2312.02561 | Youpeng Zhao | Youpeng Zhao and Yudong Lu and Jian Zhao and Wengang Zhou and Houqiang
Li | DanZero+: Dominating the GuanDan Game through Reinforcement Learning | arXiv admin note: text overlap with arXiv:2210.17087 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The utilization of artificial intelligence (AI) in card games has been a
well-explored subject within AI research for an extensive period. Recent
advancements have propelled AI programs to showcase expertise in intricate card
games such as Mahjong, DouDizhu, and Texas Hold'em. In this work, we aim to
develop an AI program for an exceptionally complex and popular card game called
GuanDan. This game involves four players engaging in both competitive and
cooperative play throughout a long process to upgrade their level, posing great
challenges for AI due to its expansive state and action space, long episode
length, and complex rules. Employing reinforcement learning techniques,
specifically Deep Monte Carlo (DMC), and a distributed training framework, we
first put forward an AI program named DanZero for this game. Evaluation against
baseline AI programs based on heuristic rules highlights the outstanding
performance of our bot. Besides, in order to further enhance the AI's
capabilities, we apply policy-based reinforcement learning algorithm to
GuanDan. To address the challenges arising from the huge action space, which
will significantly impact the performance of policy-based algorithms, we adopt
the pre-trained model to facilitate the training process and the achieved AI
program manages to achieve a superior performance.
| [
{
"version": "v1",
"created": "Tue, 5 Dec 2023 08:07:32 GMT"
}
] | 1,701,820,800,000 | [
[
"Zhao",
"Youpeng",
""
],
[
"Lu",
"Yudong",
""
],
[
"Zhao",
"Jian",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Li",
"Houqiang",
""
]
] |
2312.03446 | Kibeom Kim | Kibeom Kim, Kisung Shin, Min Whoo Lee, Moonhoen Lee, Minsu Lee,
Byoung-Tak Zhang | Visual Hindsight Self-Imitation Learning for Interactive Navigation | 14 pages, 9 figures and under-review | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Interactive visual navigation tasks, which involve following instructions to
reach and interact with specific targets, are challenging not only because
successful experiences are very rare but also because the complex visual inputs
require a substantial number of samples. Previous methods for these tasks often
rely on intricately designed dense rewards or the use of expensive expert data
for imitation learning. To tackle these challenges, we propose a novel
approach, Visual Hindsight Self-Imitation Learning (VHS) for enhancing sample
efficiency through hindsight goal re-labeling and self-imitation. We also
introduce a prototypical goal embedding method derived from experienced goal
observations, that is particularly effective in vision-based and partially
observable environments. This embedding technique allows the agent to visually
reinterpret its unsuccessful attempts, enabling vision-based goal re-labeling
and self-imitation from enhanced successful experiences. Experimental results
show that VHS outperforms existing techniques in interactive visual navigation
tasks, confirming its superior performance and sample efficiency.
| [
{
"version": "v1",
"created": "Tue, 5 Dec 2023 05:34:12 GMT"
}
] | 1,701,907,200,000 | [
[
"Kim",
"Kibeom",
""
],
[
"Shin",
"Kisung",
""
],
[
"Lee",
"Min Whoo",
""
],
[
"Lee",
"Moonhoen",
""
],
[
"Lee",
"Minsu",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] |
2312.05328 | Talfan Evans | Talfan Evans, Shreya Pathak, Hamza Merzic, Jonathan Schwarz, Ryutaro
Tanno, Olivier J. Henaff | Bad Students Make Great Teachers: Active Learning Accelerates
Large-Scale Visual Understanding | Technical report | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Power-law scaling indicates that large-scale training with uniform sampling
is prohibitively slow. Active learning methods aim to increase data efficiency
by prioritizing learning on the most relevant examples. Despite their appeal,
these methods have yet to be widely adopted since no one algorithm has been
shown to a) generalize across models and tasks b) scale to large datasets and
c) yield overall FLOP savings when accounting for the overhead of data
selection. In this work we propose a method which satisfies these three
properties, leveraging small, cheap proxy models to estimate "learnability"
scores for datapoints, which are used to prioritize data for the training of
much larger models. As a result, our models require 46% and 51% fewer training
updates and up to 25% less total computation to reach the same performance as
uniformly trained visual classifiers on JFT and multimodal models on ALIGN.
Finally, we find our data-prioritization scheme to be complementary with recent
data-curation and learning objectives, yielding a new state-of-the-art in
several multimodal transfer tasks.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2023 19:26:13 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Dec 2023 15:37:59 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Feb 2024 18:22:12 GMT"
}
] | 1,707,955,200,000 | [
[
"Evans",
"Talfan",
""
],
[
"Pathak",
"Shreya",
""
],
[
"Merzic",
"Hamza",
""
],
[
"Schwarz",
"Jonathan",
""
],
[
"Tanno",
"Ryutaro",
""
],
[
"Henaff",
"Olivier J.",
""
]
] |
2312.05361 | Quentin Ferry | Quentin RV. Ferry, Joshua Ching, Takashi Kawai | Emergence and Function of Abstract Representations in Self-Supervised
Transformers | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human intelligence relies in part on our brains' ability to create abstract
mental models that succinctly capture the hidden blueprint of our reality. Such
abstract world models notably allow us to rapidly navigate novel situations by
generalizing prior knowledge, a trait deep learning systems have historically
struggled to replicate. However, the recent shift from supervised to
self-supervised objectives, combined with expressive transformer-based
architectures, have yielded powerful foundation models that appear to learn
versatile representations that can support a wide range of downstream tasks.
This promising development raises the intriguing possibility of such models
developing in silico abstract world models. We test this hypothesis by studying
the inner workings of small-scale transformers trained to reconstruct partially
masked visual scenes generated from a simple blueprint. We show that the
network develops intermediate abstract representations, or abstractions, that
encode all semantic features of the dataset. These abstractions manifest as
low-dimensional manifolds where the embeddings of semantically related tokens
transiently converge, thus allowing for the generalization of downstream
computations. Using precise manipulation experiments, we demonstrate that
abstractions are central to the network's decision-making process. Our research
also suggests that these abstractions are compositionally structured,
exhibiting features like contextual independence and part-whole relationships
that mirror the compositional nature of the dataset. Finally, we introduce a
Language-Enhanced Architecture (LEA) designed to encourage the network to
articulate its computations. We find that LEA develops an abstraction-centric
language that can be easily interpreted, allowing us to more readily access and
steer the network's decision-making process.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2023 20:47:15 GMT"
}
] | 1,702,339,200,000 | [
[
"Ferry",
"Quentin RV.",
""
],
[
"Ching",
"Joshua",
""
],
[
"Kawai",
"Takashi",
""
]
] |
2312.05379 | Bei Zhou Mr | Bei Zhou, Soren Riis | Exploring Parity Challenges in Reinforcement Learning through Curriculum
Learning with Noisy Labels | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper delves into applying reinforcement learning (RL) in strategy
games, particularly those characterized by parity challenges, as seen in
specific positions of Go and Chess and a broader range of impartial games. We
propose a simulated learning process, structured within a curriculum learning
framework and augmented with noisy labels, to mirror the intricacies of
self-play learning scenarios. This approach thoroughly analyses how neural
networks (NNs) adapt and evolve from elementary to increasingly complex game
positions. Our empirical research indicates that even minimal label noise can
significantly impede NNs' ability to discern effective strategies, a difficulty
that intensifies with the growing complexity of the game positions. These
findings underscore the urgent need for advanced methodologies in RL training,
specifically tailored to counter the obstacles imposed by noisy evaluations.
The development of such methodologies is crucial not only for enhancing NN
proficiency in strategy games with significant parity elements but also for
broadening the resilience and efficiency of RL systems across diverse and
complex environments.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2023 21:32:39 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Jan 2024 10:23:09 GMT"
}
] | 1,705,449,600,000 | [
[
"Zhou",
"Bei",
""
],
[
"Riis",
"Soren",
""
]
] |
2312.05392 | Andr\'es Corrada-Emmanuel | Andr\'es Corrada-Emmanuel | The logic of NTQR evaluations of noisy AI agents: Complete postulates
and logically consistent error correlations | 18 pages, 9 figures, under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In his "ship of state" allegory (\textit{Republic}, Book VI, 488) Plato poses
a question -- how can a crew of sailors presumed to know little about the art
of navigation recognize the true pilot among them? The allegory argues that a
simple majority voting procedure cannot safely determine who is most qualified
to pilot a ship when the voting members are ignorant or biased. We formalize
Plato's concerns by considering the problem in AI safety of monitoring noisy AI
agents in unsupervised settings. An algorithm evaluating AI agents using
unlabeled data would be subject to the evaluation dilemma - how would we know
the evaluation algorithm was correct itself? This endless validation chain can
be avoided by considering purely algebraic functions of the observed responses.
We can construct complete postulates than can prove or disprove the logical
consistency of any grading algorithm. A complete set of postulates exists
whenever we are evaluating $N$ experts that took $T$ tests with $Q$ questions
with $R$ responses each. We discuss evaluating binary classifiers that have
taken a single test - the $(N,T=1,Q,R=2)$ tests. We show how some of the
postulates have been previously identified in the ML literature but not
recognized as such - the \textbf{agreement equations} of Platanios. The
complete postulates for pair correlated binary classifiers are considered and
we show how it allows for error correlations to be quickly calculated. An
algebraic evaluator based on the assumption that the ensemble is error
independent is compared with grading by majority voting on evaluations using
the \uciadult and and \texttt{two-norm} datasets. Throughout, we demonstrate
how the formalism of logical consistency via algebraic postulates of evaluation
can help increase the safety of machines using AI algorithms.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2023 22:06:44 GMT"
}
] | 1,702,339,200,000 | [
[
"Corrada-Emmanuel",
"Andrés",
""
]
] |
2312.05473 | Kaibo He | Chenhui Zuo, Kaibo He, Jing Shao, Yanan Sui | Self Model for Embodied Intelligence: Modeling Full-Body Human
Musculoskeletal System and Locomotion Control with Hierarchical
Low-Dimensional Representation | ICRA 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling and control of the human musculoskeletal system is important for
understanding human motor functions, developing embodied intelligence, and
optimizing human-robot interaction systems. However, current human
musculoskeletal models are restricted to a limited range of body parts and
often with a reduced number of muscles. There is also a lack of algorithms
capable of controlling over 600 muscles to generate reasonable human movements.
To fill this gap, we build a musculoskeletal model (MS-Human-700) with 90 body
segments, 206 joints, and 700 muscle-tendon units, allowing simulation of
full-body dynamics and interaction with various devices. We develop a new
algorithm using low-dimensional representation and hierarchical deep
reinforcement learning to achieve state-of-the-art full-body control. We
validate the effectiveness of our model and algorithm in simulations with real
human locomotion data. The musculoskeletal model, along with its control
algorithm, will be made available to the research community to promote a deeper
understanding of human motion control and better design of interactive robots.
Project page: https://lnsgroup.cc/research/MS-Human-700
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2023 05:42:32 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Mar 2024 08:51:41 GMT"
},
{
"version": "v3",
"created": "Sat, 11 May 2024 16:11:15 GMT"
},
{
"version": "v4",
"created": "Sat, 25 May 2024 16:26:22 GMT"
}
] | 1,716,854,400,000 | [
[
"Zuo",
"Chenhui",
""
],
[
"He",
"Kaibo",
""
],
[
"Shao",
"Jing",
""
],
[
"Sui",
"Yanan",
""
]
] |
2312.05589 | Jianguo Jia | Jianguo Jia, Wen Liang, Youzhi Liang | A Review of Hybrid and Ensemble in Deep Learning for Natural Language
Processing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This review presents a comprehensive exploration of hybrid and ensemble deep
learning models within Natural Language Processing (NLP), shedding light on
their transformative potential across diverse tasks such as Sentiment Analysis,
Named Entity Recognition, Machine Translation, Question Answering, Text
Classification, Generation, Speech Recognition, Summarization, and Language
Modeling. The paper systematically introduces each task, delineates key
architectures from Recurrent Neural Networks (RNNs) to Transformer-based models
like BERT, and evaluates their performance, challenges, and computational
demands. The adaptability of ensemble techniques is emphasized, highlighting
their capacity to enhance various NLP applications. Challenges in
implementation, including computational overhead, overfitting, and model
interpretation complexities, are addressed alongside the trade-off between
interpretability and performance. Serving as a concise yet invaluable guide,
this review synthesizes insights into tasks, architectures, and challenges,
offering a holistic perspective for researchers and practitioners aiming to
advance language-driven applications through ensemble deep learning in NLP.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2023 14:49:34 GMT"
}
] | 1,702,339,200,000 | [
[
"Jia",
"Jianguo",
""
],
[
"Liang",
"Wen",
""
],
[
"Liang",
"Youzhi",
""
]
] |
2312.05597 | Mario Burgui | Mario Burgui-Burgui | Artificial Intelligence in the automatic coding of interviews on
Landscape Quality Objectives. Comparison and case study | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this study, we conducted a comparative analysis of the automated coding
provided by three Artificial Intelligence functionalities (At-las.ti, ChatGPT
and Google Bard) in relation to the manual coding of 12 research interviews
focused on Landscape Quality Objectives for a small island in the north of Cuba
(Cayo Santa Mar\'ia). For this purpose, the following comparison criteria were
established: Accuracy, Comprehensiveness, Thematic Coherence, Redundancy,
Clarity, Detail and Regularity. The analysis showed the usefulness of AI for
the intended purpose, albeit with numerous flaws and shortcomings. In summary,
today the automatic coding of AIs can be considered useful as a guide towards a
subsequent in-depth and meticulous analysis of the information by the
researcher. However, as this is such a recently developed field, rapid
evolution is expected to bring the necessary improvements to these tools.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2023 15:37:19 GMT"
}
] | 1,702,339,200,000 | [
[
"Burgui-Burgui",
"Mario",
""
]
] |
2312.05686 | Peeyush Kumar | Ananta Mukherjee, Peeyush Kumar, Boling Yang, Nishanth Chandran, Divya
Gupta | Privacy Preserving Multi-Agent Reinforcement Learning in Supply Chains | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper addresses privacy concerns in multi-agent reinforcement learning
(MARL), specifically within the context of supply chains where individual
strategic data must remain confidential. Organizations within the supply chain
are modeled as agents, each seeking to optimize their own objectives while
interacting with others. As each organization's strategy is contingent on
neighboring strategies, maintaining privacy of state and action-related
information is crucial. To tackle this challenge, we propose a game-theoretic,
privacy-preserving mechanism, utilizing a secure multi-party computation (MPC)
framework in MARL settings. Our major contribution is the successful
implementation of a secure MPC framework, SecFloat on EzPC, to solve this
problem. However, simply implementing policy gradient methods such as MADDPG
operations using SecFloat, while conceptually feasible, would be
programmatically intractable. To overcome this hurdle, we devise a novel
approach that breaks down the forward and backward pass of the neural network
into elementary operations compatible with SecFloat , creating efficient and
secure versions of the MADDPG algorithm. Furthermore, we present a learning
mechanism that carries out floating point operations in a privacy-preserving
manner, an important feature for successful learning in MARL framework.
Experiments reveal that there is on average 68.19% less supply chain wastage in
2 PC compared to no data share, while also giving on average 42.27% better
average cumulative revenue for each player. This work paves the way for
practical, privacy-preserving MARL, promising significant improvements in
secure computation within supply chain contexts and broadly.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2023 21:25:21 GMT"
}
] | 1,702,339,200,000 | [
[
"Mukherjee",
"Ananta",
""
],
[
"Kumar",
"Peeyush",
""
],
[
"Yang",
"Boling",
""
],
[
"Chandran",
"Nishanth",
""
],
[
"Gupta",
"Divya",
""
]
] |
2312.05735 | Yuntao Shou | Yuntao Shou, Tao Meng, Wei Ai, Nan Yin, Keqin Li | A Comprehensive Survey on Multi-modal Conversational Emotion Recognition
with Deep Learning | 36 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal conversation emotion recognition (MCER) aims to recognize and
track the speaker's emotional state using text, speech, and visual information
in the conversation scene. Analyzing and studying MCER issues is significant to
affective computing, intelligent recommendations, and human-computer
interaction fields. Unlike the traditional single-utterance multi-modal emotion
recognition or single-modal conversation emotion recognition, MCER is a more
challenging problem that needs to deal with more complex emotional interaction
relationships. The critical issue is learning consistency and complementary
semantics for multi-modal feature fusion based on emotional interaction
relationships. To solve this problem, people have conducted extensive research
on MCER based on deep learning technology, but there is still a lack of
systematic review of the modeling methods. Therefore, a timely and
comprehensive overview of MCER's recent advances in deep learning is of great
significance to academia and industry. In this survey, we provide a
comprehensive overview of MCER modeling methods and roughly divide MCER methods
into four categories, i.e., context-free modeling, sequential context modeling,
speaker-differentiated modeling, and speaker-relationship modeling. In
addition, we further discuss MCER's publicly available popular datasets,
multi-modal feature extraction methods, application areas, existing challenges,
and future development directions. We hope that our review can help MCER
researchers understand the current research status in emotion recognition,
provide some inspiration, and develop more efficient models.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 03:07:23 GMT"
}
] | 1,702,339,200,000 | [
[
"Shou",
"Yuntao",
""
],
[
"Meng",
"Tao",
""
],
[
"Ai",
"Wei",
""
],
[
"Yin",
"Nan",
""
],
[
"Li",
"Keqin",
""
]
] |
2312.05795 | Maolin Wang | Maolin Wang, Yao Zhao, Jiajia Liu, Jingdong Chen, Chenyi Zhuang,
Jinjie Gu, Ruocheng Guo, Xiangyu Zhao | Large Multimodal Model Compression via Efficient Pruning and
Distillation at AntGroup | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deployment of Large Multimodal Models (LMMs) within AntGroup has
significantly advanced multimodal tasks in payment, security, and advertising,
notably enhancing advertisement audition tasks in Alipay. However, the
deployment of such sizable models introduces challenges, particularly in
increased latency and carbon emissions, which are antithetical to the ideals of
Green AI. This paper introduces a novel multi-stage compression strategy for
our proprietary LLM, AntGMM. Our methodology pivots on three main aspects:
employing small training sample sizes, addressing multi-level redundancy
through multi-stage pruning, and introducing an advanced distillation loss
design. In our research, we constructed a dataset, the Multimodal Advertisement
Audition Dataset (MAAD), from real-world scenarios within Alipay, and conducted
experiments to validate the reliability of our proposed strategy. Furthermore,
the effectiveness of our strategy is evident in its operational success in
Alipay's real-world multimodal advertisement audition for three months from
September 2023. Notably, our approach achieved a substantial reduction in
latency, decreasing it from 700ms to 90ms, while maintaining online performance
with only a slight performance decrease. Moreover, our compressed model is
estimated to reduce electricity consumption by approximately 75 million kWh
annually compared to the direct deployment of AntGMM, demonstrating our
commitment to green AI initiatives. We will publicly release our code and the
MAAD dataset after some
reviews\footnote{https://github.com/MorinW/AntGMM$\_$Pruning}.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 06:57:48 GMT"
}
] | 1,702,339,200,000 | [
[
"Wang",
"Maolin",
""
],
[
"Zhao",
"Yao",
""
],
[
"Liu",
"Jiajia",
""
],
[
"Chen",
"Jingdong",
""
],
[
"Zhuang",
"Chenyi",
""
],
[
"Gu",
"Jinjie",
""
],
[
"Guo",
"Ruocheng",
""
],
[
"Zhao",
"Xiangyu",
""
]
] |
2312.05822 | William Wang | William Wei Wang, Dongqi Han, Xufang Luo, Yifei Shen, Charles Ling,
Boyu Wang, Dongsheng Li | Toward Open-ended Embodied Tasks Solving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Empowering embodied agents, such as robots, with Artificial Intelligence (AI)
has become increasingly important in recent years. A major challenge is task
open-endedness. In practice, robots often need to perform tasks with novel
goals that are multifaceted, dynamic, lack a definitive "end-state", and were
not encountered during training. To tackle this problem, this paper introduces
\textit{Diffusion for Open-ended Goals} (DOG), a novel framework designed to
enable embodied AI to plan and act flexibly and dynamically for open-ended task
goals. DOG synergizes the generative prowess of diffusion models with
state-of-the-art, training-free guidance techniques to adaptively perform
online planning and control. Our evaluations demonstrate that DOG can handle
various kinds of novel task goals not seen during training, in both maze
navigation and robot control problems. Our work sheds light on enhancing
embodied AI's adaptability and competency in tackling open-ended goals.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 08:43:26 GMT"
}
] | 1,702,339,200,000 | [
[
"Wang",
"William Wei",
""
],
[
"Han",
"Dongqi",
""
],
[
"Luo",
"Xufang",
""
],
[
"Shen",
"Yifei",
""
],
[
"Ling",
"Charles",
""
],
[
"Wang",
"Boyu",
""
],
[
"Li",
"Dongsheng",
""
]
] |
2312.05864 | Mathieu D'Aquin | Mathieu d'Aquin | Finding Concept Representations in Neural Networks with Self-Organizing
Maps | Published in proceedings of K-CAP 2023 | null | 10.1145/3587259.3627551 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In sufficiently complex tasks, it is expected that as a side effect of
learning to solve a problem, a neural network will learn relevant abstractions
of the representation of that problem. This has been confirmed in particular in
machine vision where a number of works showed that correlations could be found
between the activations of specific units (neurons) in a neural network and the
visual concepts (textures, colors, objects) present in the image. Here, we
explore the use of self-organizing maps as a way to both visually and
computationally inspect how activation vectors of whole layers of neural
networks correspond to neural representations of abstract concepts such as
`female person' or `realist painter'. We experiment with multiple measures
applied to those maps to assess the level of representation of a concept in a
network's layer. We show that, among the measures tested, the relative entropy
of the activation map for a concept compared to the map for the whole data is a
suitable candidate and can be used as part of a methodology to identify and
locate the neural representation of a concept, visualize it, and understand its
importance in solving the prediction task at hand.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 12:10:34 GMT"
}
] | 1,702,339,200,000 | [
[
"d'Aquin",
"Mathieu",
""
]
] |
2312.05866 | Mathieu D'Aquin | Mathieu d'Aquin | TaBIIC: Taxonomy Building through Iterative and Interactive Clustering | Published in proceedings of FOIS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Building taxonomies is often a significant part of building an ontology, and
many attempts have been made to automate the creation of such taxonomies from
relevant data. The idea in such approaches is either that relevant definitions
of the intension of concepts can be extracted as patterns in the data (e.g. in
formal concept analysis) or that their extension can be built from grouping
data objects based on similarity (clustering). In both cases, the process leads
to an automatically constructed structure, which can either be too coarse and
lacking in definition, or too fined-grained and detailed, therefore requiring
to be refined into the desired taxonomy. In this paper, we explore a method
that takes inspiration from both approaches in an iterative and interactive
process, so that refinement and definition of the concepts in the taxonomy
occur at the time of identifying those concepts in the data. We show that this
method is applicable on a variety of data sources and leads to taxonomies that
can be more directly integrated into ontologies.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 12:17:43 GMT"
}
] | 1,702,339,200,000 | [
[
"d'Aquin",
"Mathieu",
""
]
] |
2312.05875 | Grace Li Zhang | Mengnan Jiang, Jingcun Wang, Amro Eldebiky, Xunzhao Yin, Cheng Zhuo,
Ing-Chao Lin, Grace Li Zhang | Class-Aware Pruning for Efficient Neural Networks | Accepted by Design Automation and Test in Europe (DATE) 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep neural networks (DNNs) have demonstrated remarkable success in various
fields. However, the large number of floating-point operations (FLOPs) in DNNs
poses challenges for their deployment in resource-constrained applications,
e.g., edge devices. To address the problem, pruning has been introduced to
reduce the computational cost in executing DNNs. Previous pruning strategies
are based on weight values, gradient values and activation outputs. Different
from previous pruning solutions, in this paper, we propose a class-aware
pruning technique to compress DNNs, which provides a novel perspective to
reduce the computational cost of DNNs. In each iteration, the neural network
training is modified to facilitate the class-aware pruning. Afterwards, the
importance of filters with respect to the number of classes is evaluated. The
filters that are only important for a few number of classes are removed. The
neural network is then retrained to compensate for the incurred accuracy loss.
The pruning iterations end until no filter can be removed anymore, indicating
that the remaining filters are very important for many classes. This pruning
technique outperforms previous pruning solutions in terms of accuracy, pruning
ratio and the reduction of FLOPs. Experimental results confirm that this
class-aware pruning technique can significantly reduce the number of weights
and FLOPs, while maintaining a high inference accuracy.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 13:07:54 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Feb 2024 16:53:29 GMT"
}
] | 1,708,387,200,000 | [
[
"Jiang",
"Mengnan",
""
],
[
"Wang",
"Jingcun",
""
],
[
"Eldebiky",
"Amro",
""
],
[
"Yin",
"Xunzhao",
""
],
[
"Zhuo",
"Cheng",
""
],
[
"Lin",
"Ing-Chao",
""
],
[
"Zhang",
"Grace Li",
""
]
] |
2312.05877 | Christophe Lecoutre | Gilles Audemard, Christophe Lecoutre, Emmanuel Lonca | Proceedings of the 2023 XCSP3 Competition | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This document represents the proceedings of the 2023 XCSP3 Competition. The
results of this competition of constraint solvers were presented at CP'23 (the
29th International Conference on Principles and Practice of Constraint
Programming, held in Toronto, Canada from 27th to 31th August, 2023).
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 13:11:03 GMT"
}
] | 1,702,339,200,000 | [
[
"Audemard",
"Gilles",
""
],
[
"Lecoutre",
"Christophe",
""
],
[
"Lonca",
"Emmanuel",
""
]
] |
2312.05890 | Luca Marzari | Luca Marzari, Gabriele Roncolato and Alessandro Farinelli | Scaling #DNN-Verification Tools with Efficient Bound Propagation and
Parallel Computing | Accepted at AIRO 2023 the 10th Italian Workshop on Artificial
Intelligence and Robotics co-located with the 22nd International Conference
of the Italian Association for Artificial Intelligence (AI*IA 2023), Rome,
Italy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary
results in many scenarios, ranging from pattern recognition to complex robotic
problems. However, their intricate designs and lack of transparency raise
safety concerns when applied in real-world applications. In this context,
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide
provable guarantees on the safety aspect. Nonetheless, the binary answer (i.e.,
safe or unsafe) could be not informative enough for direct safety interventions
such as safety model ranking or selection. To address this limitation, the FV
problem has recently been extended to the counting version, called
#DNN-Verification, for the computation of the size of the unsafe regions in a
given safety property's domain. Still, due to the complexity of the problem,
existing solutions struggle to scale on real-world robotic scenarios, where the
DNN can be large and complex. To address this limitation, inspired by advances
in FV, in this work, we propose a novel strategy based on reachability analysis
combined with Symbolic Linear Relaxation and parallel computing to enhance the
efficiency of existing exact and approximate FV for DNN counters. The empirical
evaluation on standard FV benchmarks and realistic robotic scenarios shows a
remarkable improvement in scalability and efficiency, enabling the use of such
techniques even for complex robotic applications.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 13:51:25 GMT"
}
] | 1,702,339,200,000 | [
[
"Marzari",
"Luca",
""
],
[
"Roncolato",
"Gabriele",
""
],
[
"Farinelli",
"Alessandro",
""
]
] |
2312.05921 | Zhilin Du | Zhilin Du, Haozhen Li, Zhenyu Liu, Shilong Fan, Xinyu Gu, Lin Zhang | Dig-CSI: A Distributed and Generative Model Assisted CSI Feedback
Training Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of deep learning (DL)-based models has significantly advanced
Channel State Information (CSI) feedback mechanisms in wireless communication
systems. However, traditional approaches often suffer from high communication
overhead and potential privacy risks due to the centralized nature of CSI data
processing. To address these challenges, we design a CSI feedback training
framework called Dig-CSI, in which the dataset for training the CSI feedback
model is produced by the distributed generators uploaded by each user equipment
(UE), but not through local data upload. Each UE trains an autoencoder, where
the decoder is considered as the distributed generator, with local data to gain
reconstruction accuracy and the ability to generate. Experimental results show
that Dig-CSI can train a global CSI feedback model with comparable performance
to the model trained with classical centralized learning with a much lighter
communication overhead.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 15:55:57 GMT"
}
] | 1,702,339,200,000 | [
[
"Du",
"Zhilin",
""
],
[
"Li",
"Haozhen",
""
],
[
"Liu",
"Zhenyu",
""
],
[
"Fan",
"Shilong",
""
],
[
"Gu",
"Xinyu",
""
],
[
"Zhang",
"Lin",
""
]
] |
2312.06034 | Piotr Milkowski | Piotr Mi{\l}kowski, Konrad Karanowski, Patryk Wielopolski, Jan
Koco\'n, Przemys{\l}aw Kazienko, Maciej Zi\k{e}ba | Modeling Uncertainty in Personalized Emotion Prediction with Normalizing
Flows | 10 pages, 8 figures, SENTIRE'23 (ICDM 2023) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing predictive models for subjective problems in natural language
processing (NLP) remains challenging. This is mainly due to its
non-deterministic nature and different perceptions of the content by different
humans. It may be solved by Personalized Natural Language Processing (PNLP),
where the model exploits additional information about the reader to make more
accurate predictions. However, current approaches require complete information
about the recipients to be straight embedded. Besides, the recent methods focus
on deterministic inference or simple frequency-based estimations of the
probabilities. In this work, we overcome this limitation by proposing a novel
approach to capture the uncertainty of the forecast using conditional
Normalizing Flows. This allows us to model complex multimodal distributions and
to compare various models using negative log-likelihood (NLL). In addition, the
new solution allows for various interpretations of possible reader perception
thanks to the available sampling function. We validated our method on three
challenging, subjective NLP tasks, including emotion recognition and hate
speech. The comparative analysis of generalized and personalized approaches
revealed that our personalized solutions significantly outperform the baseline
and provide more precise uncertainty estimates. The impact on the text
interpretability and uncertainty studies are presented as well. The information
brought by the developed methods makes it possible to build hybrid models whose
effectiveness surpasses classic solutions. In addition, an analysis and
visualization of the probabilities of the given decisions for texts with high
entropy of annotations and annotators with mixed views were carried out.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 23:21:41 GMT"
}
] | 1,702,339,200,000 | [
[
"Miłkowski",
"Piotr",
""
],
[
"Karanowski",
"Konrad",
""
],
[
"Wielopolski",
"Patryk",
""
],
[
"Kocoń",
"Jan",
""
],
[
"Kazienko",
"Przemysław",
""
],
[
"Zięba",
"Maciej",
""
]
] |
2312.06037 | Gyeong-Geon Lee Dr | Gyeong-Geon Lee, Lehong Shi, Ehsan Latif, Yizhu Gao, Arne Bewersdorff,
Matthew Nyaaba, Shuchen Guo, Zihao Wu, Zhengliang Liu, Hui Wang, Gengchen
Mai, Tiaming Liu, and Xiaoming Zhai | Multimodality of AI for Education: Towards Artificial General
Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a comprehensive examination of how multimodal artificial
intelligence (AI) approaches are paving the way towards the realization of
Artificial General Intelligence (AGI) in educational contexts. It scrutinizes
the evolution and integration of AI in educational systems, emphasizing the
crucial role of multimodality, which encompasses auditory, visual, kinesthetic,
and linguistic modes of learning. This research delves deeply into the key
facets of AGI, including cognitive frameworks, advanced knowledge
representation, adaptive learning mechanisms, strategic planning, sophisticated
language processing, and the integration of diverse multimodal data sources. It
critically assesses AGI's transformative potential in reshaping educational
paradigms, focusing on enhancing teaching and learning effectiveness, filling
gaps in existing methodologies, and addressing ethical considerations and
responsible usage of AGI in educational settings. The paper also discusses the
implications of multimodal AI's role in education, offering insights into
future directions and challenges in AGI development. This exploration aims to
provide a nuanced understanding of the intersection between AI, multimodality,
and education, setting a foundation for future research and development in AGI.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 23:32:55 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Dec 2023 15:26:38 GMT"
}
] | 1,702,425,600,000 | [
[
"Lee",
"Gyeong-Geon",
""
],
[
"Shi",
"Lehong",
""
],
[
"Latif",
"Ehsan",
""
],
[
"Gao",
"Yizhu",
""
],
[
"Bewersdorff",
"Arne",
""
],
[
"Nyaaba",
"Matthew",
""
],
[
"Guo",
"Shuchen",
""
],
[
"Wu",
"Zihao",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Wang",
"Hui",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Liu",
"Tiaming",
""
],
[
"Zhai",
"Xiaoming",
""
]
] |
2312.06141 | Savya Khosla | Savya Khosla, Zhen Zhu, Yifei He | Survey on Memory-Augmented Neural Networks: Cognitive Insights to AI
Applications | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper explores Memory-Augmented Neural Networks (MANNs), delving into
how they blend human-like memory processes into AI. It covers different memory
types, like sensory, short-term, and long-term memory, linking psychological
theories with AI applications. The study investigates advanced architectures
such as Hopfield Networks, Neural Turing Machines, Correlation Matrix Memories,
Memformer, and Neural Attention Memory, explaining how they work and where they
excel. It dives into real-world uses of MANNs across Natural Language
Processing, Computer Vision, Multimodal Learning, and Retrieval Models, showing
how memory boosters enhance accuracy, efficiency, and reliability in AI tasks.
Overall, this survey provides a comprehensive view of MANNs, offering insights
for future research in memory-based AI systems.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 06:05:09 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Dec 2023 02:13:26 GMT"
}
] | 1,702,512,000,000 | [
[
"Khosla",
"Savya",
""
],
[
"Zhu",
"Zhen",
""
],
[
"He",
"Yifei",
""
]
] |
2312.06231 | Elodie Germani | Elodie Germani (EMPENN), Elisa Fromont (LACODAM), Camille Maumet
(EMPENN) | Uncovering communities of pipelines in the task-fMRI analytical space | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analytical workflows in functional magnetic resonance imaging are highly
flexible with limited best practices as to how to choose a pipeline. While it
has been shown that the use of different pipelines might lead to different
results, there is still a lack of understanding of the factors that drive these
differences and of the stability of these differences across contexts. We use
community detection algorithms to explore the pipeline space and assess the
stability of pipeline relationships across different contexts. We show that
there are subsets of pipelines that give similar results, especially those
sharing specific parameters (e.g. number of motion regressors, software
packages, etc.). Those pipeline-to-pipeline patterns are stable across groups
of participants but not across different tasks. By visualizing the differences
between communities, we show that the pipeline space is mainly driven by the
size of the activation area in the brain and the scale of statistic values in
statistic maps.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 09:18:14 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Feb 2024 10:22:21 GMT"
}
] | 1,707,782,400,000 | [
[
"Germani",
"Elodie",
"",
"EMPENN"
],
[
"Fromont",
"Elisa",
"",
"LACODAM"
],
[
"Maumet",
"Camille",
"",
"EMPENN"
]
] |
2312.06261 | Ruonan Liu | Ruonan Liu, Quanhu Zhang, Te Han | Survey on Foundation Models for Prognostics and Health Management in
Industrial Cyber-Physical Systems | Authors of the paper to be re-established | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industrial Cyber-Physical Systems (ICPS) integrate the disciplines of
computer science, communication technology, and engineering, and have emerged
as integral components of contemporary manufacturing and industries. However,
ICPS encounters various challenges in long-term operation, including equipment
failures, performance degradation, and security threats. To achieve efficient
maintenance and management, prognostics and health management (PHM) finds
widespread application in ICPS for critical tasks, including failure
prediction, health monitoring, and maintenance decision-making. The emergence
of large-scale foundation models (LFMs) like BERT and GPT signifies a
significant advancement in AI technology, and ChatGPT stands as a remarkable
accomplishment within this research paradigm, harboring potential for General
Artificial Intelligence. Considering the ongoing enhancement in data
acquisition technology and data processing capability, LFMs are anticipated to
assume a crucial role in the PHM domain of ICPS. However, at present, a
consensus is lacking regarding the application of LFMs to PHM in ICPS,
necessitating systematic reviews and roadmaps to elucidate future directions.
To bridge this gap, this paper elucidates the key components and recent
advances in the underlying model.A comprehensive examination and comprehension
of the latest advances in grand modeling for PHM in ICPS can offer valuable
references for decision makers and researchers in the industrial field while
facilitating further enhancements in the reliability, availability, and safety
of ICPS.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 09:58:46 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Dec 2023 02:50:54 GMT"
},
{
"version": "v3",
"created": "Sat, 20 Jan 2024 12:53:12 GMT"
}
] | 1,705,968,000,000 | [
[
"Liu",
"Ruonan",
""
],
[
"Zhang",
"Quanhu",
""
],
[
"Han",
"Te",
""
]
] |
2312.06297 | Jiangbin Zheng | Jiangbin Zheng, Siyuan Li, Yufei Huang, Zhangyang Gao, Cheng Tan,
Bozhen Hu, Jun Xia, Ge Wang, Stan Z. Li | MMDesign: Multi-Modality Transfer Learning for Generative Protein Design | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein design involves generating protein sequences based on their
corresponding protein backbones. While deep generative models show promise for
learning protein design directly from data, the lack of publicly available
structure-sequence pairings limits their generalization capabilities. Previous
efforts of generative protein design have focused on architectural improvements
and pseudo-data augmentation to overcome this bottleneck. To further address
this challenge, we propose a novel protein design paradigm called MMDesign,
which leverages multi-modality transfer learning. To our knowledge, MMDesign is
the first framework that combines a pretrained structural module with a
pretrained contextual module, using an auto-encoder (AE) based language model
to incorporate prior semantic knowledge of protein sequences. We also introduce
a cross-layer cross-modal alignment algorithm to enable the structural module
to learn long-term temporal information and ensure consistency between
structural and contextual modalities. Experimental results, only training with
the small CATH dataset, demonstrate that our MMDesign framework consistently
outperforms other baselines on various public test sets. To further assess the
biological plausibility of the generated protein sequences and data
distribution, we present systematic quantitative analysis techniques that
provide interpretability and reveal more about the laws of protein design.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 10:59:23 GMT"
}
] | 1,702,339,200,000 | [
[
"Zheng",
"Jiangbin",
""
],
[
"Li",
"Siyuan",
""
],
[
"Huang",
"Yufei",
""
],
[
"Gao",
"Zhangyang",
""
],
[
"Tan",
"Cheng",
""
],
[
"Hu",
"Bozhen",
""
],
[
"Xia",
"Jun",
""
],
[
"Wang",
"Ge",
""
],
[
"Li",
"Stan Z.",
""
]
] |
2312.06432 | Tao Yu | Tao Yu, Zongdian Li, Kei Sakaguchi, Omar Hashash, Walid Saad, Merouane
Debbah | Internet of Federated Digital Twins (IoFDT): Connecting Twins Beyond
Borders for Society 5.0 | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concept of digital twin (DT), which enables the creation of a
programmable, digital representation of physical systems, is expected to
revolutionize future industries and will lie at the heart of the vision of a
future smart society, namely, Society 5.0, in which high integration between
cyber (digital) and physical spaces is exploited to bring economic and societal
advancements. However, the success of such a DT-driven Society 5.0 requires a
synergistic convergence of artificial intelligence and networking technologies
into an integrated, programmable system that can coordinate networks of DTs to
effectively deliver diverse Society 5.0 services. Prior works remain restricted
to either qualitative study, simple analysis or software implementations of a
single DT, and thus, they cannot provide the highly synergistic integration of
digital and physical spaces as required by Society 5.0. In contrast, this paper
envisions a novel concept of an Internet of Federated Digital Twins (IoFDT)
that holistically integrates heterogeneous and physically separated DTs
representing different Society 5.0 services within a single framework and
system. For this concept of IoFDT, we first introduce a hierarchical
architecture that integrates federated DTs through horizontal and vertical
interactions, bridging the cyber and physical spaces to unlock new
possibilities. Then, we discuss the challenges of realizing IoFDT, highlighting
the intricacies across communication, computing, and AI-native networks while
also underscoring potential innovative solutions. Subsequently, we elaborate on
the importance of the implementation of a unified IoFDT platform that
integrates all technical components and orchestrates their interactions,
emphasizing the necessity of practical experimental platforms with a focus on
real-world applications in areas like smart mobility.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 14:56:27 GMT"
}
] | 1,702,339,200,000 | [
[
"Yu",
"Tao",
""
],
[
"Li",
"Zongdian",
""
],
[
"Sakaguchi",
"Kei",
""
],
[
"Hashash",
"Omar",
""
],
[
"Saad",
"Walid",
""
],
[
"Debbah",
"Merouane",
""
]
] |
2312.06490 | Alice Petrov | Alice Petrov, Christian Muise | Automated Planning Techniques for Elementary Proofs in Abstract Algebra | Automated Planning Techniques for Elementary Proofs in Abstract
Algebra. Petrov, A. & Muise, C. In Scheduling and Planning Applications
woRKshop. 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper explores the application of automated planning to automated
theorem proving, which is a branch of automated reasoning concerned with the
development of algorithms and computer programs to construct mathematical
proofs. In particular, we investigate the use of planning to construct
elementary proofs in abstract algebra, which provides a rigorous and axiomatic
framework for studying algebraic structures such as groups, rings, fields, and
modules. We implement basic implications, equalities, and rules in both
deterministic and non-deterministic domains to model commutative rings and
deduce elementary results about them. The success of this initial
implementation suggests that the well-established techniques seen in automated
planning are applicable to the relatively newer field of automated theorem
proving. Likewise, automated theorem proving provides a new, challenging domain
for automated planning.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 16:17:43 GMT"
}
] | 1,702,339,200,000 | [
[
"Petrov",
"Alice",
""
],
[
"Muise",
"Christian",
""
]
] |
2312.06534 | Rebeca D\'iaz-Redondo | Mohamed Soliman Halawa and Rebeca P. D\'iaz-Redondo and Ana
Fern\'andez-Vilas | KPIs-Based Clustering and Visualization of HPC jobs: a Feature Reduction
Approach | 23 pages, 11 figures | IEEE Access, 2021, vol. 9, p. 25522-25543 | 10.1109/ACCESS.2021.3057427 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-Performance Computing (HPC) systems need to be constantly monitored to
ensure their stability. The monitoring systems collect a tremendous amount of
data about different parameters or Key Performance Indicators (KPIs), such as
resource usage, IO waiting time, etc. A proper analysis of this data, usually
stored as time series, can provide insight in choosing the right management
strategies as well as the early detection of issues. In this paper, we
introduce a methodology to cluster HPC jobs according to their KPI indicators.
Our approach reduces the inherent high dimensionality of the collected data by
applying two techniques to the time series: literature-based and variance-based
feature extraction. We also define a procedure to visualize the obtained
clusters by combining the two previous approaches and the Principal Component
Analysis (PCA). Finally, we have validated our contributions on a real data set
to conclude that those KPIs related to CPU usage provide the best cohesion and
separation for clustering analysis and the good results of our visualization
methodology.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 17:13:54 GMT"
}
] | 1,702,339,200,000 | [
[
"Halawa",
"Mohamed Soliman",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Fernández-Vilas",
"Ana",
""
]
] |
2312.06632 | Jiyan He | Jiyan He, Weitao Feng, Yaosen Min, Jingwei Yi, Kunsheng Tang, Shuai
Li, Jie Zhang, Kejiang Chen, Wenbo Zhou, Xing Xie, Weiming Zhang, Nenghai Yu,
Shuxin Zheng | Control Risk for Potential Misuse of Artificial Intelligence in Science | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The expanding application of Artificial Intelligence (AI) in scientific
fields presents unprecedented opportunities for discovery and innovation.
However, this growth is not without risks. AI models in science, if misused,
can amplify risks like creation of harmful substances, or circumvention of
established regulations. In this study, we aim to raise awareness of the
dangers of AI misuse in science, and call for responsible AI development and
use in this domain. We first itemize the risks posed by AI in scientific
contexts, then demonstrate the risks by highlighting real-world examples of
misuse in chemical science. These instances underscore the need for effective
risk management strategies. In response, we propose a system called SciGuard to
control misuse risks for AI models in science. We also propose a red-teaming
benchmark SciMT-Safety to assess the safety of different systems. Our proposed
SciGuard shows the least harmful impact in the assessment without compromising
performance in benign tests. Finally, we highlight the need for a
multidisciplinary and collaborative effort to ensure the safe and ethical use
of AI models in science. We hope that our study can spark productive
discussions on using AI ethically in science among researchers, practitioners,
policymakers, and the public, to maximize benefits and minimize the risks of
misuse.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 18:50:57 GMT"
}
] | 1,702,339,200,000 | [
[
"He",
"Jiyan",
""
],
[
"Feng",
"Weitao",
""
],
[
"Min",
"Yaosen",
""
],
[
"Yi",
"Jingwei",
""
],
[
"Tang",
"Kunsheng",
""
],
[
"Li",
"Shuai",
""
],
[
"Zhang",
"Jie",
""
],
[
"Chen",
"Kejiang",
""
],
[
"Zhou",
"Wenbo",
""
],
[
"Xie",
"Xing",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
],
[
"Zheng",
"Shuxin",
""
]
] |
2312.06646 | Jiaqi Ma | Junwei Deng, Jiaqi Ma | Computational Copyright: Towards A Royalty Model for Music Generative AI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancement of generative AI has given rise to pressing copyright
challenges, particularly in music industry. This paper focuses on the economic
aspects of these challenges, emphasizing that the economic impact constitutes a
central issue in the copyright arena. The complexity of the black-box
generative AI technologies not only suggests but necessitates algorithmic
solutions. However, such solutions have been largely missing, leading to
regulatory challenges in this landscape. We aim to bridge the gap in current
approaches by proposing potential royalty models for revenue sharing on AI
music generation platforms. Our methodology involves a detailed analysis of
existing royalty models in platforms like Spotify and YouTube, and adapting
these to the unique context of AI-generated music. A significant challenge we
address is the attribution of AI-generated music to influential copyrighted
content in the training data. To this end, we present algorithmic solutions
employing data attribution techniques. Our experimental results verify the
effectiveness of these solutions. This research represents a pioneering effort
in integrating technical advancements with economic and legal considerations in
the field of generative AI, offering a computational copyright solution for the
challenges posed by the opaque nature of AI technologies.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 18:57:20 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Feb 2024 17:25:42 GMT"
}
] | 1,707,868,800,000 | [
[
"Deng",
"Junwei",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
2312.06684 | Jianghong Zhou | Jianghong Zhou, Weizhi Du, Md Omar Faruk Rokon, Zhaodong Wang, Jiaxuan
Xu, Isha Shah, Kuang-chih Lee, Musen Wen | Enhanced E-Commerce Attribute Extraction: Innovating with Decorative
Relation Correction and LLAMA 2.0-Based Annotation | 9 pages, 5 images | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | The rapid proliferation of e-commerce platforms accentuates the need for
advanced search and retrieval systems to foster a superior user experience.
Central to this endeavor is the precise extraction of product attributes from
customer queries, enabling refined search, comparison, and other crucial
e-commerce functionalities. Unlike traditional Named Entity Recognition (NER)
tasks, e-commerce queries present a unique challenge owing to the intrinsic
decorative relationship between product types and attributes. In this study, we
propose a pioneering framework that integrates BERT for classification, a
Conditional Random Fields (CRFs) layer for attribute value extraction, and
Large Language Models (LLMs) for data annotation, significantly advancing
attribute recognition from customer inquiries. Our approach capitalizes on the
robust representation learning of BERT, synergized with the sequence decoding
prowess of CRFs, to adeptly identify and extract attribute values. We introduce
a novel decorative relation correction mechanism to further refine the
extraction process based on the nuanced relationships between product types and
attributes inherent in e-commerce data. Employing LLMs, we annotate additional
data to expand the model's grasp and coverage of diverse attributes. Our
methodology is rigorously validated on various datasets, including Walmart,
BestBuy's e-commerce NER dataset, and the CoNLL dataset, demonstrating
substantial improvements in attribute recognition performance. Particularly,
the model showcased promising results during a two-month deployment in
Walmart's Sponsor Product Search, underscoring its practical utility and
effectiveness.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2023 08:26:30 GMT"
}
] | 1,702,425,600,000 | [
[
"Zhou",
"Jianghong",
""
],
[
"Du",
"Weizhi",
""
],
[
"Rokon",
"Md Omar Faruk",
""
],
[
"Wang",
"Zhaodong",
""
],
[
"Xu",
"Jiaxuan",
""
],
[
"Shah",
"Isha",
""
],
[
"Lee",
"Kuang-chih",
""
],
[
"Wen",
"Musen",
""
]
] |
2312.06685 | Shitian Zhao | Shitian Zhao, Zhuowan Li, Yadong Lu, Alan Yuille, Yan Wang | Causal-CoG: A Causal-Effect Look at Context Generation for Boosting
Multi-modal Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While Multi-modal Language Models (MLMs) demonstrate impressive multimodal
ability, they still struggle on providing factual and precise responses for
tasks like visual question answering (VQA). In this paper, we address this
challenge from the perspective of contextual information. We propose Causal
Context Generation, Causal-CoG, which is a prompting strategy that engages
contextual information to enhance precise VQA during inference. Specifically,
we prompt MLMs to generate contexts, i.e, text description of an image, and
engage the generated contexts for question answering. Moreover, we investigate
the advantage of contexts on VQA from a causality perspective, introducing
causality filtering to select samples for which contextual information is
helpful. To show the effectiveness of Causal-CoG, we run extensive experiments
on 10 multimodal benchmarks and show consistent improvements, e.g., +6.30% on
POPE, +13.69% on Vizwiz and +6.43% on VQAv2 compared to direct decoding,
surpassing existing methods. We hope Casual-CoG inspires explorations of
context knowledge in multimodal models, and serves as a plug-and-play strategy
for MLM decoding.
| [
{
"version": "v1",
"created": "Sat, 9 Dec 2023 08:44:41 GMT"
}
] | 1,702,425,600,000 | [
[
"Zhao",
"Shitian",
""
],
[
"Li",
"Zhuowan",
""
],
[
"Lu",
"Yadong",
""
],
[
"Yuille",
"Alan",
""
],
[
"Wang",
"Yan",
""
]
] |
2312.06717 | Peter Chang | Seth Neel and Peter Chang | Privacy Issues in Large Language Models: A Survey | May 2024 update | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This is the first survey of the active area of AI research that focuses on
privacy issues in Large Language Models (LLMs). Specifically, we focus on work
that red-teams models to highlight privacy risks, attempts to build privacy
into the training or inference process, enables efficient data deletion from
trained models to comply with existing privacy regulations, and tries to
mitigate copyright issues. Our focus is on summarizing technical research that
develops algorithms, proves theorems, and runs empirical evaluations. While
there is an extensive body of legal and policy work addressing these challenges
from a different angle, that is not the focus of our survey. Nevertheless,
these works, along with recent legal developments do inform how these technical
problems are formalized, and so we discuss them briefly in Section 1. While we
have made our best effort to include all the relevant work, due to the fast
moving nature of this research we may have missed some recent work. If we have
missed some of your work please contact us, as we will attempt to keep this
survey relatively up to date. We are maintaining a repository with the list of
papers covered in this survey and any relevant code that was publicly available
at https://github.com/safr-ml-lab/survey-llm.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 01:26:53 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jan 2024 21:56:31 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Feb 2024 18:26:08 GMT"
},
{
"version": "v4",
"created": "Thu, 30 May 2024 19:26:05 GMT"
}
] | 1,717,372,800,000 | [
[
"Neel",
"Seth",
""
],
[
"Chang",
"Peter",
""
]
] |
2312.06718 | Haotian Zhang | Haotian Zhang, Semujju Stuart Dereck, Zhicheng Wang, Xianwei Lv, Kang
Xu, Liang Wu, Ye Jia, Jing Wu, Zhuo Long, Wensheng Liang, X.G. Ma, and Ruiyan
Zhuang | Large Scale Foundation Models for Intelligent Manufacturing
Applications: A Survey | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the applications of artificial intelligence especially deep learning
had greatly improved various aspects of intelligent manufacturing, they still
face challenges for wide employment due to the poor generalization ability,
difficulties to establish high-quality training datasets, and unsatisfactory
performance of deep learning methods. The emergence of large scale foundational
models(LSFMs) had triggered a wave in the field of artificial intelligence,
shifting deep learning models from single-task, single-modal, limited data
patterns to a paradigm encompassing diverse tasks, multimodal, and pre-training
on massive datasets. Although LSFMs had demonstrated powerful generalization
capabilities, automatic high-quality training dataset generation and superior
performance across various domains, applications of LSFMs on intelligent
manufacturing were still in their nascent stage. A systematic overview of this
topic was lacking, especially regarding which challenges of deep learning can
be addressed by LSFMs and how these challenges can be systematically tackled.
To fill this gap, this paper systematically expounded current statue of LSFMs
and their advantages in the context of intelligent manufacturing. and compared
comprehensively with the challenges faced by current deep learning models in
various intelligent manufacturing applications. We also outlined the roadmaps
for utilizing LSFMs to address these challenges. Finally, case studies of
applications of LSFMs in real-world intelligent manufacturing scenarios were
presented to illustrate how LSFMs could help industries, improve their
efficiency.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 02:00:18 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Dec 2023 13:55:19 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Dec 2023 15:49:47 GMT"
}
] | 1,703,635,200,000 | [
[
"Zhang",
"Haotian",
""
],
[
"Dereck",
"Semujju Stuart",
""
],
[
"Wang",
"Zhicheng",
""
],
[
"Lv",
"Xianwei",
""
],
[
"Xu",
"Kang",
""
],
[
"Wu",
"Liang",
""
],
[
"Jia",
"Ye",
""
],
[
"Wu",
"Jing",
""
],
[
"Long",
"Zhuo",
""
],
[
"Liang",
"Wensheng",
""
],
[
"Ma",
"X. G.",
""
],
[
"Zhuang",
"Ruiyan",
""
]
] |
2312.06727 | Alexey Yurtin | Alexey Yurtin | A method for recovery of multidimensional time series based on the
detection of behavioral patterns and the use of autoencoders | 15 pages, in Russian language, 2 figure, 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This article presents a method for recovering missing values in
multidimensional time series. The method combines neural network technologies
and an algorithm for searching snippets (behavioral patterns of a time series).
It includes the stages of data preprocessing, recognition and reconstruction,
using convolutional and recurrent neural networks. Experiments have shown high
accuracy of recovery and the advantage of the method over SOTA methods.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 07:50:16 GMT"
}
] | 1,702,425,600,000 | [
[
"Yurtin",
"Alexey",
""
]
] |
2312.06853 | Ching-An Cheng | Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, Adith
Swaminathan | LLF-Bench: Benchmark for Interactive Learning from Language Feedback | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new benchmark, LLF-Bench (Learning from Language Feedback
Benchmark; pronounced as "elf-bench"), to evaluate the ability of AI agents to
interactively learn from natural language feedback and instructions. Learning
from language feedback (LLF) is essential for people, largely because the rich
information this feedback provides can help a learner avoid much of trial and
error and thereby speed up the learning process. Large Language Models (LLMs)
have recently enabled AI agents to comprehend natural language -- and hence AI
agents can potentially benefit from language feedback during learning like
humans do. But existing interactive benchmarks do not assess this crucial
capability: they either use numeric reward feedback or require no learning at
all (only planning or information retrieval). LLF-Bench is designed to fill
this omission. LLF-Bench is a diverse collection of sequential decision-making
tasks that includes user recommendation, poem writing, navigation, and robot
control. The objective of an agent is to interactively solve these tasks based
on their natural-language instructions and the feedback received after taking
actions. Crucially, to ensure that the agent actually "learns" from the
feedback, LLF-Bench implements several randomization techniques (such as
paraphrasing and environment randomization) to ensure that the task isn't
familiar to the agent and that the agent is robust to various verbalizations.
In addition, LLF-Bench provides a unified OpenAI Gym interface for all its
tasks and allows the users to easily configure the information the feedback
conveys (among suggestion, explanation, and instantaneous performance) to study
how agents respond to different types of feedback. Together, these features
make LLF-Bench a unique research platform for developing and testing LLF
agents.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2023 21:49:04 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Dec 2023 06:20:56 GMT"
}
] | 1,702,512,000,000 | [
[
"Cheng",
"Ching-An",
""
],
[
"Kolobov",
"Andrey",
""
],
[
"Misra",
"Dipendra",
""
],
[
"Nie",
"Allen",
""
],
[
"Swaminathan",
"Adith",
""
]
] |
2312.06901 | Renlong Jie | Renlong Jie, Xiaojun Meng, Xin Jiang, Qun Liu | Unsupervised Extractive Summarization with Learnable Length Control
Strategies | accepted by AAAI2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised extractive summarization is an important technique in
information extraction and retrieval. Compared with supervised method, it does
not require high-quality human-labelled summaries for training and thus can be
easily applied for documents with different types, domains or languages. Most
of existing unsupervised methods including TextRank and PACSUM rely on
graph-based ranking on sentence centrality. However, this scorer can not be
directly applied in end-to-end training, and the positional-related prior
assumption is often needed for achieving good summaries. In addition, less
attention is paid to length-controllable extractor, where users can decide to
summarize texts under particular length constraint. This paper introduces an
unsupervised extractive summarization model based on a siamese network, for
which we develop a trainable bidirectional prediction objective between the
selected summary and the original document. Different from the centrality-based
ranking methods, our extractive scorer can be trained in an end-to-end manner,
with no other requirement of positional assumption. In addition, we introduce a
differentiable length control module by approximating 0-1 knapsack solver for
end-to-end length-controllable extracting. Experiments show that our
unsupervised method largely outperforms the centrality-based baseline using a
same sentence encoder. In terms of length control ability, via our trainable
knapsack module, the performance consistently outperforms the strong baseline
without utilizing end-to-end training. Human evaluation further evidences that
our method performs the best among baselines in terms of relevance and
consistency.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 00:15:26 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Dec 2023 09:05:24 GMT"
}
] | 1,702,944,000,000 | [
[
"Jie",
"Renlong",
""
],
[
"Meng",
"Xiaojun",
""
],
[
"Jiang",
"Xin",
""
],
[
"Liu",
"Qun",
""
]
] |
2312.06990 | Prisha Shroff | Prisha Shroff | AI-based Wildfire Prevention, Detection and Suppression System | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Wildfires pose a serious threat to the environment of the world. The global
wildfire season length has increased by 19% and severe wildfires have besieged
nations around the world. Every year, forests are burned by wildfires, causing
vast amounts of carbon dioxide to be released into the atmosphere, contributing
to climate change. There is a need for a system which prevents, detects, and
suppresses wildfires. The AI based Wildfire Prevention, Detection and
Suppression System (WPDSS) is a novel, fully automated, end to end, AI based
solution to effectively predict hotspots and detect wildfires, deploy drones to
spray fire retardant, preventing and suppressing wildfires. WPDSS consists of
four steps. 1. Preprocessing: WPDSS loads real time satellite data from NASA
and meteorological data from NOAA of vegetation, temperature, precipitation,
wind, soil moisture, and land cover for prevention. For detection, it loads the
real time data of Land Cover, Humidity, Temperature, Vegetation, Burned Area
Index, Ozone, and CO2. It uses the process of masking to eliminate not hotspots
and not wildfires such as water bodies, and rainfall. 2. Learning: The AI model
consists of a random forest classifier, which is trained using a labeled
dataset of hotspots and wildfires and not hotspots and not wildfires. 3.
Identification of hotspots and wildfires: WPDSS runs the real time data through
the model to automatically identify hotspots and wildfires. 4. Drone
deployment: The drone flies to the identified hotspot or wildfire location.
WPDSS attained a 98.6% accuracy in identifying hotspots and a 98.7% accuracy in
detecting wildfires. WPDSS will reduce the impacts of climate change, protect
ecosystems and biodiversity, avert huge economic losses, and save human lives.
The power of WPDSS developed can be applied to any location globally to prevent
and suppress wildfires, reducing climate change.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 05:18:23 GMT"
}
] | 1,702,425,600,000 | [
[
"Shroff",
"Prisha",
""
]
] |
2312.07025 | Wei Geng | Wei Geng, Baidi Xiao, Rongpeng Li, Ning Wei, Dong Wang, and Zhifeng
Zhao | Noise Distribution Decomposition based Multi-Agent Distributional
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generally, Reinforcement Learning (RL) agent updates its policy by
repetitively interacting with the environment, contingent on the received
rewards to observed states and undertaken actions. However, the environmental
disturbance, commonly leading to noisy observations (e.g., rewards and states),
could significantly shape the performance of agent. Furthermore, the learning
performance of Multi-Agent Reinforcement Learning (MARL) is more susceptible to
noise due to the interference among intelligent agents. Therefore, it becomes
imperative to revolutionize the design of MARL, so as to capably ameliorate the
annoying impact of noisy rewards. In this paper, we propose a novel
decomposition-based multi-agent distributional RL method by approximating the
globally shared noisy reward by a Gaussian mixture model (GMM) and decomposing
it into the combination of individual distributional local rewards, with which
each agent can be updated locally through distributional RL. Moreover, a
diffusion model (DM) is leveraged for reward generation in order to mitigate
the issue of costly interaction expenditure for learning distributions.
Furthermore, the optimality of the distribution decomposition is theoretically
validated, while the design of loss function is carefully calibrated to avoid
the decomposition ambiguity. We also verify the effectiveness of the proposed
method through extensive simulation experiments with noisy rewards. Besides,
different risk-sensitive policies are evaluated in order to demonstrate the
superiority of distributional RL in different MARL tasks.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 07:24:15 GMT"
}
] | 1,702,425,600,000 | [
[
"Geng",
"Wei",
""
],
[
"Xiao",
"Baidi",
""
],
[
"Li",
"Rongpeng",
""
],
[
"Wei",
"Ning",
""
],
[
"Wang",
"Dong",
""
],
[
"Zhao",
"Zhifeng",
""
]
] |
2312.07086 | Mike Perkins | Mike Perkins (1), Leon Furze (2), Jasper Roe (3), Jason MacVaugh (1)
((1) British University Vietnam, (2) Deakin University, (3) James Cook
University Singapore) | The AI Assessment Scale (AIAS): A Framework for Ethical Integration of
Generative AI in Educational Assessment | This version contains a revised title and the approved text as
published | J Univ Teach Learn Pract, 21(06), 06 | 10.53761/q3azde36 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent developments in Generative Artificial Intelligence (GenAI) have
created a paradigm shift in multiple areas of society, and the use of these
technologies is likely to become a defining feature of education in coming
decades. GenAI offers transformative pedagogical opportunities, while
simultaneously posing ethical and academic challenges. Against this backdrop,
we outline a practical, simple, and sufficiently comprehensive tool to allow
for the integration of GenAI tools into educational assessment: the AI
Assessment Scale (AIAS).
The AIAS empowers educators to select the appropriate level of GenAI usage in
assessments based on the learning outcomes they seek to address. The AIAS
offers greater clarity and transparency for students and educators, provides a
fair and equitable policy tool for institutions to work with, and offers a
nuanced approach which embraces the opportunities of GenAI while recognising
that there are instances where such tools may not be pedagogically appropriate
or necessary.
By adopting a practical, flexible approach that can be implemented quickly,
the AIAS can form a much-needed starting point to address the current
uncertainty and anxiety regarding GenAI in education. As a secondary objective,
we engage with the current literature and advocate for a refocused discourse on
GenAI tools in education, one which foregrounds how technologies can help
support and enhance teaching and learning, which contrasts with the current
focus on GenAI as a facilitator of academic misconduct.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 09:08:36 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Apr 2024 03:15:00 GMT"
}
] | 1,714,003,200,000 | [
[
"Perkins",
"Mike",
""
],
[
"Furze",
"Leon",
""
],
[
"Roe",
"Jasper",
""
],
[
"MacVaugh",
"Jason",
""
]
] |
2312.07122 | Matteo Bortoletto | Matteo Bortoletto, Lei Shi, Andreas Bulling | Neural Reasoning About Agents' Goals, Preferences, and Actions | The 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose the Intuitive Reasoning Network (IRENE) - a novel neural model for
intuitive psychological reasoning about agents' goals, preferences, and actions
that can generalise previous experiences to new situations. IRENE combines a
graph neural network for learning agent and world state representations with a
transformer to encode the task context. When evaluated on the challenging Baby
Intuitions Benchmark, IRENE achieves new state-of-the-art performance on three
out of its five tasks - with up to 48.9% improvement. In contrast to existing
methods, IRENE is able to bind preferences to specific agents, to better
distinguish between rational and irrational agents, and to better understand
the role of blocking obstacles. We also investigate, for the first time, the
influence of the training tasks on test performance. Our analyses demonstrate
the effectiveness of IRENE in combining prior knowledge gained during training
for unseen evaluation tasks.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 09:52:35 GMT"
}
] | 1,702,425,600,000 | [
[
"Bortoletto",
"Matteo",
""
],
[
"Shi",
"Lei",
""
],
[
"Bulling",
"Andreas",
""
]
] |
2312.07130 | Huangxun Chen | Yimo Deng, Huangxun Chen | Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass Safety
Filters of Text-to-Image Models | 23 pages, 11 figures, under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text-to-image (TTI) models offer many innovative services but also raise
ethical concerns due to their potential to generate unethical images. Most
public TTI services employ safety filters to prevent unintended images. In this
work, we introduce the Divide-and-Conquer Attack to circumvent the safety
filters of state-of the-art TTI models, including DALL-E 3 and Midjourney. Our
attack leverages LLMs as text transformation agents to create adversarial
prompts. We design attack helper prompts that effectively guide LLMs to break
down an unethical drawing intent into multiple benign descriptions of
individual image elements, allowing them to bypass safety filters while still
generating unethical images. Because the latent harmful meaning only becomes
apparent when all individual elements are drawn together. Our evaluation
demonstrates that our attack successfully circumvents multiple strong
closed-box safety filters. The comprehensive success rate of DACA bypassing the
safety filters of the state-of-the-art TTI engine DALL-E 3 is above 85%, while
the success rate for bypassing Midjourney V6 exceeds 75%. Our findings have
more severe security implications than methods of manual crafting or iterative
TTI model querying due to lower attack barrier, enhanced interpretability , and
better adaptation to defense. Our prototype is available at:
https://github.com/researchcode001/Divide-and-Conquer-Attack
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 10:04:43 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Feb 2024 08:35:59 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Mar 2024 14:01:56 GMT"
}
] | 1,710,460,800,000 | [
[
"Deng",
"Yimo",
""
],
[
"Chen",
"Huangxun",
""
]
] |
2312.07158 | Yuwei Han | Yuwei Han, Yuni Lai, Yulin Zhu and Kai Zhou | Cost Aware Untargeted Poisoning Attack against Graph Neural Networks, | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have become widely used in the field of graph
mining. However, these networks are vulnerable to structural perturbations.
While many research efforts have focused on analyzing vulnerability through
poisoning attacks, we have identified an inefficiency in current attack losses.
These losses steer the attack strategy towards modifying edges targeting
misclassified nodes or resilient nodes, resulting in a waste of structural
adversarial perturbation. To address this issue, we propose a novel attack loss
framework called the Cost Aware Poisoning Attack (CA-attack) to improve the
allocation of the attack budget by dynamically considering the classification
margins of nodes. Specifically, it prioritizes nodes with smaller positive
margins while postponing nodes with negative margins. Our experiments
demonstrate that the proposed CA-attack significantly enhances existing attack
strategies
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 10:54:02 GMT"
}
] | 1,702,425,600,000 | [
[
"Han",
"Yuwei",
""
],
[
"Lai",
"Yuni",
""
],
[
"Zhu",
"Yulin",
""
],
[
"Zhou",
"Kai",
""
]
] |
2312.07213 | Sibo Zhang | Bihui Yu, Sibo Zhang, Lili Zhou, Jingxuan Wei, Linzhuang Sun, Liping
Bu | Human-computer Interaction for Brain-inspired Computing Based on Machine
Learning And Deep Learning: A Review | 25pages, 8 figures and 4 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The continuous development of artificial intelligence has a profound impact
on biomedicine and other fields, providing new research ideas and technical
methods. Brain-inspired computing is an important intersection between
multimodal technology and biomedical field. Focusing on the application
scenarios of decoding text and speech from brain signals in human-computer
interaction, this paper presents a comprehensive review of the brain-inspired
computing models based on machine learning (ML) and deep learning (DL),
tracking their evolution, application value, challenges and potential research
trends. We first reviews its basic concepts and development history, and
divides its evolution into two stages: recent machine learning and current deep
learning, emphasizing the importance of each stage in the research of
human-computer interaction for brain-inspired computing. In addition, the
latest progress of deep learning in different tasks of human-computer
interaction for brain-inspired computing is reviewed from six perspectives,
such as data sets and different brain signals, and the application of key
technologies in the model is elaborated in detail. Despite significant advances
in brain-inspired computational models, challenges remain to fully exploit
their capabilities, and we provide insights into possible directions for future
academic research. For more detailed information, please visit our GitHub page:
https://github.com/ultracoolHub/brain-inspired-computing.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 12:26:37 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jan 2024 13:51:26 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Mar 2024 02:29:21 GMT"
}
] | 1,710,115,200,000 | [
[
"Yu",
"Bihui",
""
],
[
"Zhang",
"Sibo",
""
],
[
"Zhou",
"Lili",
""
],
[
"Wei",
"Jingxuan",
""
],
[
"Sun",
"Linzhuang",
""
],
[
"Bu",
"Liping",
""
]
] |
2312.07243 | Enshu Liu | Enshu Liu, Xuefei Ning, Huazhong Yang, Yu Wang | A Unified Sampling Framework for Solver Searching of Diffusion
Probabilistic Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed the rapid progress and broad application of
diffusion probabilistic models (DPMs). Sampling from DPMs can be viewed as
solving an ordinary differential equation (ODE). Despite the promising
performance, the generation of DPMs usually consumes much time due to the large
number of function evaluations (NFE). Though recent works have accelerated the
sampling to around 20 steps with high-order solvers, the sample quality with
less than 10 NFE can still be improved. In this paper, we propose a unified
sampling framework (USF) to study the optional strategies for solver. Under
this framework, we further reveal that taking different solving strategies at
different timesteps may help further decrease the truncation error, and a
carefully designed \emph{solver schedule} has the potential to improve the
sample quality by a large margin. Therefore, we propose a new sampling
framework based on the exponential integral formulation that allows free
choices of solver strategy at each step and design specific decisions for the
framework. Moreover, we propose $S^3$, a predictor-based search method that
automatically optimizes the solver schedule to get a better time-quality
trade-off of sampling. We demonstrate that $S^3$ can find outstanding solver
schedules which outperform the state-of-the-art sampling methods on CIFAR-10,
CelebA, ImageNet, and LSUN-Bedroom datasets. Specifically, we achieve 2.69 FID
with 10 NFE and 6.86 FID with 5 NFE on CIFAR-10 dataset, outperforming the SOTA
method significantly. We further apply $S^3$ to Stable-Diffusion model and get
an acceleration ratio of 2$\times$, showing the feasibility of sampling in very
few steps without retraining the neural network.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 13:19:40 GMT"
}
] | 1,702,425,600,000 | [
[
"Liu",
"Enshu",
""
],
[
"Ning",
"Xuefei",
""
],
[
"Yang",
"Huazhong",
""
],
[
"Wang",
"Yu",
""
]
] |
2312.07401 | Dun Zeng | Dun Zeng, Yong Dai, Pengyu Cheng, Longyue Wang, Tianhao Hu, Wanshun
Chen, Nan Du, Zenglin Xu | On Diversified Preferences of Large Language Model Alignment | preprint | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Aligning large language models (LLMs) with human preferences has been
recognized as the key to improving LLMs' interaction quality. However, in this
pluralistic world, human preferences can be diversified due to annotators'
different tastes, which hinders the effectiveness of LLM alignment methods.
This paper presents the first quantitative analysis of commonly used human
feedback datasets to investigate the impact of diversified preferences on
reward modeling. Our analysis reveals a correlation between the calibration
performance of reward models (RMs) and the alignment performance of LLMs. We
find that diversified preference data negatively affect the calibration
performance of RMs on human-shared preferences, such as Harmless\&Helpful,
thereby impairing the alignment performance of LLMs. To address the
ineffectiveness, we propose a novel Multi-Objective Reward learning method
(MORE) to enhance the calibration performance of RMs on shared preferences. We
validate our findings by experiments on three models and five human preference
datasets. Our method significantly improves the prediction calibration of RMs,
leading to better alignment of the Alpaca-7B model with Harmless\&Helpful
preferences. Furthermore, the connection between reward calibration and
preference alignment performance suggests that calibration error can be adopted
as a key metric for evaluating RMs. The open-source code and data are available
at https://github.com/dunzeng/MORE.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 16:17:15 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Dec 2023 16:26:58 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Feb 2024 08:09:02 GMT"
},
{
"version": "v4",
"created": "Wed, 17 Apr 2024 07:28:00 GMT"
}
] | 1,713,398,400,000 | [
[
"Zeng",
"Dun",
""
],
[
"Dai",
"Yong",
""
],
[
"Cheng",
"Pengyu",
""
],
[
"Wang",
"Longyue",
""
],
[
"Hu",
"Tianhao",
""
],
[
"Chen",
"Wanshun",
""
],
[
"Du",
"Nan",
""
],
[
"Xu",
"Zenglin",
""
]
] |
2312.07482 | Rebeca D\'iaz-Redondo | Manar Mohamed Hafez, Rebeca P. D\'iaz Redondo, Ana Fern\'andez-Vilas,
H\'ector Olivera Paz\'o | Classification of retail products: From probabilistic ranking to neural
networks | 17 pages, 8 figures, journal | Applied Sciences, 2021, vol. 11, no 9, p. 4117 | 10.3390/app11094117 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Food retailing is now on an accelerated path to a success penetration into
the digital market by new ways of value creation at all stages of the consumer
decision process. One of the most important imperatives in this path is the
availability of quality data to feed all the process in digital transformation.
But the quality of data is not so obvious if we consider the variety of
products and suppliers in the grocery market. Within this context of digital
transformation of grocery industry, \textit{Midiadia} is Spanish data provider
company that works on converting data from the retailers' products into
knowledge with attributes and insights from the product labels, that is,
maintaining quality data in a dynamic market with a high dispersion of
products. Currently, they manually categorize products (groceries) according to
the information extracted directly (text processing) from the product labelling
and packaging. This paper introduces a solution to automatically categorize the
constantly changing product catalogue into a 3-level food taxonomy. Our
proposal studies three different approaches: a score-based ranking method,
traditional machine learning algorithms, and deep neural networks. Thus, we
provide four different classifiers that support a more efficient and less
error-prone maintenance of groceries catalogues, the main asset of the company.
Finally, we have compared the performance of these three alternatives,
concluding that traditional machine learning algorithms perform better, but
closely followed by the score-based approach.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 18:11:15 GMT"
}
] | 1,702,425,600,000 | [
[
"Hafez",
"Manar Mohamed",
""
],
[
"Redondo",
"Rebeca P. Díaz",
""
],
[
"Fernández-Vilas",
"Ana",
""
],
[
"Pazó",
"Héctor Olivera",
""
]
] |
2312.07635 | Leila Methnani | Leila Methnani, Virginia Dignum, Andreas Theodorou | Clash of the Explainers: Argumentation for Context-Appropriate
Explanations | 17 pages, 3 figures, Accepted at XAI^3 Workshop at ECAI 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding when and why to apply any given eXplainable Artificial
Intelligence (XAI) technique is not a straightforward task. There is no single
approach that is best suited for a given context. This paper aims to address
the challenge of selecting the most appropriate explainer given the context in
which an explanation is required. For AI explainability to be effective,
explanations and how they are presented needs to be oriented towards the
stakeholder receiving the explanation. If -- in general -- no single
explanation technique surpasses the rest, then reasoning over the available
methods is required in order to select one that is context-appropriate. Due to
the transparency they afford, we propose employing argumentation techniques to
reach an agreement over the most suitable explainers from a given set of
possible explainers.
In this paper, we propose a modular reasoning system consisting of a given
mental model of the relevant stakeholder, a reasoner component that solves the
argumentation problem generated by a multi-explainer component, and an AI model
that is to be explained suitably to the stakeholder of interest. By formalising
supporting premises -- and inferences -- we can map stakeholder characteristics
to those of explanation techniques. This allows us to reason over the
techniques and prioritise the best one for the given context, while also
offering transparency into the selection decision.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 09:52:30 GMT"
}
] | 1,702,512,000,000 | [
[
"Methnani",
"Leila",
""
],
[
"Dignum",
"Virginia",
""
],
[
"Theodorou",
"Andreas",
""
]
] |
2312.07637 | Qi Shi Miss | Qi Shi | Responsibility in Extensive Form Games | The 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Two different forms of responsibility, counterfactual and seeing-to-it, have
been extensively discussed in the philosophy and AI in the context of a single
agent or multiple agents acting simultaneously. Although the generalisation of
counterfactual responsibility to a setting where multiple agents act in some
order is relatively straightforward, the same cannot be said about seeing-to-it
responsibility. Two versions of seeing-to-it modality applicable to such
settings have been proposed in the literature. Neither of them perfectly
captures the intuition of responsibility. This paper proposes a definition of
seeing-to-it responsibility for such settings that amalgamate the two
modalities.
This paper shows that the newly proposed notion of responsibility and
counterfactual responsibility are not definable through each other and studies
the responsibility gap for these two forms of responsibility. It shows that
although these two forms of responsibility are not enough to ascribe
responsibility in each possible situation, this gap does not exist if
higher-order responsibility is taken into account.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 10:41:17 GMT"
}
] | 1,702,512,000,000 | [
[
"Shi",
"Qi",
""
]
] |
2312.07711 | Daniel S. Katz | Alejandro Duque, Abdullah Syed, Kastan V. Day, Matthew J. Berry,
Daniel S. Katz, Volodymyr V. Kindratenko | Leveraging Large Language Models to Build and Execute Computational
Workflows | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recent development of large language models (LLMs) with multi-billion
parameters, coupled with the creation of user-friendly application programming
interfaces (APIs), has paved the way for automatically generating and executing
code in response to straightforward human queries. This paper explores how
these emerging capabilities can be harnessed to facilitate complex scientific
workflows, eliminating the need for traditional coding methods. We present
initial findings from our attempt to integrate Phyloflow with OpenAI's
function-calling API, and outline a strategy for developing a comprehensive
workflow management system based on these concepts.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 20:17:13 GMT"
}
] | 1,702,512,000,000 | [
[
"Duque",
"Alejandro",
""
],
[
"Syed",
"Abdullah",
""
],
[
"Day",
"Kastan V.",
""
],
[
"Berry",
"Matthew J.",
""
],
[
"Katz",
"Daniel S.",
""
],
[
"Kindratenko",
"Volodymyr V.",
""
]
] |
2312.07721 | Antonio Busson | Antonio J. G. Busson, Rennan Gaio, Rafael H. Rocha, Francisco
Evangelista, Bruno Rizzi, Luan Carvalho, Rafael Miceli, Marcos Rabaioli,
David Favaro | Saturn Platform: Foundation Model Operations and Generative AI for
Financial Services | null | null | 10.5753/webmedia_estendido.2023.234354 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Saturn is an innovative platform that assists Foundation Model (FM) building
and its integration with IT operations (Ops). It is custom-made to meet the
requirements of data scientists, enabling them to effectively create and
implement FMs while enhancing collaboration within their technical domain. By
offering a wide range of tools and features, Saturn streamlines and automates
different stages of FM development, making it an invaluable asset for data
science teams. This white paper introduces prospective applications of
generative AI models derived from FMs in the financial sector.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 20:28:11 GMT"
}
] | 1,702,512,000,000 | [
[
"Busson",
"Antonio J. G.",
""
],
[
"Gaio",
"Rennan",
""
],
[
"Rocha",
"Rafael H.",
""
],
[
"Evangelista",
"Francisco",
""
],
[
"Rizzi",
"Bruno",
""
],
[
"Carvalho",
"Luan",
""
],
[
"Miceli",
"Rafael",
""
],
[
"Rabaioli",
"Marcos",
""
],
[
"Favaro",
"David",
""
]
] |
2312.07753 | Jayoung Kim | Jayoung Kim, Yehjin Shin, Jeongwhan Choi, Hyowon Wi, Noseong Park | Polynomial-based Self-Attention for Table Representation learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Structured data, which constitutes a significant portion of existing data
types, has been a long-standing research topic in the field of machine
learning. Various representation learning methods for tabular data have been
proposed, ranging from encoder-decoder structures to Transformers. Among these,
Transformer-based methods have achieved state-of-the-art performance not only
in tabular data but also in various other fields, including computer vision and
natural language processing. However, recent studies have revealed that
self-attention, a key component of Transformers, can lead to an oversmoothing
issue. We show that Transformers for tabular data also face this problem, and
to address the problem, we propose a novel matrix polynomial-based
self-attention layer as a substitute for the original self-attention layer,
which enhances model scalability. In our experiments with three representative
table learning models equipped with our proposed layer, we illustrate that the
layer effectively mitigates the oversmoothing problem and enhances the
representation performance of the existing methods, outperforming the
state-of-the-art table representation methods.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 21:49:26 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Dec 2023 09:13:55 GMT"
}
] | 1,702,944,000,000 | [
[
"Kim",
"Jayoung",
""
],
[
"Shin",
"Yehjin",
""
],
[
"Choi",
"Jeongwhan",
""
],
[
"Wi",
"Hyowon",
""
],
[
"Park",
"Noseong",
""
]
] |
2312.07767 | Zelin Xu | Zelin Xu, Tingsong Xiao, Wenchong He, Yu Wang, Zhe Jiang | Spatial Knowledge-Infused Hierarchical Learning: An Application in Flood
Mapping on Earth Imagery | SIGSPATIAL 2023 (Best Paper Award) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning for Earth imagery plays an increasingly important role in
geoscience applications such as agriculture, ecology, and natural disaster
management. Still, progress is often hindered by the limited training labels.
Given Earth imagery with limited training labels, a base deep neural network
model, and a spatial knowledge base with label constraints, our problem is to
infer the full labels while training the neural network. The problem is
challenging due to the sparse and noisy input labels, spatial uncertainty
within the label inference process, and high computational costs associated
with a large number of sample locations. Existing works on neuro-symbolic
models focus on integrating symbolic logic into neural networks (e.g., loss
function, model architecture, and training label augmentation), but these
methods do not fully address the challenges of spatial data (e.g., spatial
uncertainty, the trade-off between spatial granularity and computational
costs). To bridge this gap, we propose a novel Spatial Knowledge-Infused
Hierarchical Learning (SKI-HL) framework that iteratively infers sample labels
within a multi-resolution hierarchy. Our framework consists of a module to
selectively infer labels in different resolutions based on spatial uncertainty
and a module to train neural network parameters with uncertainty-aware
multi-instance learning. Extensive experiments on real-world flood mapping
datasets show that the proposed model outperforms several baseline methods. The
code is available at \url{https://github.com/ZelinXu2000/SKI-HL}.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 22:23:04 GMT"
}
] | 1,702,512,000,000 | [
[
"Xu",
"Zelin",
""
],
[
"Xiao",
"Tingsong",
""
],
[
"He",
"Wenchong",
""
],
[
"Wang",
"Yu",
""
],
[
"Jiang",
"Zhe",
""
]
] |
2312.07779 | Alexander Meinke | Alexander Meinke and Owain Evans | Tell, don't show: Declarative facts influence how LLMs generalize | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine how large language models (LLMs) generalize from abstract
declarative statements in their training data. As an illustration, consider an
LLM that is prompted to generate weather reports for London in 2050. One
possibility is that the temperatures in the reports match the mean and variance
of reports from 2023 (i.e. matching the statistics of pretraining). Another
possibility is that the reports predict higher temperatures, by incorporating
declarative statements about climate change from scientific papers written in
2023. An example of such a declarative statement is "global temperatures will
increase by $1^{\circ} \mathrm{C}$ by 2050".
To test the influence of abstract declarative statements, we construct tasks
in which LLMs are finetuned on both declarative and procedural information. We
find that declarative statements influence model predictions, even when they
conflict with procedural information. In particular, finetuning on a
declarative statement $S$ increases the model likelihood for logical
consequences of $S$. The effect of declarative statements is consistent across
three domains: aligning an AI assistant, predicting weather, and predicting
demographic features. Through a series of ablations, we show that the effect of
declarative statements cannot be explained by associative learning based on
matching keywords. Nevertheless, the effect of declarative statements on model
likelihoods is small in absolute terms and increases surprisingly little with
model size (i.e. from 330 million to 175 billion parameters). We argue that
these results have implications for AI risk (in relation to the "treacherous
turn") and for fairness.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 22:47:42 GMT"
}
] | 1,702,512,000,000 | [
[
"Meinke",
"Alexander",
""
],
[
"Evans",
"Owain",
""
]
] |
2312.07838 | Alexis Tsoukias | Berkay H. Tosunlu and Joseph H.A. Guillaume and Alexis Tsouki\`as | Conflict Transformation and Management. From Cognitive Maps to Value
Trees | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Conflict transformation and management are complex decision processes with
extremely high stakes at hand and could greatly benefit from formal approaches
to decision support. For this purpose we develop a general framework about how
to use problem structuring methods for such purposes. More precisely we show
how to transform cognitive maps to value trees in order to promote a more
design-oriented approach to decision support aiming at constructing innovative
solutions for conflict management purposes. We show that our findings have a
much wider validity since they allow to move from a descriptive representation
of a problem situation to a more prescriptive one using formal procedures and
models.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 02:06:20 GMT"
}
] | 1,702,512,000,000 | [
[
"Tosunlu",
"Berkay H.",
""
],
[
"Guillaume",
"Joseph H. A.",
""
],
[
"Tsoukiàs",
"Alexis",
""
]
] |
2312.07850 | Kezhi Wang | Feibo Jiang, Li Dong, Yubo Peng, Kezhi Wang, Kun Yang, Cunhua Pan,
Dusit Niyato, Octavia A. Dobre | Large Language Model Enhanced Multi-Agent Systems for 6G Communications | Submitted for possible journal publication | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid development of the Large Language Model (LLM) presents huge
opportunities for 6G communications, e.g., network optimization and management
by allowing users to input task requirements to LLMs by nature language.
However, directly applying native LLMs in 6G encounters various challenges,
such as a lack of private communication data and knowledge, limited logical
reasoning, evaluation, and refinement abilities. Integrating LLMs with the
capabilities of retrieval, planning, memory, evaluation and reflection in
agents can greatly enhance the potential of LLMs for 6G communications. To this
end, we propose a multi-agent system with customized communication knowledge
and tools for solving communication related tasks using natural language,
comprising three components: (1) Multi-agent Data Retrieval (MDR), which
employs the condensate and inference agents to refine and summarize
communication knowledge from the knowledge base, expanding the knowledge
boundaries of LLMs in 6G communications; (2) Multi-agent Collaborative Planning
(MCP), which utilizes multiple planning agents to generate feasible solutions
for the communication related task from different perspectives based on the
retrieved knowledge; (3) Multi-agent Evaluation and Reflecxion (MER), which
utilizes the evaluation agent to assess the solutions, and applies the
reflexion agent and refinement agent to provide improvement suggestions for
current solutions. Finally, we validate the effectiveness of the proposed
multi-agent system by designing a semantic communication system, as a case
study of 6G communications.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 02:35:57 GMT"
}
] | 1,702,512,000,000 | [
[
"Jiang",
"Feibo",
""
],
[
"Dong",
"Li",
""
],
[
"Peng",
"Yubo",
""
],
[
"Wang",
"Kezhi",
""
],
[
"Yang",
"Kun",
""
],
[
"Pan",
"Cunhua",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Dobre",
"Octavia A.",
""
]
] |
2312.07876 | Wei Zhao | Wei Zhao, Zhe Li, Jun Sun | Causality Analysis for Evaluating the Security of Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) such as GPT and Llama2 are increasingly adopted
in many safety-critical applications. Their security is thus essential. Even
with considerable efforts spent on reinforcement learning from human feedback
(RLHF), recent studies have shown that LLMs are still subject to attacks such
as adversarial perturbation and Trojan attacks. Further research is thus needed
to evaluate their security and/or understand the lack of it. In this work, we
propose a framework for conducting light-weight causality-analysis of LLMs at
the token, layer, and neuron level. We applied our framework to open-source
LLMs such as Llama2 and Vicuna and had multiple interesting discoveries. Based
on a layer-level causality analysis, we show that RLHF has the effect of
overfitting a model to harmful prompts. It implies that such security can be
easily overcome by `unusual' harmful prompts. As evidence, we propose an
adversarial perturbation method that achieves 100\% attack success rate on the
red-teaming tasks of the Trojan Detection Competition 2023. Furthermore, we
show the existence of one mysterious neuron in both Llama2 and Vicuna that has
an unreasonably high causal effect on the output. While we are uncertain on why
such a neuron exists, we show that it is possible to conduct a ``Trojan''
attack targeting that particular neuron to completely cripple the LLM, i.e., we
can generate transferable suffixes to prompts that frequently make the LLM
produce meaningless responses.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 03:35:43 GMT"
}
] | 1,702,512,000,000 | [
[
"Zhao",
"Wei",
""
],
[
"Li",
"Zhe",
""
],
[
"Sun",
"Jun",
""
]
] |
2312.07993 | Zeynep G. Saribatur | Zeynep G. Saribatur and Stefan Woltran | A Unified View on Forgetting and Strong Equivalence Notions in Answer
Set Programming | This is an extended version of a paper to be published at AAAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a prominent rule-based language for knowledge
representation and reasoning with roots in logic programming and non-monotonic
reasoning. The aim to capture the essence of removing (ir)relevant details in
ASP programs led to the investigation of different notions, from strong
persistence (SP) forgetting, to faithful abstractions, and, recently, strong
simplifications, where the latter two can be seen as relaxed and strengthened
notions of forgetting, respectively. Although it was observed that these
notions are related, especially given that they have characterizations through
the semantics for strong equivalence, it remained unclear whether they can be
brought together. In this work, we bridge this gap by introducing a novel
relativized equivalence notion, which is a relaxation of the recent
simplification notion, that is able to capture all related notions from the
literature. We provide necessary and sufficient conditions for relativized
simplifiability, which shows that the challenging part is for when the context
programs do not contain all the atoms to remove. We then introduce an operator
that combines projection and a relaxation of (SP)-forgetting to obtain the
relativized simplifications. We furthermore present complexity results that
complete the overall picture.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 09:05:48 GMT"
}
] | 1,702,512,000,000 | [
[
"Saribatur",
"Zeynep G.",
""
],
[
"Woltran",
"Stefan",
""
]
] |
2312.08064 | Evdoxia Taka | Evdoxia Taka, Yuri Nakao, Ryosuke Sonoda, Takuya Yokota, Lin Luo,
Simone Stumpf | Exploring the Impact of Lay User Feedback for Improving AI Fairness | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fairness in AI is a growing concern for high-stakes decision making. Engaging
stakeholders, especially lay users, in fair AI development is promising yet
overlooked. Recent efforts explore enabling lay users to provide AI
fairness-related feedback, but there is still a lack of understanding of how to
integrate users' feedback into an AI model and the impacts of doing so. To
bridge this gap, we collected feedback from 58 lay users on the fairness of a
XGBoost model trained on the Home Credit dataset, and conducted offline
experiments to investigate the effects of retraining models on accuracy, and
individual and group fairness. Our work contributes baseline results of
integrating user fairness feedback in XGBoost, and a dataset and code framework
to bootstrap research in engaging stakeholders in AI fairness. Our discussion
highlights the challenges of employing user feedback in AI fairness and points
the way to a future application area of interactive machine learning.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 11:17:29 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Dec 2023 14:35:54 GMT"
}
] | 1,702,944,000,000 | [
[
"Taka",
"Evdoxia",
""
],
[
"Nakao",
"Yuri",
""
],
[
"Sonoda",
"Ryosuke",
""
],
[
"Yokota",
"Takuya",
""
],
[
"Luo",
"Lin",
""
],
[
"Stumpf",
"Simone",
""
]
] |
2312.08084 | Tianshuo Peng | Tianshuo Peng, Zuchao Li, Ping Wang, Lefei Zhang, Hai Zhao | A Novel Energy based Model Mechanism for Multi-modal Aspect-Based
Sentiment Analysis | AAAI2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal aspect-based sentiment analysis (MABSA) has recently attracted
increasing attention. The span-based extraction methods, such as FSUIE,
demonstrate strong performance in sentiment analysis due to their joint
modeling of input sequences and target labels. However, previous methods still
have certain limitations: (i) They ignore the difference in the focus of visual
information between different analysis targets (aspect or sentiment). (ii)
Combining features from uni-modal encoders directly may not be sufficient to
eliminate the modal gap and can cause difficulties in capturing the image-text
pairwise relevance. (iii) Existing span-based methods for MABSA ignore the
pairwise relevance of target span boundaries. To tackle these limitations, we
propose a novel framework called DQPSA for multi-modal sentiment analysis.
Specifically, our model contains a Prompt as Dual Query (PDQ) module that uses
the prompt as both a visual query and a language query to extract prompt-aware
visual information and strengthen the pairwise relevance between visual
information and the analysis target. Additionally, we introduce an Energy-based
Pairwise Expert (EPE) module that models the boundaries pairing of the analysis
target from the perspective of an Energy-based Model. This expert predicts
aspect or sentiment span based on pairwise stability. Experiments on three
widely used benchmarks demonstrate that DQPSA outperforms previous approaches
and achieves a new state-of-the-art performance.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 12:00:46 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Dec 2023 13:00:27 GMT"
}
] | 1,702,857,600,000 | [
[
"Peng",
"Tianshuo",
""
],
[
"Li",
"Zuchao",
""
],
[
"Wang",
"Ping",
""
],
[
"Zhang",
"Lefei",
""
],
[
"Zhao",
"Hai",
""
]
] |
2312.08157 | Qian Chen | Qian Chen, Taolin Zhang, Dongyang Li, Xiaofeng He | CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal
Feature Removal Problem | Accepted by AAAI2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The minimal feature removal problem in the post-hoc explanation area aims to
identify the minimal feature set (MFS). Prior studies using the greedy
algorithm to calculate the minimal feature set lack the exploration of feature
interactions under a monotonic assumption which cannot be satisfied in general
scenarios. In order to address the above limitations, we propose a Cooperative
Integrated Dynamic Refining method (CIDR) to efficiently discover minimal
feature sets. Specifically, we design Cooperative Integrated Gradients (CIG) to
detect interactions between features. By incorporating CIG and characteristics
of the minimal feature set, we transform the minimal feature removal problem
into a knapsack problem. Additionally, we devise an auxiliary Minimal Feature
Refinement algorithm to determine the minimal feature set from numerous
candidate sets. To the best of our knowledge, our work is the first to address
the minimal feature removal problem in the field of natural language
processing. Extensive experiments demonstrate that CIDR is capable of tracing
representative minimal feature sets with improved interpretability across
various models and datasets.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 14:10:30 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Feb 2024 13:27:28 GMT"
}
] | 1,707,436,800,000 | [
[
"Chen",
"Qian",
""
],
[
"Zhang",
"Taolin",
""
],
[
"Li",
"Dongyang",
""
],
[
"He",
"Xiaofeng",
""
]
] |
2312.08248 | Huan Yan | Huan Yan and Yong Li | A Survey of Generative AI for Intelligent Transportation Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent transportation systems play a crucial role in modern traffic
management and optimization, greatly improving traffic efficiency and safety.
With the rapid development of generative artificial intelligence (Generative
AI) technologies in the fields of image generation and natural language
processing, generative AI has also played a crucial role in addressing key
issues in intelligent transportation systems, such as data sparsity, difficulty
in observing abnormal scenarios, and in modeling data uncertainty. In this
review, we systematically investigate the relevant literature on generative AI
techniques in addressing key issues in different types of tasks in intelligent
transportation systems. First, we introduce the principles of different
generative AI techniques, and their potential applications. Then, we classify
tasks in intelligent transportation systems into four types: traffic
perception, traffic prediction, traffic simulation, and traffic
decision-making. We systematically illustrate how generative AI techniques
addresses key issues in these four different types of tasks. Finally, we
summarize the challenges faced in applying generative AI to intelligent
transportation systems, and discuss future research directions based on
different application scenarios.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 16:13:23 GMT"
}
] | 1,702,512,000,000 | [
[
"Yan",
"Huan",
""
],
[
"Li",
"Yong",
""
]
] |
2312.08403 | Hao Wu | Hao Wu, Yuxuan Liang, Wei Xiong, Zhengyang Zhou, Wei Huang, Shilong
Wang, Kun Wang | Earthfarseer: Versatile Spatio-Temporal Dynamical Systems Modeling in
One Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Efficiently modeling spatio-temporal (ST) physical processes and observations
presents a challenging problem for the deep learning community. Many recent
studies have concentrated on meticulously reconciling various advantages,
leading to designed models that are neither simple nor practical. To address
this issue, this paper presents a systematic study on existing shortcomings
faced by off-the-shelf models, including lack of local fidelity, poor
prediction performance over long time-steps,low scalability, and inefficiency.
To systematically address the aforementioned problems, we propose an
EarthFarseer, a concise framework that combines parallel local convolutions and
global Fourier-based transformer architectures, enabling dynamically capture
the local-global spatial interactions and dependencies. EarthFarseer also
incorporates a multi-scale fully convolutional and Fourier architectures to
efficiently and effectively capture the temporal evolution. Our proposal
demonstrates strong adaptability across various tasks and datasets, with fast
convergence and better local fidelity in long time-steps predictions. Extensive
experiments and visualizations over eight human society physical and natural
physical datasets demonstrates the state-of-the-art performance of
EarthFarseer. We release our code at
https://github.com/easylearningscores/EarthFarseer.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 07:20:24 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2023 16:16:02 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jun 2024 11:46:47 GMT"
}
] | 1,717,459,200,000 | [
[
"Wu",
"Hao",
""
],
[
"Liang",
"Yuxuan",
""
],
[
"Xiong",
"Wei",
""
],
[
"Zhou",
"Zhengyang",
""
],
[
"Huang",
"Wei",
""
],
[
"Wang",
"Shilong",
""
],
[
"Wang",
"Kun",
""
]
] |
2312.08463 | Siddarth Shandeep Singh | Siddarth Singh, Omayma Mahjoub, Ruan de Kock, Wiem Khlifi, Abidine
Vall, Kale-ab Tessera and Arnu Pretorius | How much can change in a year? Revisiting Evaluation in Multi-Agent
Reinforcement Learning | 6 pages, AAAI XAI4DRL workshop 2023; typos corrected, images updated,
page count updated | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Establishing sound experimental standards and rigour is important in any
growing field of research. Deep Multi-Agent Reinforcement Learning (MARL) is
one such nascent field. Although exciting progress has been made, MARL has
recently come under scrutiny for replicability issues and a lack of
standardised evaluation methodology, specifically in the cooperative setting.
Although protocols have been proposed to help alleviate the issue, it remains
important to actively monitor the health of the field. In this work, we extend
the database of evaluation methodology previously published by containing
meta-data on MARL publications from top-rated conferences and compare the
findings extracted from this updated database to the trends identified in their
work. Our analysis shows that many of the worrying trends in performance
reporting remain. This includes the omission of uncertainty quantification, not
reporting all relevant evaluation details and a narrowing of algorithmic
development classes. Promisingly, we do observe a trend towards more difficult
scenarios in SMAC-v1, which if continued into SMAC-v2 will encourage novel
algorithmic development. Our data indicate that replicability needs to be
approached more proactively by the MARL community to ensure trust in the field
as we move towards exciting new frontiers.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 19:06:34 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jan 2024 12:46:42 GMT"
}
] | 1,706,486,400,000 | [
[
"Singh",
"Siddarth",
""
],
[
"Mahjoub",
"Omayma",
""
],
[
"de Kock",
"Ruan",
""
],
[
"Khlifi",
"Wiem",
""
],
[
"Vall",
"Abidine",
""
],
[
"Tessera",
"Kale-ab",
""
],
[
"Pretorius",
"Arnu",
""
]
] |
2312.08466 | Siddarth Shandeep Singh | Omayma Mahjoub, Ruan de Kock, Siddarth Singh, Wiem Khlifi, Abidine
Vall, Kale-ab Tessera and Arnu Pretorius | Efficiently Quantifying Individual Agent Importance in Cooperative MARL | 8 pages, AAAI XAI4DRL workshop 2023; references updated, figure 8
style updated, typos | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Measuring the contribution of individual agents is challenging in cooperative
multi-agent reinforcement learning (MARL). In cooperative MARL, team
performance is typically inferred from a single shared global reward. Arguably,
among the best current approaches to effectively measure individual agent
contributions is to use Shapley values. However, calculating these values is
expensive as the computational complexity grows exponentially with respect to
the number of agents. In this paper, we adapt difference rewards into an
efficient method for quantifying the contribution of individual agents,
referred to as Agent Importance, offering a linear computational complexity
relative to the number of agents. We show empirically that the computed values
are strongly correlated with the true Shapley values, as well as the true
underlying individual agent rewards, used as the ground truth in environments
where these are available. We demonstrate how Agent Importance can be used to
help study MARL systems by diagnosing algorithmic failures discovered in prior
MARL benchmarking work. Our analysis illustrates Agent Importance as a valuable
explainability component for future MARL benchmarks.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 19:09:37 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jan 2024 13:07:55 GMT"
}
] | 1,706,486,400,000 | [
[
"Mahjoub",
"Omayma",
""
],
[
"de Kock",
"Ruan",
""
],
[
"Singh",
"Siddarth",
""
],
[
"Khlifi",
"Wiem",
""
],
[
"Vall",
"Abidine",
""
],
[
"Tessera",
"Kale-ab",
""
],
[
"Pretorius",
"Arnu",
""
]
] |
2312.08468 | Siddarth Shandeep Singh | Wiem Khlifi, Siddarth Singh, Omayma Mahjoub, Ruan de Kock, Abidine
Vall, Rihab Gorsane and Arnu Pretorius | On Diagnostics for Understanding Agent Training Behaviour in Cooperative
MARL | 4 pages, AAAI XAI4DRL workshop 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cooperative multi-agent reinforcement learning (MARL) has made substantial
strides in addressing the distributed decision-making challenges. However, as
multi-agent systems grow in complexity, gaining a comprehensive understanding
of their behaviour becomes increasingly challenging. Conventionally, tracking
team rewards over time has served as a pragmatic measure to gauge the
effectiveness of agents in learning optimal policies. Nevertheless, we argue
that relying solely on the empirical returns may obscure crucial insights into
agent behaviour. In this paper, we explore the application of explainable AI
(XAI) tools to gain profound insights into agent behaviour. We employ these
diagnostics tools within the context of Level-Based Foraging and Multi-Robot
Warehouse environments and apply them to a diverse array of MARL algorithms. We
demonstrate how our diagnostics can enhance the interpretability and
explainability of MARL systems, providing a better understanding of agent
behaviour.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 19:10:10 GMT"
}
] | 1,702,598,400,000 | [
[
"Khlifi",
"Wiem",
""
],
[
"Singh",
"Siddarth",
""
],
[
"Mahjoub",
"Omayma",
""
],
[
"de Kock",
"Ruan",
""
],
[
"Vall",
"Abidine",
""
],
[
"Gorsane",
"Rihab",
""
],
[
"Pretorius",
"Arnu",
""
]
] |
2312.08517 | Dong Li | Ruoming Jin and Dong Li | (Debiased) Contrastive Learning Loss for Recommendation (Technical
Report) | This manuscript was initially submitted for review in February 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we perform a systemic examination of the recommendation
losses, including listwise (softmax), pairwise(BPR), and pointwise
(mean-squared error, MSE, and Cosine Contrastive Loss, CCL) losses through the
lens of contrastive learning. We introduce and study both debiased InfoNCE and
mutual information neural estimator (MINE), for the first time, under the
recommendation setting. We also relate and differentiate these two losses with
the BPR loss through the lower bound analysis. Furthermore, we present the
debiased pointwise loss (for both MSE and CCL) and theoretically certify both
iALS and EASE, two of the most popular linear models, are inherently debiased.
The empirical experimental results demonstrate the effectiveness of the
debiased losses and newly introduced mutual-information losses outperform the
existing (biased) ones.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 21:09:56 GMT"
}
] | 1,702,598,400,000 | [
[
"Jin",
"Ruoming",
""
],
[
"Li",
"Dong",
""
]
] |
2312.08520 | Dong Li | Dong Li and Ruoming Jin and Bin Ren | Revisiting Recommendation Loss Functions through Contrastive Learning
(Technical Report) | This manuscript was initially submitted for review in August 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Inspired by the success of contrastive learning, we systematically examine
recommendation losses, including listwise (softmax), pairwise (BPR), and
pointwise (MSE and CCL) losses. In this endeavor, we introduce InfoNCE+, an
optimized generalization of InfoNCE with balance coefficients, and highlight
its performance advantages, particularly when aligned with our new decoupled
contrastive loss, MINE+. We also leverage debiased InfoNCE to debias pointwise
recommendation loss (CCL) as Debiased CCL. Interestingly, our analysis reveals
that linear models like iALS and EASE are inherently debiased. Empirical
results demonstrates the effectiveness of MINE+ and Debiased-CCL.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 21:15:29 GMT"
}
] | 1,702,598,400,000 | [
[
"Li",
"Dong",
""
],
[
"Jin",
"Ruoming",
""
],
[
"Ren",
"Bin",
""
]
] |
2312.08629 | Haiyang Tang | Haiyang Tang, Zhenyi Liu, Dongping Chen, Qingzhao Chu | ChatSOS: LLM-based knowledge Q&A system for safety engineering | in Chinese language | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in large language models (LLMs) have notably propelled
natural language processing (NLP) capabilities, demonstrating significant
potential in safety engineering applications. Despite these advancements, LLMs
face constraints in processing specialized tasks, attributed to factors such as
corpus size, input processing limitations, and privacy concerns. Obtaining
useful information from reliable sources in a limited time is crucial for LLM.
Addressing this, our study introduces an LLM-based Q&A system for safety
engineering, enhancing the comprehension and response accuracy of the model. We
employed prompt engineering to incorporate external knowledge databases, thus
enriching the LLM with up-to-date and reliable information. The system analyzes
historical incident reports through statistical methods, utilizes vector
embedding to construct a vector database, and offers an efficient
similarity-based search functionality. Our findings indicate that the
integration of external knowledge significantly augments the capabilities of
LLM for in-depth problem analysis and autonomous task assignment. It
effectively summarizes accident reports and provides pertinent recommendations.
This integration approach not only expands LLM applications in safety
engineering but also sets a precedent for future developments towards
automation and intelligent systems.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 03:25:23 GMT"
}
] | 1,702,598,400,000 | [
[
"Tang",
"Haiyang",
""
],
[
"Liu",
"Zhenyi",
""
],
[
"Chen",
"Dongping",
""
],
[
"Chu",
"Qingzhao",
""
]
] |
2312.08680 | Yang Gao | Haoyuan Dong, Yang Gao, Haishuai Wang, Hong Yang, Peng Zhang | Heterogeneous Graph Neural Architecture Search with GPT-4 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Heterogeneous graph neural architecture search (HGNAS) represents a powerful
tool for automatically designing effective heterogeneous graph neural networks.
However, existing HGNAS algorithms suffer from inefficient searches and
unstable results. In this paper, we present a new GPT-4 based HGNAS model to
improve the search efficiency and search accuracy of HGNAS. Specifically, we
present a new GPT-4 enhanced Heterogeneous Graph Neural Architecture Search
(GHGNAS for short). The basic idea of GHGNAS is to design a set of prompts that
can guide GPT-4 toward the task of generating new heterogeneous graph neural
architectures. By iteratively asking GPT-4 with the prompts, GHGNAS continually
validates the accuracy of the generated HGNNs and uses the feedback to further
optimize the prompts. Experimental results show that GHGNAS can design new
HGNNs by leveraging the powerful generalization capability of GPT-4. Moreover,
GHGNAS runs more effectively and stably than previous HGNAS models based on
reinforcement learning and differentiable search algorithms.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 06:31:52 GMT"
}
] | 1,702,598,400,000 | [
[
"Dong",
"Haoyuan",
""
],
[
"Gao",
"Yang",
""
],
[
"Wang",
"Haishuai",
""
],
[
"Yang",
"Hong",
""
],
[
"Zhang",
"Peng",
""
]
] |
2312.08702 | Linzhuang Sun | Linzhuang Sun, Nan Xu, Jingxuan Wei, Bihui Yu, Liping Bu, Yin Luo | Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided
by Self-presentation Theory | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Having the ability to empathize is crucial for accurately representing human
behavior during conversations. Despite numerous research aim to improve the
cognitive capability of models by incorporating external knowledge, there has
been limited attention on the sensible and rational expression of the
conversation itself, which are crucial components of the cognitive empathy.
Guided by self-presentation theory in sociology, we have designed an innovative
categorical approach that segregates historical dialogues into sensible and
rational sentences and subsequently elucidate the context through the designed
attention mechanism. However, the rational information within the conversation
is restricted and the external knowledge used in previous methods have
limitations of semantic contradiction and narrow vision field. Considering the
impressive performance of LLM in the domain of intelligent agent. We employ
LLaMA2-70b as a rational brain to analyze the profound logical information
maintained in conversations, which assists the model assessing the balance of
sensibility and rationality to produce quality empathetic responses.
Experimental evaluations demonstrate that our method outperforms other
comparable methods on both automatic and human evaluations.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 07:38:12 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Dec 2023 07:16:58 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Jan 2024 01:41:51 GMT"
}
] | 1,704,240,000,000 | [
[
"Sun",
"Linzhuang",
""
],
[
"Xu",
"Nan",
""
],
[
"Wei",
"Jingxuan",
""
],
[
"Yu",
"Bihui",
""
],
[
"Bu",
"Liping",
""
],
[
"Luo",
"Yin",
""
]
] |
2312.08722 | M\"uge Kural | M\"uge Kural, Ali Gebe\c{s}\c{c}e, Tilek Chubakov, G\"ozde G\"ul
\c{S}ahin | Quantifying Divergence for Human-AI Collaboration and Cognitive Trust | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Predicting the collaboration likelihood and measuring cognitive trust to AI
systems is more important than ever. To do that, previous research mostly focus
solely on the model features (e.g., accuracy, confidence) and ignore the human
factor. To address that, we propose several decision-making similarity measures
based on divergence metrics (e.g., KL, JSD) calculated over the labels acquired
from humans and a wide range of models. We conduct a user study on a textual
entailment task, where the users are provided with soft labels from various
models and asked to pick the closest option to them. The users are then shown
the similarities/differences to their most similar model and are surveyed for
their likelihood of collaboration and cognitive trust to the selected system.
Finally, we qualitatively and quantitatively analyze the relation between the
proposed decision-making similarity measures and the survey results. We find
that people tend to collaborate with their most similar models -- measured via
JSD -- yet this collaboration does not necessarily imply a similar level of
cognitive trust. We release all resources related to the user study (e.g.,
design, outputs), models, and metrics at our repo.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 08:08:19 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jan 2024 08:46:56 GMT"
}
] | 1,705,622,400,000 | [
[
"Kural",
"Müge",
""
],
[
"Gebeşçe",
"Ali",
""
],
[
"Chubakov",
"Tilek",
""
],
[
"Şahin",
"Gözde Gül",
""
]
] |
2312.08762 | Liqi He | Liqi He, Zuchao Li, Xiantao Cai, Ping Wang | Multi-modal Latent Space Learning for Chain-of-Thought Reasoning in
Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chain-of-thought (CoT) reasoning has exhibited impressive performance in
language models for solving complex tasks and answering questions. However,
many real-world questions require multi-modal information, such as text and
images. Previous research on multi-modal CoT has primarily focused on
extracting fixed image features from off-the-shelf vision models and then
fusing them with text using attention mechanisms. This approach has limitations
because these vision models were not designed for complex reasoning tasks and
do not align well with language thoughts. To overcome this limitation, we
introduce a novel approach for multi-modal CoT reasoning that utilizes latent
space learning via diffusion processes to generate effective image features
that align with language thoughts. Our method fuses image features and text
representations at a deep level and improves the complex reasoning ability of
multi-modal CoT. We demonstrate the efficacy of our proposed method on
multi-modal ScienceQA and machine translation benchmarks, achieving
state-of-the-art performance on ScienceQA. Overall, our approach offers a more
robust and effective solution for multi-modal reasoning in language models,
enhancing their ability to tackle complex real-world problems.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 09:13:09 GMT"
}
] | 1,702,598,400,000 | [
[
"He",
"Liqi",
""
],
[
"Li",
"Zuchao",
""
],
[
"Cai",
"Xiantao",
""
],
[
"Wang",
"Ping",
""
]
] |
2312.08827 | Song Gao | Song Gao | Artificial Intelligence and Human Geography | 12 pages; chapter in the Encyclopedia of Human Geography | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper examines the recent advances and applications of AI in human
geography especially the use of machine (deep) learning, including place
representation and modeling, spatial analysis and predictive mapping, and urban
planning and design. AI technologies have enabled deeper insights into complex
human-environment interactions, contributing to more effective scientific
exploration, understanding of social dynamics, and spatial decision-making.
Furthermore, human geography offers crucial contributions to AI, particularly
in context-aware model development, human-centered design, biases and ethical
considerations, and data privacy. The synergy beween AI and human geography is
essential for addressing global challenges like disaster resilience, poverty,
and equitable resource access. This interdisciplinary collaboration between AI
and geography will help advance the development of GeoAI and promise a better
and sustainable world for all.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 11:20:22 GMT"
}
] | 1,702,598,400,000 | [
[
"Gao",
"Song",
""
]
] |
2312.09009 | Dapeng Li | Dapeng Li, Na Lou, Bin Zhang, Zhiwei Xu, Guoliang Fan | Adaptive parameter sharing for multi-agent reinforcement learning | 5 pages, accepted for ICASSP 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parameter sharing, as an important technique in multi-agent systems, can
effectively solve the scalability issue in large-scale agent problems. However,
the effectiveness of parameter sharing largely depends on the environment
setting. When agents have different identities or tasks, naive parameter
sharing makes it difficult to generate sufficiently differentiated strategies
for agents. Inspired by research pertaining to the brain in biology, we propose
a novel parameter sharing method. It maps each type of agent to different
regions within a shared network based on their identity, resulting in distinct
subnetworks. Therefore, our method can increase the diversity of strategies
among different agents without introducing additional training parameters.
Through experiments conducted in multiple environments, our method has shown
better performance than other parameter sharing methods.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 15:00:32 GMT"
}
] | 1,702,598,400,000 | [
[
"Li",
"Dapeng",
""
],
[
"Lou",
"Na",
""
],
[
"Zhang",
"Bin",
""
],
[
"Xu",
"Zhiwei",
""
],
[
"Fan",
"Guoliang",
""
]
] |
2312.09050 | Lingqiang Chen | Lingqiang Chen, Qinglin Zhao, Guanghui Li, Mengchu Zhou, Chenglong
Dai, and Yiming Feng | A Sparse Cross Attention-based Graph Convolution Network with Auxiliary
Information Awareness for Traffic Flow Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep graph convolution networks (GCNs) have recently shown excellent
performance in traffic prediction tasks. However, they face some challenges.
First, few existing models consider the influence of auxiliary information,
i.e., weather and holidays, which may result in a poor grasp of
spatial-temporal dynamics of traffic data. Second, both the construction of a
dynamic adjacent matrix and regular graph convolution operations have quadratic
computation complexity, which restricts the scalability of GCN-based models. To
address such challenges, this work proposes a deep encoder-decoder model
entitled AIMSAN. It contains an auxiliary information-aware module (AIM) and
sparse cross attention-based graph convolution network (SAN). The former learns
multi-attribute auxiliary information and obtains its embedded presentation of
different time-window sizes. The latter uses a cross-attention mechanism to
construct dynamic adjacent matrices by fusing traffic data and embedded
auxiliary data. Then, SAN applies diffusion GCN on traffic data to mine rich
spatial-temporal dynamics. Furthermore, AIMSAN considers and uses the spatial
sparseness of traffic nodes to reduce the quadratic computation complexity.
Experimental results on three public traffic datasets demonstrate that the
proposed method outperforms other counterparts in terms of various performance
indices. Specifically, the proposed method has competitive performance with the
state-of-the-art algorithms but saves 35.74% of GPU memory usage, 42.25% of
training time, and 45.51% of validation time on average.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 15:48:23 GMT"
}
] | 1,702,598,400,000 | [
[
"Chen",
"Lingqiang",
""
],
[
"Zhao",
"Qinglin",
""
],
[
"Li",
"Guanghui",
""
],
[
"Zhou",
"Mengchu",
""
],
[
"Dai",
"Chenglong",
""
],
[
"Feng",
"Yiming",
""
]
] |
2312.09219 | Bo Xiong | Bo Xiong, Mojtaba Nayyeri, Linhao Luo, Zihao Wang, Shirui Pan, Steffen
Staab | NestE: Modeling Nested Relational Structures for Knowledge Graph
Reasoning | The 38th Annual AAAI Conference on Artificial Intelligence (AAAI'24) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reasoning with knowledge graphs (KGs) has primarily focused on triple-shaped
facts. Recent advancements have been explored to enhance the semantics of these
facts by incorporating more potent representations, such as hyper-relational
facts. However, these approaches are limited to \emph{atomic facts}, which
describe a single piece of information. This paper extends beyond \emph{atomic
facts} and delves into \emph{nested facts}, represented by quoted triples where
subjects and objects are triples themselves (e.g., ((\emph{BarackObama},
\emph{holds\_position}, \emph{President}), \emph{succeed\_by},
(\emph{DonaldTrump}, \emph{holds\_position}, \emph{President}))). These nested
facts enable the expression of complex semantics like \emph{situations} over
time and \emph{logical patterns} over entities and relations. In response, we
introduce NestE, a novel KG embedding approach that captures the semantics of
both atomic and nested factual knowledge. NestE represents each atomic fact as
a $1\times3$ matrix, and each nested relation is modeled as a $3\times3$ matrix
that rotates the $1\times3$ atomic fact matrix through matrix multiplication.
Each element of the matrix is represented as a complex number in the
generalized 4D hypercomplex space, including (spherical) quaternions,
hyperbolic quaternions, and split-quaternions. Through thorough analysis, we
demonstrate the embedding's efficacy in capturing diverse logical patterns over
nested facts, surpassing the confines of first-order logic-like expressions.
Our experimental results showcase NestE's significant performance gains over
current baselines in triple prediction and conditional link prediction. The
code and pre-trained models are open available at
https://github.com/xiongbo010/NestE.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 18:49:30 GMT"
}
] | 1,702,598,400,000 | [
[
"Xiong",
"Bo",
""
],
[
"Nayyeri",
"Mojtaba",
""
],
[
"Luo",
"Linhao",
""
],
[
"Wang",
"Zihao",
""
],
[
"Pan",
"Shirui",
""
],
[
"Staab",
"Steffen",
""
]
] |
2312.09397 | Can Cui | Can Cui, Zichong Yang, Yupeng Zhou, Yunsheng Ma, Juanwu Lu, Lingxi Li,
Yaobin Chen, Jitesh Panchal and Ziran Wang | Personalized Autonomous Driving with Large Language Models: Field
Experiments | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Integrating large language models (LLMs) in autonomous vehicles enables
conversation with AI systems to drive the vehicle. However, it also emphasizes
the requirement for such systems to comprehend commands accurately and achieve
higher-level personalization to adapt to the preferences of drivers or
passengers over a more extended period. In this paper, we introduce an
LLM-based framework, Talk2Drive, capable of translating natural verbal commands
into executable controls and learning to satisfy personal preferences for
safety, efficiency, and comfort with a proposed memory module. This is the
first-of-its-kind multi-scenario field experiment that deploys LLMs on a
real-world autonomous vehicle. Experiments showcase that the proposed system
can comprehend human intentions at different intuition levels, ranging from
direct commands like "can you drive faster" to indirect commands like "I am
really in a hurry now". Additionally, we use the takeover rate to quantify the
trust of human drivers in the LLM-based autonomous driving system, where
Talk2Drive significantly reduces the takeover rate in highway, intersection,
and parking scenarios. We also validate that the proposed memory module
considers personalized preferences and further reduces the takeover rate by up
to 65.2% compared with those without a memory module. The experiment video can
be watched at https://www.youtube.com/watch?v=4BWsfPaq1Ro
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2023 23:23:37 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Feb 2024 06:39:22 GMT"
},
{
"version": "v3",
"created": "Wed, 8 May 2024 17:24:33 GMT"
}
] | 1,715,212,800,000 | [
[
"Cui",
"Can",
""
],
[
"Yang",
"Zichong",
""
],
[
"Zhou",
"Yupeng",
""
],
[
"Ma",
"Yunsheng",
""
],
[
"Lu",
"Juanwu",
""
],
[
"Li",
"Lingxi",
""
],
[
"Chen",
"Yaobin",
""
],
[
"Panchal",
"Jitesh",
""
],
[
"Wang",
"Ziran",
""
]
] |
2312.09513 | Yifei Sun | Feng Lu, Wei Li, Yifei Sun, Cheng Song, Yufei Ren, Albert Y. Zomaya | CGS-Mask: Making Time Series Predictions Intuitive for All | Accepted by AAAI24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) has immense potential in time series prediction,
but most explainable tools have limited capabilities in providing a systematic
understanding of important features over time. These tools typically rely on
evaluating a single time point, overlook the time ordering of inputs, and
neglect the time-sensitive nature of time series applications. These factors
make it difficult for users, particularly those without domain knowledge, to
comprehend AI model decisions and obtain meaningful explanations. We propose
CGS-Mask, a post-hoc and model-agnostic cellular genetic strip mask-based
saliency approach to address these challenges. CGS-Mask uses consecutive time
steps as a cohesive entity to evaluate the impact of features on the final
prediction, providing binary and sustained feature importance scores over time.
Our algorithm optimizes the mask population iteratively to obtain the optimal
mask in a reasonable time. We evaluated CGS-Mask on synthetic and real-world
datasets, and it outperformed state-of-the-art methods in elucidating the
importance of features over time. According to our pilot user study via a
questionnaire survey, CGS-Mask is the most effective approach in presenting
easily understandable time series prediction results, enabling users to
comprehend the decision-making process of AI models with ease.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 03:31:21 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2023 02:14:26 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Apr 2024 08:44:25 GMT"
}
] | 1,713,139,200,000 | [
[
"Lu",
"Feng",
""
],
[
"Li",
"Wei",
""
],
[
"Sun",
"Yifei",
""
],
[
"Song",
"Cheng",
""
],
[
"Ren",
"Yufei",
""
],
[
"Zomaya",
"Albert Y.",
""
]
] |
2312.09532 | Bing Liu | Bing Liu | Grounding for Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A core function of intelligence is grounding, which is the process of
connecting the natural language and abstract knowledge to the internal
representation of the real world in an intelligent being, e.g., a human. Human
cognition is grounded in our sensorimotor experiences in the external world and
subjective feelings in our internal world. We use languages to communicate with
each other and the languages are grounded on our shared sensorimotor
experiences and feelings. Without this shard grounding, it is impossible for us
to understand each other because all natural languages are highly abstract and
are only able to describe a tiny portion of what has happened or is happening
in the real world. Although grounding at high or abstract levels has been
studied in different fields and applications, to our knowledge, limited
systematic work at fine-grained levels has been done. With the rapid progress
of large language models (LLMs), it is imperative that we have a sound
understanding of grounding in order to move to the next level of intelligence.
It is also believed that grounding is necessary for Artificial General
Intelligence (AGI). This paper makes an attempt to systematically study this
problem.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 04:45:48 GMT"
}
] | 1,702,857,600,000 | [
[
"Liu",
"Bing",
""
]
] |
2312.09539 | Ting Wang | Xiao Du, Yutong Ye, Pengyu Zhang, Yaning Yang, Mingsong Chen, Ting
Wang | Situation-Dependent Causal Influence-Based Cooperative Multi-agent
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning to collaborate has witnessed significant progress in multi-agent
reinforcement learning (MARL). However, promoting coordination among agents and
enhancing exploration capabilities remain challenges. In multi-agent
environments, interactions between agents are limited in specific situations.
Effective collaboration between agents thus requires a nuanced understanding of
when and how agents' actions influence others. To this end, in this paper, we
propose a novel MARL algorithm named Situation-Dependent Causal Influence-Based
Cooperative Multi-agent Reinforcement Learning (SCIC), which incorporates a
novel Intrinsic reward mechanism based on a new cooperation criterion measured
by situation-dependent causal influence among agents. Our approach aims to
detect inter-agent causal influences in specific situations based on the
criterion using causal intervention and conditional mutual information. This
effectively assists agents in exploring states that can positively impact other
agents, thus promoting cooperation between agents. The resulting update links
coordinated exploration and intrinsic reward distribution, which enhance
overall collaboration and performance. Experimental results on various MARL
benchmarks demonstrate the superiority of our method compared to
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 05:09:32 GMT"
}
] | 1,702,857,600,000 | [
[
"Du",
"Xiao",
""
],
[
"Ye",
"Yutong",
""
],
[
"Zhang",
"Pengyu",
""
],
[
"Yang",
"Yaning",
""
],
[
"Chen",
"Mingsong",
""
],
[
"Wang",
"Ting",
""
]
] |
2312.09546 | Paulo Garcia | Warisa Sritriratanarak and Paulo Garcia | On a Functional Definition of Intelligence | submitted; under review at "Journal of Intelligent Computing, SPJ" | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Without an agreed-upon definition of intelligence, asking "is this system
intelligent?"" is an untestable question. This lack of consensus hinders
research, and public perception, on Artificial Intelligence (AI), particularly
since the rise of generative- and large-language models. Most work on precisely
capturing what we mean by "intelligence" has come from the fields of
philosophy, psychology, and cognitive science. Because these perspectives are
intrinsically linked to intelligence as it is demonstrated by natural
creatures, we argue such fields cannot, and will not, provide a sufficiently
rigorous definition that can be applied to artificial means. Thus, we present
an argument for a purely functional, black-box definition of intelligence,
distinct from how that intelligence is actually achieved; focusing on the
"what", rather than the "how". To achieve this, we first distinguish other
related concepts (sentience, sensation, agency, etc.) from the notion of
intelligence, particularly identifying how these concepts pertain to artificial
intelligent systems. As a result, we achieve a formal definition of
intelligence that is conceptually testable from only external observation, that
suggests intelligence is a continuous variable. We conclude by identifying
challenges that still remain towards quantifiable measurement. This work
provides a useful perspective for both the development of AI, and for public
perception of the capabilities and risks of AI.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 05:46:49 GMT"
}
] | 1,702,857,600,000 | [
[
"Sritriratanarak",
"Warisa",
""
],
[
"Garcia",
"Paulo",
""
]
] |
2312.09561 | Muneera Bano Dr | Muneera Bano, Didar Zowghi, Pip Shea, Georgina Ibarra | Investigating Responsible AI for Scientific Research: An Empirical Study | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Scientific research organizations that are developing and deploying
Artificial Intelligence (AI) systems are at the intersection of technological
progress and ethical considerations. The push for Responsible AI (RAI) in such
institutions underscores the increasing emphasis on integrating ethical
considerations within AI design and development, championing core values like
fairness, accountability, and transparency. For scientific research
organizations, prioritizing these practices is paramount not just for
mitigating biases and ensuring inclusivity, but also for fostering trust in AI
systems among both users and broader stakeholders. In this paper, we explore
the practices at a research organization concerning RAI practices, aiming to
assess the awareness and preparedness regarding the ethical risks inherent in
AI design and development. We have adopted a mixed-method research approach,
utilising a comprehensive survey combined with follow-up in-depth interviews
with selected participants from AI-related projects. Our results have revealed
certain knowledge gaps concerning ethical, responsible, and inclusive AI, with
limitations in awareness of the available AI ethics frameworks. This revealed
an overarching underestimation of the ethical risks that AI technologies can
present, especially when implemented without proper guidelines and governance.
Our findings reveal the need for a holistic and multi-tiered strategy to uplift
capabilities and better support science research teams for responsible,
ethical, and inclusive AI development and deployment.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 06:40:27 GMT"
}
] | 1,702,857,600,000 | [
[
"Bano",
"Muneera",
""
],
[
"Zowghi",
"Didar",
""
],
[
"Shea",
"Pip",
""
],
[
"Ibarra",
"Georgina",
""
]
] |
2312.09658 | Alexander Shukhman | Leonid Legashev, Alexander Shukhman, Vadim Badikov | Algorithms for automatic intents extraction and utterances
classification for goal-oriented dialogue systems | in Russian language This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern machine learning techniques in the natural language processing domain
can be used to automatically generate scripts for goal-oriented dialogue
systems. The current article presents a general framework for studying the
automatic generation of scripts for goal-oriented dialogue systems. A method
for preprocessing dialog data sets in JSON format is described. A comparison is
made of two methods for extracting user intent based on BERTopic and latent
Dirichlet allocation. A comparison has been made of two implemented algorithms
for classifying statements of users of a goal-oriented dialogue system based on
logistic regression and BERT transformer models. The BERT transformer approach
using the bert-base-uncased model showed better results for the three metrics
Precision (0.80), F1-score (0.78) and Matthews correlation coefficient (0.74)
in comparison with other methods.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 10:12:43 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Apr 2024 15:53:27 GMT"
}
] | 1,714,435,200,000 | [
[
"Legashev",
"Leonid",
""
],
[
"Shukhman",
"Alexander",
""
],
[
"Badikov",
"Vadim",
""
]
] |
2312.09693 | Han Wang | Han Wang, Nirmalendu Prakash, Nguyen Khoi Hoang, Ming Shan Hee, Usman
Naseem, Roy Ka-Wei Lee | Prompting Large Language Models for Topic Modeling | 6 pages, 3 figures, IEEE International Conference on Big Data | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic modeling is a widely used technique for revealing underlying thematic
structures within textual data. However, existing models have certain
limitations, particularly when dealing with short text datasets that lack
co-occurring words. Moreover, these models often neglect sentence-level
semantics, focusing primarily on token-level semantics. In this paper, we
propose PromptTopic, a novel topic modeling approach that harnesses the
advanced language understanding of large language models (LLMs) to address
these challenges. It involves extracting topics at the sentence level from
individual documents, then aggregating and condensing these topics into a
predefined quantity, ultimately providing coherent topics for texts of varying
lengths. This approach eliminates the need for manual parameter tuning and
improves the quality of extracted topics. We benchmark PromptTopic against the
state-of-the-art baselines on three vastly diverse datasets, establishing its
proficiency in discovering meaningful topics. Furthermore, qualitative analysis
showcases PromptTopic's ability to uncover relevant topics in multiple
datasets.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 11:15:05 GMT"
}
] | 1,702,857,600,000 | [
[
"Wang",
"Han",
""
],
[
"Prakash",
"Nirmalendu",
""
],
[
"Hoang",
"Nguyen Khoi",
""
],
[
"Hee",
"Ming Shan",
""
],
[
"Naseem",
"Usman",
""
],
[
"Lee",
"Roy Ka-Wei",
""
]
] |
2312.09695 | DaPeng Zhi | Dapeng Zhi, Peixin Wang, Cheng Chen, Min Zhang | Robustness Verification of Deep Reinforcement Learning Based Control
Systems using Reward Martingales | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Reinforcement Learning (DRL) has gained prominence as an effective
approach for control systems. However, its practical deployment is impeded by
state perturbations that can severely impact system performance. Addressing
this critical challenge requires robustness verification about system
performance, which involves tackling two quantitative questions: (i) how to
establish guaranteed bounds for expected cumulative rewards, and (ii) how to
determine tail bounds for cumulative rewards. In this work, we present the
first approach for robustness verification of DRL-based control systems by
introducing reward martingales, which offer a rigorous mathematical foundation
to characterize the impact of state perturbations on system performance in
terms of cumulative rewards. Our verified results provide provably quantitative
certificates for the two questions. We then show that reward martingales can be
implemented and trained via neural networks, against different types of control
policies. Experimental results demonstrate that our certified bounds tightly
enclose simulation outcomes on various DRL-based control systems, indicating
the effectiveness and generality of the proposed approach.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 11:16:47 GMT"
}
] | 1,702,857,600,000 | [
[
"Zhi",
"Dapeng",
""
],
[
"Wang",
"Peixin",
""
],
[
"Chen",
"Cheng",
""
],
[
"Zhang",
"Min",
""
]
] |
2312.09699 | Nicolas Troquard | Nicolas Troquard, Martina De Sanctis, Paola Inverardi, Patrizio
Pelliccione, Gian Luca Scoccia | Social, Legal, Ethical, Empathetic, and Cultural Rules: Compilation and
Reasoning (Extended Version) | In proceedings of the 38th Annual AAAI Conference on Artificial
Intelligence | null | 10.1609/aaai.v38i20.30245 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of AI-based and autonomous systems is raising concerns and
apprehension due to potential negative repercussions stemming from their
behavior or decisions. These systems must be designed to comply with the human
contexts in which they will operate. To this extent, Townsend et al. (2022)
introduce the concept of SLEEC (social, legal, ethical, empathetic, or
cultural) rules that aim to facilitate the formulation, verification, and
enforcement of the rules AI-based and autonomous systems should obey. They lay
out a methodology to elicit them and to let philosophers, lawyers, domain
experts, and others to formulate them in natural language. To enable their
effective use in AI systems, it is necessary to translate these rules
systematically into a formal language that supports automated reasoning. In
this study, we first conduct a linguistic analysis of the SLEEC rules pattern,
which justifies the translation of SLEEC rules into classical logic. Then we
investigate the computational complexity of reasoning about SLEEC rules and
show how logical programming frameworks can be employed to implement SLEEC
rules in practical scenarios. The result is a readily applicable strategy for
implementing AI systems that conform to norms expressed as SLEEC rules.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 11:23:49 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2024 10:09:15 GMT"
}
] | 1,712,102,400,000 | [
[
"Troquard",
"Nicolas",
""
],
[
"De Sanctis",
"Martina",
""
],
[
"Inverardi",
"Paola",
""
],
[
"Pelliccione",
"Patrizio",
""
],
[
"Scoccia",
"Gian Luca",
""
]
] |
2312.09738 | Dingning Liu | Dingning Liu, Xiaomeng Dong, Renrui Zhang, Xu Luo, Peng Gao, Xiaoshui
Huang, Yongshun Gong, Zhihui Wang | 3DAxiesPrompts: Unleashing the 3D Spatial Task Capabilities of GPT-4V | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a new visual prompting method called 3DAxiesPrompts
(3DAP) to unleash the capabilities of GPT-4V in performing 3D spatial tasks.
Our investigation reveals that while GPT-4V exhibits proficiency in discerning
the position and interrelations of 2D entities through current visual prompting
techniques, its abilities in handling 3D spatial tasks have yet to be explored.
In our approach, we create a 3D coordinate system tailored to 3D imagery,
complete with annotated scale information. By presenting images infused with
the 3DAP visual prompt as inputs, we empower GPT-4V to ascertain the spatial
positioning information of the given 3D target image with a high degree of
precision. Through experiments, We identified three tasks that could be stably
completed using the 3DAP method, namely, 2D to 3D Point Reconstruction, 2D to
3D point matching, and 3D Object Detection. We perform experiments on our
proposed dataset 3DAP-Data, the results from these experiments validate the
efficacy of 3DAP-enhanced GPT-4V inputs, marking a significant stride in 3D
spatial task execution.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 12:24:19 GMT"
}
] | 1,702,857,600,000 | [
[
"Liu",
"Dingning",
""
],
[
"Dong",
"Xiaomeng",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Luo",
"Xu",
""
],
[
"Gao",
"Peng",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Gong",
"Yongshun",
""
],
[
"Wang",
"Zhihui",
""
]
] |
2312.09897 | Saul Calderon Ramirez | Nelson Perez-Rojas, Saul Calderon-Ramirez, Martin Solis-Salazar, Mario
Romero-Sandoval, Monica Arias-Monge, Horacio Saggion | A Novel Dataset for Financial Education Text Simplification in Spanish | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text simplification, crucial in natural language processing, aims to make
texts more comprehensible, particularly for specific groups like visually
impaired Spanish speakers, a less-represented language in this field. In
Spanish, there are few datasets that can be used to create text simplification
systems. Our research has the primary objective to develop a Spanish financial
text simplification dataset. We created a dataset with 5,314 complex and
simplified sentence pairs using established simplification rules. We also
compared our dataset with the simplifications generated from GPT-3, Tuner, and
MT5, in order to evaluate the feasibility of data augmentation using these
systems. In this manuscript we present the characteristics of our dataset and
the findings of the comparisons with other systems. The dataset is available at
Hugging face, saul1917/FEINA.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 15:47:08 GMT"
}
] | 1,702,857,600,000 | [
[
"Perez-Rojas",
"Nelson",
""
],
[
"Calderon-Ramirez",
"Saul",
""
],
[
"Solis-Salazar",
"Martin",
""
],
[
"Romero-Sandoval",
"Mario",
""
],
[
"Arias-Monge",
"Monica",
""
],
[
"Saggion",
"Horacio",
""
]
] |
2312.09928 | Kaushik Roy | Amit Sheth and Kaushik Roy | Neurosymbolic Value-Inspired AI (Why, What, and How) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid progression of Artificial Intelligence (AI) systems, facilitated by
the advent of Large Language Models (LLMs), has resulted in their widespread
application to provide human assistance across diverse industries. This trend
has sparked significant discourse centered around the ever-increasing need for
LLM-based AI systems to function among humans as part of human society, sharing
human values, especially as these systems are deployed in high-stakes settings
(e.g., healthcare, autonomous driving, etc.). Towards this end, neurosymbolic
AI systems are attractive due to their potential to enable easy-to-understand
and interpretable interfaces for facilitating value-based decision-making, by
leveraging explicit representations of shared values. In this paper, we
introduce substantial extensions to Khaneman's System one/two framework and
propose a neurosymbolic computational framework called Value-Inspired AI (VAI).
It outlines the crucial components essential for the robust and practical
implementation of VAI systems, aiming to represent and integrate various
dimensions of human values. Finally, we further offer insights into the current
progress made in this direction and outline potential future directions for the
field.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 16:33:57 GMT"
}
] | 1,702,857,600,000 | [
[
"Sheth",
"Amit",
""
],
[
"Roy",
"Kaushik",
""
]
] |
2312.09963 | Matteo Cardellini | Matteo Cardellini, Enrico Giunchiglia, and Marco Maratea | Symbolic Numeric Planning with Patterns | Accepted at AAAI24 | null | 10.1609/aaai.v38i18.29985 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a novel approach for solving linear numeric
planning problems, called Symbolic Pattern Planning. Given a planning problem
$\Pi$, a bound $n$ and a pattern -- defined as an arbitrary sequence of actions
-- we encode the problem of finding a plan for $\Pi$ with bound $n$ as a
formula with fewer variables and/or clauses than the state-of-the-art rolled-up
and relaxed-relaxed-$\exists$ encodings. More importantly, we prove that for
any given bound, it is never the case that the latter two encodings allow
finding a valid plan while ours does not. On the experimental side, we consider
6 other planning systems -- including the ones which participated in this
year's International Planning Competition (IPC) -- and we show that our planner
Patty has remarkably good comparative performances on this year's IPC problems.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 17:20:25 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Jan 2024 14:44:18 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Feb 2024 09:52:37 GMT"
}
] | 1,711,584,000,000 | [
[
"Cardellini",
"Matteo",
""
],
[
"Giunchiglia",
"Enrico",
""
],
[
"Maratea",
"Marco",
""
]
] |
2312.09995 | Miguel Terra-Neves PhD | Miguel Terra-Neves and Jos\'e Amaral and Alexandre Lemos and Rui
Quintino and Pedro Resende and Antonio Alegria | SAT-Based Algorithms for Regular Graph Pattern Matching | Shorter version accepted for publication at AAAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph matching is a fundamental problem in pattern recognition, with many
applications such as software analysis and computational biology. One
well-known type of graph matching problem is graph isomorphism, which consists
of deciding if two graphs are identical. Despite its usefulness, the properties
that one may check using graph isomorphism are rather limited, since it only
allows strict equality checks between two graphs. For example, it does not
allow one to check complex structural properties such as if the target graph is
an arbitrary length sequence followed by an arbitrary size loop.
We propose a generalization of graph isomorphism that allows one to check
such properties through a declarative specification. This specification is
given in the form of a Regular Graph Pattern (ReGaP), a special type of graph,
inspired by regular expressions, that may contain wildcard nodes that represent
arbitrary structures such as variable-sized sequences or subgraphs. We propose
a SAT-based algorithm for checking if a target graph matches a given ReGaP. We
also propose a preprocessing technique for improving the performance of the
algorithm and evaluate it through an extensive experimental evaluation on
benchmarks from the CodeSearchNet dataset.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2023 18:12:44 GMT"
}
] | 1,702,857,600,000 | [
[
"Terra-Neves",
"Miguel",
""
],
[
"Amaral",
"José",
""
],
[
"Lemos",
"Alexandre",
""
],
[
"Quintino",
"Rui",
""
],
[
"Resende",
"Pedro",
""
],
[
"Alegria",
"Antonio",
""
]
] |
2312.10372 | Qihang Ai | Qihang Ai, Jianwu Zhou, Haiyun Jiang, Lemao Liu, Shuming Shi | When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding
and Reasoning | 15 pages, 10 figures, 9 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph data is ubiquitous in the physical world, and it has always been a
challenge to efficiently model graph structures using a unified paradigm for
the understanding and reasoning on various graphs. Moreover, in the era of
large language models, integrating complex graph information into text
sequences has become exceptionally difficult, which hinders the ability to
interact with graph data through natural language instructions.The paper
presents a new paradigm for understanding and reasoning about graph data by
integrating image encoding and multimodal technologies. This approach enables
the comprehension of graph data through an instruction-response format,
utilizing GPT-4V's advanced capabilities. The study evaluates this paradigm on
various graph types, highlighting the model's strengths and weaknesses,
particularly in Chinese OCR performance and complex reasoning tasks. The
findings suggest new direction for enhancing graph data processing and natural
language interaction.
| [
{
"version": "v1",
"created": "Sat, 16 Dec 2023 08:14:11 GMT"
}
] | 1,702,944,000,000 | [
[
"Ai",
"Qihang",
""
],
[
"Zhou",
"Jianwu",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Liu",
"Lemao",
""
],
[
"Shi",
"Shuming",
""
]
] |
2312.10417 | Zhiwei Zha | Zhiwei Zha, Jiaan Wang, Zhixu Li, Xiangru Zhu, Wei Song, Yanghua Xiao | M2ConceptBase: A Fine-grained Aligned Multi-modal Conceptual Knowledge
Base | 12 pages, 7 figures, 7 tables, Submitted to TKDE | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large multi-modal models (LMMs) have demonstrated promising intelligence
owing to the rapid development of pre-training techniques. However, their
fine-grained cross-modal alignment ability is constrained by the coarse
alignment in image-text pairs. This limitation hinders awareness of
fine-grained concepts, resulting in sub-optimal performance. In this paper, we
propose a multi-modal conceptual knowledge base, named M2ConceptBase, which
aims to provide fine-grained alignment between images and concepts.
Specifically, M2ConceptBase models concepts as nodes, associating each with
relevant images and detailed text, thereby enhancing LMMs' cross-modal
alignment with rich conceptual knowledge. To collect concept-image and
concept-description alignments, we propose a context-aware multi-modal symbol
grounding approach that considers context information in existing large-scale
image-text pairs with respect to each concept. A cutting-edge large language
model supplements descriptions for concepts not grounded via our symbol
grounding approach. Finally, our M2ConceptBase contains more than 951K images
and 152K concepts, each associating with an average of 6.27 images and a single
detailed description. We conduct experiments on the OK-VQA task, demonstrating
that our M2ConceptBase facilitates the model in achieving state-of-the-art
performance. Moreover, we construct a comprehensive benchmark to evaluate the
concept understanding of LMMs and show that M2ConceptBase could effectively
improve LMMs' concept understanding and cross-modal alignment abilities.
| [
{
"version": "v1",
"created": "Sat, 16 Dec 2023 11:06:11 GMT"
}
] | 1,702,944,000,000 | [
[
"Zha",
"Zhiwei",
""
],
[
"Wang",
"Jiaan",
""
],
[
"Li",
"Zhixu",
""
],
[
"Zhu",
"Xiangru",
""
],
[
"Song",
"Wei",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
2312.10728 | Andrew Melnik | Andrew Melnik, Robin Schiewer, Moritz Lange, Andrei Muresanu, Mozhgan
Saeidi, Animesh Garg, Helge Ritter | Benchmarks for Physical Reasoning AI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Physical reasoning is a crucial aspect in the development of general AI
systems, given that human learning starts with interacting with the physical
world before progressing to more complex concepts. Although researchers have
studied and assessed the physical reasoning of AI approaches through various
specific benchmarks, there is no comprehensive approach to evaluating and
measuring progress. Therefore, we aim to offer an overview of existing
benchmarks and their solution approaches and propose a unified perspective for
measuring the physical reasoning capacity of AI systems. We select benchmarks
that are designed to test algorithmic performance in physical reasoning tasks.
While each of the selected benchmarks poses a unique challenge, their ensemble
provides a comprehensive proving ground for an AI generalist agent with a
measurable skill level for various physical reasoning concepts. This gives an
advantage to such an ensemble of benchmarks over other holistic benchmarks that
aim to simulate the real world by intertwining its complexity and many
concepts. We group the presented set of physical reasoning benchmarks into
subcategories so that more narrow generalist AI agents can be tested first on
these groups.
| [
{
"version": "v1",
"created": "Sun, 17 Dec 2023 14:24:03 GMT"
}
] | 1,702,944,000,000 | [
[
"Melnik",
"Andrew",
""
],
[
"Schiewer",
"Robin",
""
],
[
"Lange",
"Moritz",
""
],
[
"Muresanu",
"Andrei",
""
],
[
"Saeidi",
"Mozhgan",
""
],
[
"Garg",
"Animesh",
""
],
[
"Ritter",
"Helge",
""
]
] |
2312.10904 | Christopher Mungall | Sabrina Toro, Anna V Anagnostopoulos, Sue Bello, Kai Blumberg,
Rhiannon Cameron, Leigh Carmody, Alexander D Diehl, Damion Dooley, William
Duncan, Petra Fey, Pascale Gaudet, Nomi L Harris, Marcin Joachimiak, Leila
Kiani, Tiago Lubiana, Monica C Munoz-Torres, Shawn O'Neil, David
Osumi-Sutherland, Aleix Puig, Justin P Reese, Leonore Reiser, Sofia Robb,
Troy Ruemping, James Seager, Eric Sid, Ray Stefancsik, Magalie Weber, Valerie
Wood, Melissa A Haendel, Christopher J Mungall | Dynamic Retrieval Augmented Generation of Ontologies using Artificial
Intelligence (DRAGON-AI) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontologies are fundamental components of informatics infrastructure in
domains such as biomedical, environmental, and food sciences, representing
consensus knowledge in an accurate and computable form. However, their
construction and maintenance demand substantial resources, necessitating
substantial collaborative efforts of domain experts, curators, and ontology
experts.
We present Dynamic Retrieval Augmented Generation of Ontologies using AI
(DRAGON-AI), an ontology generation method employing Large Language Models
(LLMs) and Retrieval Augmented Generation (RAG). This method can generate
textual and logical ontology components, drawing from existing knowledge in
multiple ontologies, as well as unstructured textual sources.
We assessed DRAGON-AI across ten diverse ontologies, making use of extensive
manual evaluation of results. We demonstrate high precision for relationship
generation, close to but lower than precision from logic-based reasoning. We
also demonstrate definition generation comparable with but lower than
human-generated definitions. Notably, expert evaluators were better able to
discern subtle flaws in AI-generated definitions. We also demonstrated the
ability of DRAGON-AI to incorporate natural language instructions in the form
of GitHub issues.
These findings suggest DRAGON-AI's potential to substantially aid the manual
ontology construction process. However, our results also underscore the
importance of having expert curators and ontology editors drive the ontology
generation process.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 03:19:31 GMT"
}
] | 1,702,944,000,000 | [
[
"Toro",
"Sabrina",
""
],
[
"Anagnostopoulos",
"Anna V",
""
],
[
"Bello",
"Sue",
""
],
[
"Blumberg",
"Kai",
""
],
[
"Cameron",
"Rhiannon",
""
],
[
"Carmody",
"Leigh",
""
],
[
"Diehl",
"Alexander D",
""
],
[
"Dooley",
"Damion",
""
],
[
"Duncan",
"William",
""
],
[
"Fey",
"Petra",
""
],
[
"Gaudet",
"Pascale",
""
],
[
"Harris",
"Nomi L",
""
],
[
"Joachimiak",
"Marcin",
""
],
[
"Kiani",
"Leila",
""
],
[
"Lubiana",
"Tiago",
""
],
[
"Munoz-Torres",
"Monica C",
""
],
[
"O'Neil",
"Shawn",
""
],
[
"Osumi-Sutherland",
"David",
""
],
[
"Puig",
"Aleix",
""
],
[
"Reese",
"Justin P",
""
],
[
"Reiser",
"Leonore",
""
],
[
"Robb",
"Sofia",
""
],
[
"Ruemping",
"Troy",
""
],
[
"Seager",
"James",
""
],
[
"Sid",
"Eric",
""
],
[
"Stefancsik",
"Ray",
""
],
[
"Weber",
"Magalie",
""
],
[
"Wood",
"Valerie",
""
],
[
"Haendel",
"Melissa A",
""
],
[
"Mungall",
"Christopher J",
""
]
] |
2312.11027 | Jingqing Ruan | Jingqing Ruan, Kaishen Wang, Qingyang Zhang, Dengpeng Xing, Bo Xu | Learning Top-k Subtask Planning Tree based on Discriminative
Representation Pre-training for Decision Making | Accepted by Machine Intelligence Research | null | 10.1007/s11633-023-1483-z | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many complicated real-world tasks can be broken down into smaller, more
manageable parts, and planning with prior knowledge extracted from these
simplified pieces is crucial for humans to make accurate decisions. However,
replicating this process remains a challenge for AI agents and naturally raises
two questions: How to extract discriminative knowledge representation from
priors? How to develop a rational plan to decompose complex problems? Most
existing representation learning methods employing a single encoder structure
are fragile and sensitive to complex and diverse dynamics. To address this
issue, we introduce a multiple-encoder and individual-predictor regime to learn
task-essential representations from sufficient data for simple subtasks.
Multiple encoders can extract adequate task-relevant dynamics without
confusion, and the shared predictor can discriminate the task characteristics.
We also use the attention mechanism to generate a top-k subtask planning tree,
which customizes subtask execution plans in guiding complex decisions on unseen
tasks. This process enables forward-looking and globality by flexibly adjusting
the depth and width of the planning tree. Empirical results on a challenging
platform composed of some basic simple tasks and combinatorially rich synthetic
tasks consistently outperform some competitive baselines and demonstrate the
benefits of our design.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 09:00:31 GMT"
},
{
"version": "v2",
"created": "Mon, 20 May 2024 10:02:25 GMT"
}
] | 1,716,249,600,000 | [
[
"Ruan",
"Jingqing",
""
],
[
"Wang",
"Kaishen",
""
],
[
"Zhang",
"Qingyang",
""
],
[
"Xing",
"Dengpeng",
""
],
[
"Xu",
"Bo",
""
]
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.