id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.11143 | Felipe Trevizan | Dillon Z. Chen and Sylvie Thi\'ebaux and Felipe Trevizan | Learning Domain-Independent Heuristics for Grounded and Lifted Planning | Extended version of AAAI 2024 paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present three novel graph representations of planning tasks suitable for
learning domain-independent heuristics using Graph Neural Networks (GNNs) to
guide search. In particular, to mitigate the issues caused by large grounded
GNNs, we present the first method for learning domain-independent heuristics
with only the lifted representation of a planning task. We also provide a
theoretical analysis of the expressiveness of our models, showing that some are
more powerful than STRIPS-HGN, the only other existing model for learning
domain-independent heuristics. Our experiments show that our heuristics
generalise to much larger problems than those in the training set, vastly
surpassing STRIPS-HGN heuristics.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 12:32:45 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2023 11:11:07 GMT"
}
] | 1,703,116,800,000 | [
[
"Chen",
"Dillon Z.",
""
],
[
"Thiébaux",
"Sylvie",
""
],
[
"Trevizan",
"Felipe",
""
]
] |
2312.11280 | Abhijnan Chakraborty | Daman Deep Singh, Amit Kumar, Abhijnan Chakraborty | Towards Fairness in Online Service with k Servers and its Application on
Fair Food Delivery | AAAI 2024 Conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-SERVER problem is one of the most prominent problems in online
algorithms with several variants and extensions. However, simplifying
assumptions like instantaneous server movements and zero service time have
hitherto limited its applicability to real-world problems. In this paper, we
introduce a realistic generalization of k-SERVER without such assumptions - the
k-FOOD problem, where requests with source-destination locations and an
associated pickup time window arrive in an online fashion, and each has to be
served by exactly one of the available k servers. The k-FOOD problem offers the
versatility to model a variety of real-world use cases such as food delivery,
ride sharing, and quick commerce. Moreover, motivated by the need for fairness
in online platforms, we introduce the FAIR k-FOOD problem with the max-min
objective. We establish that both k-FOOD and FAIR k-FOOD problems are strongly
NP-hard and develop an optimal offline algorithm that arises naturally from a
time-expanded flow network. Subsequently, we propose an online algorithm
DOC4FOOD involving virtual movements of servers to the nearest request
location. Experiments on a real-world food-delivery dataset, alongside
synthetic datasets, establish the efficacy of the proposed algorithm against
state-of-the-art fair food delivery algorithms.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 15:22:03 GMT"
}
] | 1,702,944,000,000 | [
[
"Singh",
"Daman Deep",
""
],
[
"Kumar",
"Amit",
""
],
[
"Chakraborty",
"Abhijnan",
""
]
] |
2312.11364 | Tristan Bester | Tristan Bester, Benjamin Rosman, Steven James, Geraud Nangue Tasse | Counting Reward Automata: Sample Efficient Reinforcement Learning
Through the Exploitation of Reward Function Structure | 14 pages, 11 Figures, Published in AAAI W25: Neuro-Symbolic Learning
and Reasoning in the era of Large Language Models (NuCLeaR) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present counting reward automata, a finite state machine variant capable of
modelling any reward function expressible as a formal language. Unlike previous
approaches, which are limited to the expression of tasks as regular languages,
our framework allows for tasks described by unrestricted grammars. We prove
that an agent equipped with such an abstract machine is able to solve a larger
set of tasks than those utilising current approaches. We show that this
increase in expressive power does not come at the cost of increased automaton
complexity. A selection of learning algorithms are presented which exploit
automaton structure to improve sample efficiency. We show that the state
machines required in our formulation can be specified from natural language
task descriptions using large language models. Empirical results demonstrate
that our method outperforms competing approaches in terms of sample efficiency,
automaton complexity, and task completion.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 17:20:38 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Feb 2024 19:19:37 GMT"
}
] | 1,708,387,200,000 | [
[
"Bester",
"Tristan",
""
],
[
"Rosman",
"Benjamin",
""
],
[
"James",
"Steven",
""
],
[
"Tasse",
"Geraud Nangue",
""
]
] |
2312.11414 | Konstantinos Voudouris | Konstantinos Voudouris, Ibrahim Alhas, Wout Schellaert, Matthew
Crosby, Joel Holmes, John Burden, Niharika Chaubey, Niall Donnelly,
Matishalin Patel, Marta Halina, Jos\'e Hern\'andez-Orallo, Lucy G. Cheke | Animal-AI 3: What's New & Why You Should Care | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The Animal-AI Environment is a unique game-based research platform designed
to serve both the artificial intelligence and cognitive science research
communities. In this paper, we present Animal-AI 3, the latest version of the
environment, outlining several major new features that make the game more
engaging for humans and more complex for AI systems. New features include
interactive buttons, reward dispensers, and player notifications, as well as an
overhaul of the environment's graphics and processing for significant increases
in agent training time and quality of the human player experience. We provide
detailed guidance on how to build computational and behavioural experiments
with Animal-AI 3. We present results from a series of agents, including the
state-of-the-art Deep Reinforcement Learning agent (dreamer-v3), on newly
designed tests and the Animal-AI Testbed of 900 tasks inspired by research in
comparative psychology. Animal-AI 3 is designed to facilitate collaboration
between the cognitive sciences and artificial intelligence. This paper serves
as a stand-alone document that motivates, describes, and demonstrates Animal-AI
3 for the end user.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 18:18:10 GMT"
}
] | 1,702,944,000,000 | [
[
"Voudouris",
"Konstantinos",
""
],
[
"Alhas",
"Ibrahim",
""
],
[
"Schellaert",
"Wout",
""
],
[
"Crosby",
"Matthew",
""
],
[
"Holmes",
"Joel",
""
],
[
"Burden",
"John",
""
],
[
"Chaubey",
"Niharika",
""
],
[
"Donnelly",
"Niall",
""
],
[
"Patel",
"Matishalin",
""
],
[
"Halina",
"Marta",
""
],
[
"Hernández-Orallo",
"José",
""
],
[
"Cheke",
"Lucy G.",
""
]
] |
2312.11527 | Hayet Dahmri | Hayet Dahmri and Salim Bouamama | A Simulated Annealing-Based Multiobjective Optimization Algorithm for
Minimum Weight Minimum Connected Dominating Set Problem | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The minimum connected dominating set problem is an NP-hard combinatorial
optimization problem in graph theory. Finding a connected dominating set is of
high interest in various domains such as wireless sensor networks, optical
networks, and systems biology. Its weighted variant, the minimum weight
connected dominating set problem, is also useful in such applications. In this
paper, we propose a simulated annealing algorithm based on a greedy heuristic
for tackling a variant of the minimum connected dominating set problem by
jointly exploiting two objectives, namely the cardinality and the total weight
of the connected dominating set. Experimental results compared to those
obtained by a recently proposed approach show the superiority of our method.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 13:36:04 GMT"
},
{
"version": "v2",
"created": "Sat, 25 May 2024 13:43:18 GMT"
}
] | 1,716,854,400,000 | [
[
"Dahmri",
"Hayet",
""
],
[
"Bouamama",
"Salim",
""
]
] |
2312.11651 | Fadi Al Machot | Fadi Al Machot | Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced
Reasoning in Neural Models (ASPER) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neural-symbolic learning, an intersection of neural networks and symbolic
reasoning, aims to blend neural networks' learning capabilities with symbolic
AI's interpretability and reasoning. This paper introduces an approach designed
to improve the performance of neural models in learning reasoning tasks. It
achieves this by integrating Answer Set Programming (ASP) solvers and
domain-specific expertise, which is an approach that diverges from traditional
complex neural-symbolic models. In this paper, a shallow artificial neural
network (ANN) is specifically trained to solve Sudoku puzzles with minimal
training data. The model has a unique loss function that integrates losses
calculated using the ASP solver outputs, effectively enhancing its training
efficiency. Most notably, the model shows a significant improvement in solving
Sudoku puzzles using only 12 puzzles for training and testing without
hyperparameter tuning. This advancement indicates that the model's enhanced
reasoning capabilities have practical applications, extending well beyond
Sudoku puzzles to potentially include a variety of other domains. The code can
be found on GitHub: https://github.com/Fadi2200/ASPEN.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 19:06:00 GMT"
}
] | 1,703,030,400,000 | [
[
"Machot",
"Fadi Al",
""
]
] |
2312.11675 | Christian Muise | Christian Muise, Sheila A. McIlraith, J. Christopher Beck | PRP Rebooted: Advancing the State of the Art in FOND Planning | 13 pages, 4 figures, AAAI conference paper Update: Fixed abstract and
typos | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fully Observable Non-Deterministic (FOND) planning is a variant of classical
symbolic planning in which actions are nondeterministic, with an action's
outcome known only upon execution. It is a popular planning paradigm with
applications ranging from robot planning to dialogue-agent design and reactive
synthesis. Over the last 20 years, a number of approaches to FOND planning have
emerged. In this work, we establish a new state of the art, following in the
footsteps of some of the most powerful FOND planners to date. Our planner, PR2,
decisively outperforms the four leading FOND planners, at times by a large
margin, in 17 of 18 domains that represent a comprehensive benchmark suite.
Ablation studies demonstrate the impact of various techniques we introduce,
with the largest improvement coming from our novel FOND-aware heuristic.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 19:40:41 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2023 03:55:15 GMT"
}
] | 1,703,116,800,000 | [
[
"Muise",
"Christian",
""
],
[
"McIlraith",
"Sheila A.",
""
],
[
"Beck",
"J. Christopher",
""
]
] |
2312.11690 | Mehrad Ansari | Mehrad Ansari and Seyed Mohamad Moosavi | Agent-based Learning of Materials Datasets from Scientific Literature | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancements in machine learning and artificial intelligence are transforming
materials discovery. Yet, the availability of structured experimental data
remains a bottleneck. The vast corpus of scientific literature presents a
valuable and rich resource of such data. However, manual dataset creation from
these resources is challenging due to issues in maintaining quality and
consistency, scalability limitations, and the risk of human error and bias.
Therefore, in this work, we develop a chemist AI agent, powered by large
language models (LLMs), to overcome these challenges by autonomously creating
structured datasets from natural language text, ranging from sentences and
paragraphs to extensive scientific research articles. Our chemist AI agent,
Eunomia, can plan and execute actions by leveraging the existing knowledge from
decades of scientific research articles, scientists, the Internet and other
tools altogether. We benchmark the performance of our approach in three
different information extraction tasks with various levels of complexity,
including solid-state impurity doping, metal-organic framework (MOF) chemical
formula, and property relations. Our results demonstrate that our zero-shot
agent, with the appropriate tools, is capable of attaining performance that is
either superior or comparable to the state-of-the-art fine-tuned materials
information extraction methods. This approach simplifies the compilation of
machine learning-ready datasets for various materials discovery applications,
and significantly eases access to advanced natural language processing tools
for novice users. The methodology in this work is developed as open-source
software, available at https://github.com/AI4ChemS/Eunomia.
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 20:29:58 GMT"
}
] | 1,703,030,400,000 | [
[
"Ansari",
"Mehrad",
""
],
[
"Moosavi",
"Seyed Mohamad",
""
]
] |
2312.11753 | Juho Kim | Juho Kim | Recording and Describing Poker Hands | 8 pages, accepted to 2024 IEEE Conference on Games | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces the Poker Hand History (PHH) file format, designed to
standardize the recording of poker hands across different game variants.
Despite poker's widespread popularity in mainstream culture as a mind sport and
its prominence in the field of artificial intelligence (AI) research as a
benchmark for imperfect-information AI agents, it lacks a consistent format
that humans can use to document poker hands across different variants and that
machines can easily parse. To address this gap in the literature, we propose
the PHH format, which provides a concise, human-readable, machine-friendly
representation of hand history that comprehensively captures various details of
the hand, ranging from initial game parameters and actions to contextual
parameters including but not limited to the venue, players, and time control
information. In the supplementary, we provide 10,088 hands covering 11
different variants in the PHH format. The full specification is available on
https://github.com/uoftcprg/phh-std
| [
{
"version": "v1",
"created": "Mon, 18 Dec 2023 23:39:01 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jan 2024 06:49:19 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Apr 2024 08:06:03 GMT"
},
{
"version": "v4",
"created": "Fri, 10 May 2024 20:22:28 GMT"
}
] | 1,715,644,800,000 | [
[
"Kim",
"Juho",
""
]
] |
2312.11761 | Jay Mahajan | Jay Mahajan, Samuel Hum, Jack Henhapl, Diya Yunus, Matthew Gadbury,
Emi Brown, Jeff Ginger, H. Chad Lane | MineObserver 2.0: A Deep Learning & In-Game Framework for Assessing
Natural Language Descriptions of Minecraft Imagery | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | MineObserver 2.0 is an AI framework that uses Computer Vision and Natural
Language Processing for assessing the accuracy of learner-generated
descriptions of Minecraft images that include some scientifically relevant
content. The system automatically assesses the accuracy of participant
observations, written in natural language, made during science learning
activities that take place in Minecraft. We demonstrate our system working in
real-time and describe a teacher support dashboard to showcase observations,
both of which advance our previous work. We present the results of a study
showing that MineObserver 2.0 improves over its predecessor both in perceived
accuracy of the system's generated descriptions as well as in usefulness of the
system's feedback. In future work we intend to improve system-generated
descriptions, give teachers more control, and upgrade the system to perform
continuous learning to more effectively and rapidly respond to novel
observations made by learners.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 00:15:35 GMT"
}
] | 1,703,030,400,000 | [
[
"Mahajan",
"Jay",
""
],
[
"Hum",
"Samuel",
""
],
[
"Henhapl",
"Jack",
""
],
[
"Yunus",
"Diya",
""
],
[
"Gadbury",
"Matthew",
""
],
[
"Brown",
"Emi",
""
],
[
"Ginger",
"Jeff",
""
],
[
"Lane",
"H. Chad",
""
]
] |
2312.11865 | Weiyu Ma | Weiyu Ma, Qirui Mi, Xue Yan, Yuqiao Wu, Runji Lin, Haifeng Zhang, Jun
Wang | Large Language Models Play StarCraft II: Benchmarks and A Chain of
Summarization Approach | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | StarCraft II is a challenging benchmark for AI agents due to the necessity of
both precise micro-level operations and strategic macro awareness. Previous
works, such as AlphaStar and SCC, achieve impressive performance in tackling
StarCraft II; however, they still exhibit deficiencies in long-term strategic
planning and strategy interpretability. Emerging large language model (LLM)
agents, such as Voyager and MetaGPT, present immense potential for solving
intricate tasks. Motivated by this, we aim to validate the capabilities of LLMs
on StarCraft II, a highly complex RTS game. To conveniently take full advantage
of LLMs' reasoning abilities, we first develop a textual StarCraft II
environment, called TextStarCraft II, with which LLM agents can interact.
Secondly, we propose a Chain of Summarization method, including single-frame
summarization for processing raw observations and multi-frame summarization for
analyzing game information, providing command recommendations, and generating
strategic decisions. Our experiment consists of two parts: first, an evaluation
by human experts, which includes assessing the LLMs' mastery of StarCraft II
knowledge and the performance of LLM agents in the game; second, the in-game
performance of LLM agents, encompassing aspects like win rate and the impact of
Chain of Summarization. Experimental results demonstrate that: 1. LLMs possess
the relevant knowledge and complex planning abilities needed to address
StarCraft II scenarios; 2. Human experts consider the performance of LLM agents
to be close to that of an average player who has played StarCraft II for eight
years; 3. LLM agents are capable of defeating the built-in AI at the Harder
(Lv5) difficulty level. We have open-sourced the code and released demo videos
of the LLM agent playing StarCraft II.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 05:27:16 GMT"
}
] | 1,703,030,400,000 | [
[
"Ma",
"Weiyu",
""
],
[
"Mi",
"Qirui",
""
],
[
"Yan",
"Xue",
""
],
[
"Wu",
"Yuqiao",
""
],
[
"Lin",
"Runji",
""
],
[
"Zhang",
"Haifeng",
""
],
[
"Wang",
"Jun",
""
]
] |
2312.11935 | Yuyang Xia | Yuyang Xia, Shuncheng Liu, Quanlin Yu, Liwei Deng, You Zhang, Han Su
and Kai Zheng | Parameterized Decision-making with Multi-modal Perception for Autonomous
Driving | IEEE International Conference on Data Engineering (ICDE2024) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving is an emerging technology that has advanced rapidly over
the last decade. Modern transportation is expected to benefit greatly from a
wise decision-making framework of autonomous vehicles, including the
improvement of mobility and the minimization of risks and travel time. However,
existing methods either ignore the complexity of environments, fitting only
straight roads, or ignore the impact on surrounding vehicles during
optimization phases, leading to weak environmental adaptability and incomplete
optimization objectives. To address these limitations, we propose a
parameterized decision-making framework with multi-modal perception based on
deep reinforcement learning, called AUTO. We conduct a comprehensive perception
to capture the state features of various traffic participants around the
autonomous vehicle, based on which we design a graph-based model to learn a
state representation of the multi-modal semantic features. To distinguish
between lane-following and lane-changing, we decompose an action of the
autonomous vehicle into a parameterized action structure that first decides
whether to change lanes and then computes an exact action to execute. A hybrid
reward function takes into account aspects of safety, traffic efficiency,
passenger comfort, and impact to guide the framework to generate optimal
actions. In addition, we design a regularization term and a multi-worker
paradigm to enhance the training. Extensive experiments offer evidence that
AUTO can advance state-of-the-art in terms of both macroscopic and microscopic
effectiveness.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 08:27:02 GMT"
}
] | 1,703,030,400,000 | [
[
"Xia",
"Yuyang",
""
],
[
"Liu",
"Shuncheng",
""
],
[
"Yu",
"Quanlin",
""
],
[
"Deng",
"Liwei",
""
],
[
"Zhang",
"You",
""
],
[
"Su",
"Han",
""
],
[
"Zheng",
"Kai",
""
]
] |
2312.11955 | Nan Jiang | Nan Jiang, Md Nasim, Yexiang Xue | Vertical Symbolic Regression | arXiv admin note: text overlap with arXiv:2306.08057 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automating scientific discovery has been a grand goal of Artificial
Intelligence (AI) and will bring tremendous societal impact. Learning symbolic
expressions from experimental data is a vital step in AI-driven scientific
discovery. Despite exciting progress, most endeavors have focused on the
horizontal discovery paths, i.e., they directly search for the best expression
in the full hypothesis space involving all the independent variables.
Horizontal paths are challenging due to the exponentially large hypothesis
space involving all the independent variables. We propose Vertical Symbolic
Regression (VSR) to expedite symbolic regression. The VSR starts by fitting
simple expressions involving a few independent variables under controlled
experiments where the remaining variables are held constant. It then extends
the expressions learned in previous rounds by adding new independent variables
and using new control variable experiments allowing these variables to vary.
The first few steps in vertical discovery are significantly cheaper than the
horizontal path, as their search is in reduced hypothesis spaces involving a
small set of variables. As a consequence, vertical discovery has the potential
to supercharge state-of-the-art symbolic regression approaches in handling
complex equations with many contributing factors. Theoretically, we show that
the search space of VSR can be exponentially smaller than that of horizontal
approaches when learning a class of expressions. Experimentally, VSR
outperforms several baselines in learning symbolic expressions involving many
independent variables.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 08:55:47 GMT"
}
] | 1,703,030,400,000 | [
[
"Jiang",
"Nan",
""
],
[
"Nasim",
"Md",
""
],
[
"Xue",
"Yexiang",
""
]
] |
2312.12010 | Krishna Balajirao Manoorkar | Marcel Boersma, Krishna Manoorkar, Alessandra Palmigiano, Mattia
Panettiere, Apostolos Tzimoulis, Nachoem Wijnberg | Outlier detection using flexible categorisation and interrogative
agendas | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Categorization is one of the basic tasks in machine learning and data
analysis. Building on formal concept analysis (FCA), the starting point of the
present work is that different ways to categorize a given set of objects exist,
which depend on the choice of the sets of features used to classify them, and
different such sets of features may yield better or worse categorizations,
relative to the task at hand. In their turn, the (a priori) choice of a
particular set of features over another might be subjective and express a
certain epistemic stance (e.g. interests, relevance, preferences) of an agent
or a group of agents, namely, their interrogative agenda. In the present paper,
we represent interrogative agendas as sets of features, and explore and compare
different ways to categorize objects w.r.t. different sets of features
(agendas). We first develop a simple unsupervised FCA-based algorithm for
outlier detection which uses categorizations arising from different agendas. We
then present a supervised meta-learning algorithm to learn suitable (fuzzy)
agendas for categorization as sets of features with different weights or
masses. We combine this meta-learning algorithm with the unsupervised outlier
detection algorithm to obtain a supervised outlier detection algorithm. We show
that these algorithms perform on par with commonly used algorithms for outlier
detection on commonly used datasets in outlier detection. These algorithms
provide both local and global explanations of their results.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 10:05:09 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2023 10:51:52 GMT"
}
] | 1,703,116,800,000 | [
[
"Boersma",
"Marcel",
""
],
[
"Manoorkar",
"Krishna",
""
],
[
"Palmigiano",
"Alessandra",
""
],
[
"Panettiere",
"Mattia",
""
],
[
"Tzimoulis",
"Apostolos",
""
],
[
"Wijnberg",
"Nachoem",
""
]
] |
2312.12119 | Susanne Hindennach | Susanne Hindennach, Lei Shi, Filip Mileti\'c and Andreas Bulling | Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI
Research | 21 pages, 6 figures, to be published in PACM HCI (CSCW '24) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When users perceive AI systems as mindful, independent agents, they hold them
responsible instead of the AI experts who created and designed these systems.
So far, it has not been studied whether explanations support this shift in
responsibility through the use of mind-attributing verbs like "to think". To
better understand the prevalence of mind-attributing explanations we analyse AI
explanations in 3,533 explainable AI (XAI) research articles from the Semantic
Scholar Open Research Corpus (S2ORC). Using methods from semantic shift
detection, we identify three dominant types of mind attribution: (1)
metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to
consider"), and (3) agency (e.g. "to make decisions"). We then analyse the
impact of mind-attributing explanations on awareness and responsibility in a
vignette-based experiment with 199 participants. We find that participants who
were given a mind-attributing explanation were more likely to rate the AI
system as aware of the harm it caused. Moreover, the mind-attributing
explanation had a responsibility-concealing effect: Considering the AI experts'
involvement led to reduced ratings of AI responsibility for participants who
were given a non-mind-attributing or no explanation. In contrast, participants
who read the mind-attributing explanation still held the AI system responsible
despite considering the AI experts' involvement. Taken together, our work
underlines the need to carefully phrase explanations about AI systems in
scientific writing to reduce mind attribution and clearly communicate human
responsibility.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 12:49:32 GMT"
}
] | 1,703,030,400,000 | [
[
"Hindennach",
"Susanne",
""
],
[
"Shi",
"Lei",
""
],
[
"Miletić",
"Filip",
""
],
[
"Bulling",
"Andreas",
""
]
] |
2312.12290 | Muhammad Suffian | Muhammad Suffian, Ulrike Kuhl, Jose M. Alonso-Moral, Alessandro
Bogliolo | Toward enriched Cognitive Learning with XAI | 10 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As computational systems supported by artificial intelligence (AI) techniques
continue to play an increasingly pivotal role in making high-stakes
recommendations and decisions across various domains, the demand for
explainable AI (XAI) has grown significantly, extending its impact into
cognitive learning research. Providing explanations for novel concepts is
recognised as a fundamental aid in the learning process, particularly when
addressing challenges stemming from knowledge deficiencies and skill
application. Addressing these difficulties involves timely explanations and
guidance throughout the learning process, prompting the interest of AI experts
in developing explainer models. In this paper, we introduce an intelligent
system (CL-XAI) for Cognitive Learning which is supported by XAI, focusing on
two key research objectives: exploring how human learners comprehend the
internal mechanisms of AI models using XAI tools and evaluating the
effectiveness of such tools through human feedback. The use of CL-XAI is
illustrated with a game-inspired virtual use case where learners tackle
combinatorial problems to enhance problem-solving skills and deepen their
understanding of complex concepts, highlighting the potential for
transformative advances in cognitive learning and co-learning.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 16:13:47 GMT"
}
] | 1,703,030,400,000 | [
[
"Suffian",
"Muhammad",
""
],
[
"Kuhl",
"Ulrike",
""
],
[
"Alonso-Moral",
"Jose M.",
""
],
[
"Bogliolo",
"Alessandro",
""
]
] |
2312.12341 | Suwei Yang | Suwei Yang and Kuldeep S. Meel | Engineering an Exact Pseudo-Boolean Model Counter | 13 pages, 8 figures. To appear in AAAI24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model counting, a fundamental task in computer science, involves determining
the number of satisfying assignments to a Boolean formula, typically
represented in conjunctive normal form (CNF). While model counting for CNF
formulas has received extensive attention with a broad range of applications,
the study of model counting for Pseudo-Boolean (PB) formulas has been
relatively overlooked. Pseudo-Boolean formulas, being more succinct than
propositional Boolean formulas, offer greater flexibility in representing
real-world problems. Consequently, there is a crucial need to investigate
efficient techniques for model counting for PB formulas.
In this work, we propose the first exact Pseudo-Boolean model counter,
PBCount, which relies on a knowledge compilation approach via algebraic decision
diagrams. Our extensive empirical evaluation shows that PBCount can compute
counts for 1513 instances while the current state-of-the-art approach could
only handle 1013 instances. Our work opens up several avenues for future work
in the context of model counting for PB formulas, such as the development of
preprocessing techniques and exploration of approaches other than knowledge
compilation.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 17:14:06 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Feb 2024 01:49:00 GMT"
}
] | 1,708,387,200,000 | [
[
"Yang",
"Suwei",
""
],
[
"Meel",
"Kuldeep S.",
""
]
] |
2312.12554 | Carlos Linares L\'opez | Sofia Lemons, Wheeler Ruml, Robert C. Holte, Carlos Linares L\'opez | Rectangle Search: An Anytime Beam Search (Extended Version) | 30 pages, 200+ figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Anytime heuristic search algorithms try to find a (potentially suboptimal)
solution as quickly as possible and then work to find better and better
solutions until an optimal solution is obtained or time is exhausted. The most
widely-known anytime search algorithms are based on best-first search. In this
paper, we propose a new algorithm, rectangle search, that is instead based on
beam search, a variant of breadth-first search. It repeatedly explores
alternatives at all depth levels and is thus best-suited to problems featuring
deep local minima. Experiments using a variety of popular search benchmarks
suggest that rectangle search is competitive with fixed-width beam search and
often performs better than the previous best anytime search algorithms.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 19:50:45 GMT"
}
] | 1,703,116,800,000 | [
[
"Lemons",
"Sofia",
""
],
[
"Ruml",
"Wheeler",
""
],
[
"Holte",
"Robert C.",
""
],
[
"López",
"Carlos Linares",
""
]
] |
2312.12568 | Akbir M Khan Mr | Akbir Khan and Timon Willi and Newton Kwan and Andrea Tacchetti and
Chris Lu and Edward Grefenstette and Tim Rockt\"aschel and Jakob Foerster | Scaling Opponent Shaping to High Dimensional Games | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In multi-agent settings with mixed incentives, methods developed for zero-sum
games have been shown to lead to detrimental outcomes. To address this issue,
opponent shaping (OS) methods explicitly learn to influence the learning
dynamics of co-players and empirically lead to improved individual and
collective outcomes. However, OS methods have only been evaluated in
low-dimensional environments due to the challenges associated with estimating
higher-order derivatives or scaling model-free meta-learning. Alternative
methods that scale to more complex settings either converge to undesirable
solutions or rely on unrealistic assumptions about the environment or
co-players. In this paper, we successfully scale an OS-based approach to
general-sum games with temporally-extended actions and long-time horizons for
the first time. After analysing the representations of the meta-state and
history used by previous algorithms, we propose a simplified version called
Shaper. We show empirically that Shaper leads to improved individual and
collective outcomes in a range of challenging settings from literature. We
further formalize a technique previously implicit in the literature, and
analyse its contribution to opponent shaping. We show empirically that this
technique is helpful for the functioning of prior methods in certain
environments. Lastly, we show that previous environments, such as the CoinGame,
are inadequate for analysing temporally-extended general-sum interactions.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 20:05:23 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Feb 2024 10:00:20 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Feb 2024 21:52:17 GMT"
}
] | 1,707,782,400,000 | [
[
"Khan",
"Akbir",
""
],
[
"Willi",
"Timon",
""
],
[
"Kwan",
"Newton",
""
],
[
"Tacchetti",
"Andrea",
""
],
[
"Lu",
"Chris",
""
],
[
"Grefenstette",
"Edward",
""
],
[
"Rocktäschel",
"Tim",
""
],
[
"Foerster",
"Jakob",
""
]
] |
2312.12891 | Steven James | William Hill, Ireton Liu, Anita De Mello Koch, Damion Harvey, Nishanth
Kumar, George Konidaris, Steven James | MinePlanner: A Benchmark for Long-Horizon Planning in Large Minecraft
Worlds | Accepted to the 6th ICAPS Workshop on the International Planning
Competition (WIPC 2024) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new benchmark for planning tasks based on the Minecraft game.
Our benchmark contains 45 tasks overall, but also provides support for creating
both propositional and numeric instances of new Minecraft tasks automatically.
We benchmark numeric and propositional planning systems on these tasks, with
results demonstrating that state-of-the-art planners are currently incapable of
dealing with many of the challenges advanced by our new benchmark, such as
scaling to instances with thousands of objects. Based on these results, we
identify areas of improvement for future planners. Our framework is made
available at https://github.com/IretonLiu/mine-pddl/.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2023 10:04:39 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Apr 2024 11:22:36 GMT"
}
] | 1,714,435,200,000 | [
[
"Hill",
"William",
""
],
[
"Liu",
"Ireton",
""
],
[
"Koch",
"Anita De Mello",
""
],
[
"Harvey",
"Damion",
""
],
[
"Kumar",
"Nishanth",
""
],
[
"Konidaris",
"George",
""
],
[
"James",
"Steven",
""
]
] |
2312.13487 | Katarina Doctor Z | Katarina Doctor, Mayank Kejriwal, Lawrence Holder, Eric Kildebeck,
Emma Resmini, Christopher Pereyda, Robert J. Steininger, Daniel V.
Oliven\c{c}a | Understanding and Estimating Domain Complexity Across Domains | 34 pages, 13 figures, 7 tables. arXiv admin note: substantial text
overlap with arXiv:2303.04141 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) systems, trained in controlled environments,
often struggle in real-world complexities. We propose a general framework for
estimating domain complexity across diverse environments, like open-world
learning and real-world applications. This framework distinguishes between
intrinsic complexity (inherent to the domain) and extrinsic complexity
(dependent on the AI agent). By analyzing dimensionality, sparsity, and
diversity within these categories, we offer a comprehensive view of domain
challenges. This approach enables quantitative predictions of AI difficulty
during environment transitions, avoids bias in novel situations, and helps
navigate the vast search spaces of open-world domains.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2023 23:47:17 GMT"
}
] | 1,703,203,200,000 | [
[
"Doctor",
"Katarina",
""
],
[
"Kejriwal",
"Mayank",
""
],
[
"Holder",
"Lawrence",
""
],
[
"Kildebeck",
"Eric",
""
],
[
"Resmini",
"Emma",
""
],
[
"Pereyda",
"Christopher",
""
],
[
"Steininger",
"Robert J.",
""
],
[
"Olivença",
"Daniel V.",
""
]
] |
2312.13680 | Jiaxin Pan | Jiaxin Pan, Mojtaba Nayyeri, Yinan Li, Steffen Staab | HGE: Embedding Temporal Knowledge Graphs in a Product Space of
Heterogeneous Geometric Subspaces | The 38th Annual AAAI Conference on Artificial Intelligence (AAAI'24) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Temporal knowledge graphs represent temporal facts $(s,p,o,\tau)$ relating a
subject $s$ and an object $o$ via a relation label $p$ at time $\tau$, where
$\tau$ could be a time point or time interval. Temporal knowledge graphs may
exhibit static temporal patterns at distinct points in time and dynamic
temporal patterns between different timestamps. In order to learn a rich set of
static and dynamic temporal patterns and apply them for inference, several
embedding approaches have been suggested in the literature. However, as most of
them resort to single underlying embedding spaces, their capability to model
all kinds of temporal patterns was severely limited by having to adhere to the
geometric property of their one embedding space. We lift this limitation by an
embedding approach that maps temporal facts into a product space of several
heterogeneous geometric subspaces with distinct geometric properties, i.e.\
Complex, Dual, and Split-complex spaces. In addition, we propose a
temporal-geometric attention mechanism to integrate information from different
geometric subspaces conveniently according to the captured relational and
temporal information. Experimental results on standard temporal benchmark
datasets favorably evaluate our approach against state-of-the-art models.
| [
{
"version": "v1",
"created": "Thu, 21 Dec 2023 09:04:30 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Dec 2023 09:20:07 GMT"
}
] | 1,703,635,200,000 | [
[
"Pan",
"Jiaxin",
""
],
[
"Nayyeri",
"Mojtaba",
""
],
[
"Li",
"Yinan",
""
],
[
"Staab",
"Steffen",
""
]
] |
2312.13682 | Guillaume Perez | Guillaume Perez, Gael Glorian, Wijnand Suijlen, Arnaud Lallouet | A Constraint Programming Model for Scheduling the Unloading of Trains in
Ports: Extended | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a model to schedule the next 24 hours of operations
in a bulk cargo port to unload bulk cargo trains onto stockpiles. It is a
problem that includes multiple parts such as splitting long trains into shorter
ones and the routing of bulk material through a configurable network of
conveyors to the stockpiles. Managing such trains (up to three kilometers long)
also requires specialized equipment. The real world nature of the problem
specification implies the necessity to manage heterogeneous data. Indeed, when
new equipment is added (e.g. dumpers) or a new type of wagon comes in use,
older or different equipment will still be in use as well. All these details
need to be accounted for; in fact, they are essential for avoiding a full
deadlock of the facility after a new but ineffective schedule is produced. In
this paper, we provide a
detailed presentation of this real world problem and its associated data. This
allows us to propose an effective constraint programming model to solve this
problem. We also discuss the model design and the different implementations of
the propagators that we used in practice. Finally, we show how this model,
coupled with a large neighborhood search, was able to find 24 hour schedules
efficiently.
| [
{
"version": "v1",
"created": "Thu, 21 Dec 2023 09:11:03 GMT"
}
] | 1,703,203,200,000 | [
[
"Perez",
"Guillaume",
""
],
[
"Glorian",
"Gael",
""
],
[
"Suijlen",
"Wijnand",
""
],
[
"Lallouet",
"Arnaud",
""
]
] |
2312.13912 | Ehsan Kafshdar Goharshady | Krishnendu Chatterjee, Ehsan Kafshdar Goharshady, Mehrdad Karrabi,
Petr Novotn\'y, {\DJ}or{\dj}e \v{Z}ikeli\'c | Solving Long-run Average Reward Robust MDPs via Stochastic Games | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Markov decision processes (MDPs) provide a standard framework for sequential
decision making under uncertainty. However, MDPs do not take uncertainty in
transition probabilities into account. Robust Markov decision processes (RMDPs)
address this shortcoming of MDPs by assigning to each transition an uncertainty
set rather than a single probability value. In this work, we consider polytopic
RMDPs in which all uncertainty sets are polytopes and study the problem of
solving long-run average reward polytopic RMDPs. We present a novel perspective
on this problem and show that it can be reduced to solving long-run average
reward turn-based stochastic games with finite state and action spaces. This
reduction allows us to derive several important consequences that were hitherto
not known to hold for polytopic RMDPs. First, we derive new computational
complexity bounds for solving long-run average reward polytopic RMDPs, showing
for the first time that the threshold decision problem for them is in $NP \cap
coNP$ and that they admit a randomized algorithm with sub-exponential expected
runtime. Second, we present Robust Polytopic Policy Iteration (RPPI), a novel
policy iteration algorithm for solving long-run average reward polytopic RMDPs.
Our experimental evaluation shows that RPPI is much more efficient in solving
long-run average reward polytopic RMDPs compared to state-of-the-art methods
based on value iteration.
| [
{
"version": "v1",
"created": "Thu, 21 Dec 2023 15:00:06 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Apr 2024 17:05:38 GMT"
}
] | 1,714,521,600,000 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Goharshady",
"Ehsan Kafshdar",
""
],
[
"Karrabi",
"Mehrdad",
""
],
[
"Novotný",
"Petr",
""
],
[
"Žikelić",
"Đorđe",
""
]
] |
2312.14121 | Jakub Kowalski | Micha{\l} Maras, Micha{\l} K\k{e}pa, Jakub Kowalski, Marek Szyku{\l}a | Fast and Knowledge-Free Deep Learning for General Game Playing (Student
Abstract) | AAAI-24 Student Abstracts | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We develop a method of adapting the AlphaZero model to General Game Playing
(GGP) that focuses on faster model generation and requires less knowledge to be
extracted from the game rules. The dataset generation uses MCTS playing instead
of self-play; only the value network is used, and attention layers replace the
convolutional ones. This allows us to abandon any assumptions about the action
space and board topology. We implement the method within the Regular Boardgames
GGP system and show that we can build models outperforming the UCT baseline for
most games efficiently.
| [
{
"version": "v1",
"created": "Thu, 21 Dec 2023 18:44:19 GMT"
}
] | 1,703,203,200,000 | [
[
"Maras",
"Michał",
""
],
[
"Kępa",
"Michał",
""
],
[
"Kowalski",
"Jakub",
""
],
[
"Szykuła",
"Marek",
""
]
] |
2312.14394 | Tangwen Qian | Tangwen Qian, Yile Chen, Gao Cong, Yongjun Xu, Fei Wang | AdapTraj: A Multi-Source Domain Generalization Framework for Multi-Agent
Trajectory Prediction | Accepted by ICDE 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent trajectory prediction, as a critical task in modeling complex
interactions of objects in dynamic systems, has attracted significant research
attention in recent years. Despite the promising advances, existing studies all
follow the assumption that the data distribution observed during model learning
matches that encountered in real-world deployments. However, this assumption
often does not hold in practice, as inherent distribution shifts might exist in
the mobility patterns for deployment environments, thus leading to poor domain
generalization and performance degradation. Consequently, it is appealing to
leverage trajectories from multiple source domains to mitigate such
discrepancies for multi-agent trajectory prediction task. However, the
development of multi-source domain generalization in this task presents two
notable issues: (1) negative transfer; (2) inadequate modeling for external
factors. To address these issues, we propose a new causal formulation to
explicitly model four types of features: domain-invariant and domain-specific
features for both the focal agent and neighboring agents. Building upon the new
formulation, we propose AdapTraj, a multi-source domain generalization
framework specifically tailored for multi-agent trajectory prediction. AdapTraj
serves as a plug-and-play module that is adaptable to a variety of models.
Extensive experiments on four datasets with different domains demonstrate that
AdapTraj consistently outperforms other baselines by a substantial margin.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 02:49:56 GMT"
}
] | 1,703,462,400,000 | [
[
"Qian",
"Tangwen",
""
],
[
"Chen",
"Yile",
""
],
[
"Cong",
"Gao",
""
],
[
"Xu",
"Yongjun",
""
],
[
"Wang",
"Fei",
""
]
] |
2312.14421 | Mohamed-Hamza Ibrahim | Ayao Bobi, Rokia Missaoui and Mohamed Hamza Ibrahim | Enhancing Actionable Formal Concept Identification with Base-Equivalent
Conceptual-Relevance | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In knowledge discovery applications, the pattern set generated from data can
be tremendously large and hard to explore by analysts. In the Formal Concept
Analysis (FCA) framework, there have been studies to identify important formal
concepts through the stability index and other quality measures. In this paper,
we introduce the Base-Equivalent Conceptual Relevance (BECR) score, a novel
conceptual relevance interestingness measure for improving the identification
of actionable concepts. From a conceptual perspective, the base and equivalent
attributes are considered meaningful information and are highly essential to
maintain the conceptual structure of concepts. Thus, the basic idea of BECR is
that the more base and equivalent attributes and minimal generators a concept
intent has, the more relevant it is. As such, BECR quantifies these attributes
and minimal generators per concept intent. Our preliminary experiments on
synthetic and real-world datasets show the efficiency of BECR compared to the
well-known stability index.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 03:57:40 GMT"
}
] | 1,703,462,400,000 | [
[
"Bobi",
"Ayao",
""
],
[
"Missaoui",
"Rokia",
""
],
[
"Ibrahim",
"Mohamed Hamza",
""
]
] |
2312.14472 | Jinmin He | Jinmin He, Kai Li, Yifan Zang, Haobo Fu, Qiang Fu, Junliang Xing, Jian
Cheng | Not All Tasks Are Equally Difficult: Multi-Task Deep Reinforcement
Learning with Dynamic Depth Routing | AAAI2024, with supplementary material | 38th AAAI Conference on Artificial Intelligence (AAAI2024),
Vancouver, BC, Canada, 2024 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-task reinforcement learning endeavors to accomplish a set of different
tasks with a single policy. To enhance data efficiency by sharing parameters
across multiple tasks, a common practice segments the network into distinct
modules and trains a routing network to recombine these modules into
task-specific policies. However, existing routing approaches employ a fixed
number of modules for all tasks, neglecting that tasks with varying
difficulties commonly require varying amounts of knowledge. This work presents
a Dynamic Depth Routing (D2R) framework, which learns strategic skipping of
certain intermediate modules, thereby flexibly choosing different numbers of
modules for each task. Under this framework, we further introduce a ResRouting
method to address the issue of disparate routing paths between behavior and
target policies during off-policy training. In addition, we design an automatic
route-balancing mechanism to encourage continued routing exploration for
unmastered tasks without disturbing the routing of mastered ones. We conduct
extensive experiments on various robotics manipulation tasks in the Meta-World
benchmark, where D2R achieves state-of-the-art performance with significantly
improved learning efficiency.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 06:51:30 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Jan 2024 14:35:05 GMT"
}
] | 1,706,227,200,000 | [
[
"He",
"Jinmin",
""
],
[
"Li",
"Kai",
""
],
[
"Zang",
"Yifan",
""
],
[
"Fu",
"Haobo",
""
],
[
"Fu",
"Qiang",
""
],
[
"Xing",
"Junliang",
""
],
[
"Cheng",
"Jian",
""
]
] |
2312.14670 | Alessandro Antonucci | Alessandro Antonucci, Gregorio Piqu\'e, Marco Zaffalon | Zero-shot Causal Graph Extrapolation from Text via LLMs | XAI4Sci Workshop @ AAAI24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We evaluate the ability of large language models (LLMs) to infer causal
relations from natural language. Compared to traditional natural language
processing and deep learning techniques, LLMs show competitive performance in a
benchmark of pairwise relations without needing (explicit) training samples.
This motivates us to extend our approach to extrapolating causal graphs through
iterated pairwise queries. We perform a preliminary analysis on a benchmark of
biomedical abstracts with ground-truth causal graphs validated by experts. The
results are promising and support the adoption of LLMs for such a crucial step
in causal inference, especially in medical domains, where the amount of
scientific text to analyse might be huge, and the causal statements are often
implicit.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 13:14:38 GMT"
}
] | 1,703,462,400,000 | [
[
"Antonucci",
"Alessandro",
""
],
[
"Piqué",
"Gregorio",
""
],
[
"Zaffalon",
"Marco",
""
]
] |
2312.14824 | Daniel Koutas | Daniel Koutas, Elizabeth Bismut, Daniel Straub | An investigation of belief-free DRL and MCTS for inspection and
maintenance planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel Deep Reinforcement Learning (DRL) architecture for
sequential decision processes under uncertainty, as encountered in inspection
and maintenance (I&M) planning. Unlike other DRL algorithms for I&M planning,
the proposed +RQN architecture dispenses with computing the belief state and
directly handles erroneous observations instead. We apply the algorithm to a
basic I&M planning problem for a one-component system subject to deterioration.
In addition, we investigate the performance of Monte Carlo tree search for the
I&M problem and compare it to the +RQN. The comparison includes a statistical
analysis of the two methods' resulting policies, as well as their visualization
in the belief space.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 16:53:02 GMT"
}
] | 1,703,462,400,000 | [
[
"Koutas",
"Daniel",
""
],
[
"Bismut",
"Elizabeth",
""
],
[
"Straub",
"Daniel",
""
]
] |
2312.14852 | Bo-Wen Zhang | Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu,
Guang Liu, Zhi Jin, Ge Li | TACO: Topics in Algorithmic COde generation dataset | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce TACO, an open-source, large-scale code generation dataset, with
a focus on the optics of algorithms, designed to provide a more challenging
training dataset and evaluation benchmark in the field of code generation
models. TACO includes competition-level programming questions that are more
challenging, to enhance or evaluate problem understanding and reasoning
abilities in real-world programming scenarios. There are 25433 and 1000 coding
problems in the training and test sets, as well as up to 1.55 million diverse
solution answers. Moreover, each TACO problem includes several fine-grained
labels such as task topics, algorithms, programming skills, and difficulty
levels, providing a more precise reference for the training and evaluation of
code generation models. The dataset and evaluation scripts are available on
Hugging Face Hub (https://huggingface.co/datasets/BAAI/TACO) and Github
(https://github.com/FlagOpen/TACO).
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 17:25:42 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Dec 2023 13:32:25 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Dec 2023 10:09:18 GMT"
}
] | 1,703,808,000,000 | [
[
"Li",
"Rongao",
""
],
[
"Fu",
"Jie",
""
],
[
"Zhang",
"Bo-Wen",
""
],
[
"Huang",
"Tao",
""
],
[
"Sun",
"Zhihong",
""
],
[
"Lyu",
"Chen",
""
],
[
"Liu",
"Guang",
""
],
[
"Jin",
"Zhi",
""
],
[
"Li",
"Ge",
""
]
] |
2312.15163 | Elizabeth Ondula | Elizabeth Akinyi Ondula, Bhaskar Krishnamachari | Reinforcement Learning for Safe Occupancy Strategies in Educational
Spaces during an Epidemic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epidemic modeling, encompassing deterministic and stochastic approaches, is
vital for understanding infectious diseases and informing public health
strategies. This research adopts a prescriptive approach, focusing on
reinforcement learning (RL) to develop strategies that balance minimizing
infections with maximizing in-person interactions in educational settings. We
introduce SafeCampus, a novel tool that simulates infection spread and
facilitates the exploration of various RL algorithms in response to epidemic
challenges. SafeCampus incorporates a custom RL environment, informed by
stochastic epidemic models, to realistically represent university campus
dynamics during epidemics. We evaluate Q-learning for a discretized state space
which resulted in a policy matrix that not only guides occupancy decisions
under varying epidemic conditions but also illustrates the inherent trade-off
in epidemic management. This trade-off is characterized by the dilemma between
stricter measures, which may effectively reduce infections but impose less
educational benefit (more in-person interactions), and more lenient policies,
which could lead to higher infection rates.
| [
{
"version": "v1",
"created": "Sat, 23 Dec 2023 04:51:23 GMT"
}
] | 1,703,635,200,000 | [
[
"Ondula",
"Elizabeth Akinyi",
""
],
[
"Krishnamachari",
"Bhaskar",
""
]
] |
2312.15692 | Jiuding Yang | Weidong Guo, Jiuding Yang, Kaitong Yang, Xiangyang Li, Zhuwei Rao, Yu
Xu, Di Niu | Instruction Fusion: Advancing Prompt Evolution through Hybridization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fine-tuning of Large Language Models (LLMs) specialized in code
generation has seen notable advancements through the use of open-domain coding
queries. Despite the successes, existing methodologies like Evol-Instruct
encounter performance limitations, impeding further enhancements in code
generation tasks. This paper examines the constraints of existing prompt
evolution techniques and introduces a novel approach, Instruction Fusion (IF).
IF innovatively combines two distinct prompts through a hybridization process,
thereby enhancing the evolution of training prompts for code LLMs. Our
experimental results reveal that the proposed novel method effectively
addresses the shortcomings of prior methods, significantly improving the
performance of Code LLMs across five code generation benchmarks, namely
HumanEval, HumanEval+, MBPP, MBPP+ and MultiPL-E, which underscore the
effectiveness of Instruction Fusion in advancing the capabilities of LLMs in
code generation.
| [
{
"version": "v1",
"created": "Mon, 25 Dec 2023 11:00:37 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Dec 2023 10:18:43 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Feb 2024 08:14:57 GMT"
}
] | 1,707,350,400,000 | [
[
"Guo",
"Weidong",
""
],
[
"Yang",
"Jiuding",
""
],
[
"Yang",
"Kaitong",
""
],
[
"Li",
"Xiangyang",
""
],
[
"Rao",
"Zhuwei",
""
],
[
"Xu",
"Yu",
""
],
[
"Niu",
"Di",
""
]
] |
2312.15864 | Yingkai Xiao | Yingkai Xiao, Jingjin Liu, Hankz Hankui Zhuo | BalMCTS: Balancing Objective Function and Search Nodes in MCTS for
Constraint Optimization Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint Optimization Problems (COP) pose intricate combinatorial challenges that are usually addressed through Branch and Bound (B\&B)
methods, which involve maintaining priority queues and iteratively selecting
branches to search for solutions. However, conventional approaches take a
considerable amount of time to find optimal solutions, and it is also crucial
to quickly identify a near-optimal feasible solution in a shorter time. In this
paper, we aim to investigate the effectiveness of employing a depth-first
search algorithm for solving COP, specifically focusing on identifying optimal
or near-optimal solutions within top $n$ solutions. Hence, we propose a novel
heuristic neural network algorithm based on MCTS, which, by simultaneously
conducting search and training, enables the neural network to effectively serve
as a heuristic during backtracking. Furthermore, our approach encodes COP instances and uses graph neural networks to aggregate information about variables and constraints, suggesting more suitable variables for assignment. Experimental results on stochastic COP instances
demonstrate that our method identifies feasible solutions with a gap of less
than 17.63% within the initial 5 feasible solutions. Moreover, when applied to
attendant Constraint Satisfaction Problem (CSP) instances, our method exhibits
a remarkable reduction of less than 5% in searching nodes compared to
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Tue, 26 Dec 2023 03:09:08 GMT"
}
] | 1,703,635,200,000 | [
[
"Xiao",
"Yingkai",
""
],
[
"Liu",
"Jingjin",
""
],
[
"Zhuo",
"Hankz Hankui",
""
]
] |
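The abstract of 2312.15864 above describes a depth-first, backtracking-style search over constraint optimization problems guided by a learned heuristic that proposes value assignments and collects the top n feasible solutions. A minimal sketch of that control flow follows; the toy all-different problem, the linear objective, the top-n bookkeeping, and the greedy value-ordering heuristic (standing in for the trained MCTS/GNN policy) are all assumptions, not the paper's method.

```python
# Toy constraint optimization problem (all assumptions): assign a value from DOMAIN to each
# of four variables, subject to an all-different constraint, minimising a weighted sum.
VARS = 4
DOMAIN = range(4)
WEIGHTS = [3, 1, 4, 2]

def consistent(assignment):
    return len(set(assignment)) == len(assignment)      # all-different constraint

def objective(assignment):
    return sum(w * v for w, v in zip(WEIGHTS, assignment))

def backtrack(assignment, best, top_n=5):
    """Depth-first search that keeps the best top_n feasible solutions found so far."""
    if len(assignment) == VARS:
        best.append((objective(assignment), tuple(assignment)))
        best.sort()
        del best[top_n:]
        return
    # Value-ordering heuristic: a trained policy would rank candidate values here;
    # this greedy stand-in simply tries values with a smaller objective contribution first.
    for value in sorted(DOMAIN, key=lambda v: WEIGHTS[len(assignment)] * v):
        assignment.append(value)
        if consistent(assignment):
            backtrack(assignment, best, top_n)
        assignment.pop()

best = []
backtrack([], best)
print("top feasible solutions (objective, assignment):", best)
```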
2312.16044 | Siqi Lai | Siqi Lai, Zhao Xu, Weijia Zhang, Hao Liu and Hui Xiong | LLMLight: Large Language Models as Traffic Signal Control Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic Signal Control (TSC) is a crucial component in urban traffic
management, aiming to optimize road network efficiency and reduce congestion.
Traditional methods in TSC, primarily based on transportation engineering and
reinforcement learning (RL), often exhibit limitations in generalization across
varied traffic scenarios and lack interpretability. This paper presents
LLMLight, a novel framework employing Large Language Models (LLMs) as
decision-making agents for TSC. Specifically, the framework begins by
instructing the LLM with a knowledgeable prompt detailing real-time traffic
conditions. Leveraging the advanced generalization capabilities of LLMs,
LLMLight engages a reasoning and decision-making process akin to human
intuition for effective traffic control. Moreover, we build LightGPT, a
specialized backbone LLM tailored for TSC tasks. By learning nuanced traffic
patterns and control strategies, LightGPT enhances the LLMLight framework
cost-effectively. Extensive experiments on nine real-world and synthetic
datasets showcase the remarkable effectiveness, generalization ability, and
interpretability of LLMLight against nine transportation-based and RL-based
baselines.
| [
{
"version": "v1",
"created": "Tue, 26 Dec 2023 13:17:06 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Feb 2024 17:11:59 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Feb 2024 13:02:23 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Mar 2024 13:21:38 GMT"
}
] | 1,709,683,200,000 | [
[
"Lai",
"Siqi",
""
],
[
"Xu",
"Zhao",
""
],
[
"Zhang",
"Weijia",
""
],
[
"Liu",
"Hao",
""
],
[
"Xiong",
"Hui",
""
]
] |
2312.16127 | Liman Wang | Liman Wang, Hanyang Zhong | LLM-SAP: Large Language Model Situational Awareness Based Planning | 18 pages including appendix.
Website:https://github.com/HanyangZhong/Situational_Planning_datasets | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work pioneers evaluating emergent planning capabilities based on
situational awareness in large language models. We contribute (i) novel
benchmarks and metrics for standardized assessment; (ii) a unique dataset to
spur progress; and (iii) demonstrations that prompting and multi-agent schemes
significantly enhance planning performance in context-sensitive planning tasks.
Positioning this within a situated agent and automated planning research, we
highlight inherent reliability challenges--efficiently mapping world states to
actions without environmental guidance remains open despite simulated domain
advances. Although out-of-scope, limitations around validation methodology and
data availability indicate exciting directions, including fine-tuning on
expanded planning corpora and optimizations for triggering fast latent
planning. By conclusively demonstrating current methods' promise and
limitations via rigorous comparison, we catalyze investigating reliable
goal-directed reasoning for situated agents.
| [
{
"version": "v1",
"created": "Tue, 26 Dec 2023 17:19:09 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jan 2024 04:33:57 GMT"
},
{
"version": "v3",
"created": "Wed, 3 Jan 2024 15:13:50 GMT"
},
{
"version": "v4",
"created": "Sun, 4 Feb 2024 23:50:11 GMT"
}
] | 1,707,177,600,000 | [
[
"Wang",
"Liman",
""
],
[
"Zhong",
"Hanyang",
""
]
] |
2312.16230 | Lu Li | Lu Li and Huangxing Li | Navigating Decision Landscapes: The Impact of Principals on
Decision-Making Dynamics | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We explored decision-making dynamics in social systems, referencing the 'herd
behavior' from prior studies where individuals follow preceding choices without
understanding the underlying reasons. While previous research highlighted a
preference for the optimal choice without external influences, our study
introduced principals or external guides, adding complexity to the
decision-making process. The reliability of these principals significantly
influenced decisions. Notably, even occasional trust in an unreliable principal
could alter decision outcomes. Furthermore, when a principal's advice was
purely random, heightened trust led to more decision errors. Our findings
emphasize the need for caution when placing trust in decision-making contexts.
| [
{
"version": "v1",
"created": "Mon, 25 Dec 2023 00:24:29 GMT"
}
] | 1,703,808,000,000 | [
[
"Li",
"Lu",
""
],
[
"Li",
"Huangxing",
""
]
] |
2312.16364 | Xia Wang | Xia Wang, Anda Liang, Jonathan Sprinkle and Taylor T. Johnson | Robustness Verification for Knowledge-Based Logic of Risky Driving
Scenes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many decision-making scenarios in modern life benefit from the decision
support of artificial intelligence algorithms, which focus on a data-driven
philosophy and automated programs or systems. However, crucial decision issues
related to security, fairness, and privacy should consider more human knowledge
and principles to supervise such AI algorithms to reach more proper solutions
and to benefit society more effectively. In this work, we extract
knowledge-based logic that defines risky driving formats learned from public
transportation accident datasets, which haven't been analyzed in detail to the
best of our knowledge. More importantly, this knowledge is critical for
recognizing traffic hazards and could supervise and improve AI models in
safety-critical systems. Then we use automated verification methods to verify
the robustness of such logic. More specifically, we gather 72 accident datasets
from Data.gov and organize them by state. Further, we train Decision Tree and
XGBoost models on each state's dataset, deriving accident judgment logic.
Finally, we deploy robustness verification on these tree-based models under
multiple parameter combinations.
| [
{
"version": "v1",
"created": "Wed, 27 Dec 2023 00:13:51 GMT"
}
] | 1,703,808,000,000 | [
[
"Wang",
"Xia",
""
],
[
"Liang",
"Anda",
""
],
[
"Sprinkle",
"Jonathan",
""
],
[
"Johnson",
"Taylor T.",
""
]
] |
2312.16704 | Adnan Theerens | Adnan Theerens and Chris Cornelis | On the Granular Representation of Fuzzy Quantifier-Based Fuzzy Rough
Sets | null | Information Sciences 665 (2024) | 10.1016/j.ins.2024.120385 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Rough set theory is a well-known mathematical framework that can deal with
inconsistent data by providing lower and upper approximations of concepts. A
prominent property of these approximations is their granular representation:
that is, they can be written as unions of simple sets, called granules. The
latter can be identified with "if. . . , then. . . " rules, which form the
backbone of rough set rule induction. It has been shown previously that this
property can be maintained for various fuzzy rough set models, including those
based on ordered weighted average (OWA) operators. In this paper, we will focus
on some instances of the general class of fuzzy quantifier-based fuzzy rough
sets (FQFRS). In these models, the lower and upper approximations are evaluated
using binary and unary fuzzy quantifiers, respectively. One of the main targets
of this study is to examine the granular representation of different models of
FQFRS. The main findings reveal that Choquet-based fuzzy rough sets can be
represented granularly under the same conditions as OWA-based fuzzy rough sets,
whereas Sugeno-based FRS can always be represented granularly. This observation
highlights the potential of these models for resolving data inconsistencies and
managing noise.
| [
{
"version": "v1",
"created": "Wed, 27 Dec 2023 20:02:40 GMT"
}
] | 1,710,806,400,000 | [
[
"Theerens",
"Adnan",
""
],
[
"Cornelis",
"Chris",
""
]
] |
2312.17445 | Jie Shuai | Jia Liu, Jie Shuai, Xiyao Li | State Machine of Thoughts: Leveraging Past Reasoning Trajectories for
Enhancing Problem Solving | 9 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current Large Language Model-based agents reason within an
exploration-evaluation framework, navigating problem-solving processes in a
tree-like manner. However, these methods often neglect successful reasoning
trajectories once a problem is resolved, leading to inefficient use of these
trajectories for future analogous problems. To address this inefficiency, we
adopt a state machine to record experience derived from previous reasoning
trajectories. Within the state machine, states represent decomposed
sub-problems, while state transitions reflect the dependencies among
sub-problems. The state machine records both successful and failed
trajectories. Utilizing the experience from the state machine, our proposed
State Machine of Thoughts (SMoT) selects the best sub-solutions and
avoids incorrect ones. Our experiments show that SMoT can significantly improve
problem-solving abilities in two exploration-intensive problems: the 24-point
game and a taxi navigation reinforcement learning game.
| [
{
"version": "v1",
"created": "Fri, 29 Dec 2023 03:00:04 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Mar 2024 02:16:07 GMT"
}
] | 1,710,201,600,000 | [
[
"Liu",
"Jia",
""
],
[
"Shuai",
"Jie",
""
],
[
"Li",
"Xiyao",
""
]
] |
2312.17653 | Ming Yan | Ming Yan, Ruihao Li, Hao Zhang, Hao Wang, Zhilan Yang, Ji Yan | LARP: Language-Agent Role Play for Open-World Games | 12 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language agents have shown impressive problem-solving skills within defined
settings and brief timelines. Yet, with the ever-evolving complexities of
open-world simulations, there's a pressing need for agents that can flexibly
adapt to complex environments and consistently maintain a long-term memory to
ensure coherent actions. To bridge the gap between language agents and
open-world games, we introduce Language Agent for Role-Playing (LARP), which
includes a cognitive architecture that encompasses memory processing and a
decision-making assistant, an environment interaction module with a
feedback-driven learnable action space, and a postprocessing method that
promotes the alignment of various personalities. The LARP framework refines
interactions between users and agents, predefined with unique backgrounds and
personalities, ultimately enhancing the gaming experience in open-world
contexts. Furthermore, it highlights the diverse uses of language models in a
range of areas such as entertainment, education, and various simulation
scenarios. The project page is released at https://miao-ai-lab.github.io/LARP/.
| [
{
"version": "v1",
"created": "Sun, 24 Dec 2023 10:08:59 GMT"
}
] | 1,704,067,200,000 | [
[
"Yan",
"Ming",
""
],
[
"Li",
"Ruihao",
""
],
[
"Zhang",
"Hao",
""
],
[
"Wang",
"Hao",
""
],
[
"Yang",
"Zhilan",
""
],
[
"Yan",
"Ji",
""
]
] |
2401.00005 | Evgenii Vityaev | Evgenii Vityaev | Consciousness as a logically consistent and prognostic model of reality | 22 pages | null | 10.1016/j.cogsys.2019.09.021 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The work demonstrates that the brain might reflect the causal relationships of the external world in the form of a logically consistent and prognostic model of
reality, which shows up as consciousness. The paper analyses and solves the
problem of statistical ambiguity and provides a formal model of causal
relationships as probabilistic maximally specific rules. We suppose that brain
makes all possible inferences from causal relationships. We prove that the
suggested formal model has a property of an unambiguous inference: from
consistent premises we infer a consistent conclusion. It enables a set of all
inferences to form a consistent model of the perceived world. Causal
relationships may create fixed points of cyclic inter-predictable properties.
We consider the "natural" classification introduced by John St. Mill and
demonstrate that a variety of fixed points of the objects' attributes forms a
"natural" classification of the external world. Then we consider notions of
"natural" categories and causal models of categories, introduced by Eleanor
Rosch and Bob Rehder and demonstrate that fixed points of causal relationships
between objects attributes, which we perceive, formalize these notions. If the
"natural" classification describes the objects of the external world, and
"natural" concepts the perception of these objects, then the theory of
integrated information, introduced by G. Tononi, describes the information
processes of the brain for "natural" concepts formation that reflects the
"natural" classification. We argue that integrated information provides high
accuracy of object identification. A computer-based experiment is provided
that illustrates fixed points formation for coded digits.
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2023 14:07:20 GMT"
}
] | 1,704,153,600,000 | [
[
"Vityaev",
"Evgenii",
""
]
] |
2401.00006 | Shaopeng Zhai | Shaopeng Zhai, Jie Wang, Tianyi Zhang, Fuxian Huang, Qi Zhang, Ming
Zhou, Jing Hou, Yu Qiao and Yu Liu | Building Open-Ended Embodied Agent via Language-Policy Bidirectional
Adaptation | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Building embodied agents by integrating Large Language Models (LLMs) and Reinforcement Learning (RL) has revolutionized human-AI interaction:
researchers can now leverage language instructions to plan decision-making for
open-ended tasks. However, existing research faces challenges in meeting the
requirement of open-endedness. They typically either train LLM/RL models to
adapt to a fixed counterpart, limiting exploration of novel skills and
hindering the efficacy of human-AI interaction. To this end, we present
OpenPAL, a co-training framework comprising two stages: (1) fine-tuning a
pre-trained LLM to translate human instructions into goals for planning, and
goal-conditioned training a policy for decision-making; (2) co-training to
align the LLM and policy, achieving instruction open-endedness. We conducted
experiments using Contra, an open-ended FPS game, demonstrating that an agent
trained with OpenPAL not only comprehends arbitrary instructions but also
exhibits efficient execution. These results suggest that OpenPAL holds the
potential to construct open-ended embodied agents in practical scenarios.
| [
{
"version": "v1",
"created": "Tue, 12 Dec 2023 11:06:07 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Feb 2024 03:39:25 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Feb 2024 16:30:55 GMT"
}
] | 1,707,264,000,000 | [
[
"Zhai",
"Shaopeng",
""
],
[
"Wang",
"Jie",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Huang",
"Fuxian",
""
],
[
"Zhang",
"Qi",
""
],
[
"Zhou",
"Ming",
""
],
[
"Hou",
"Jing",
""
],
[
"Qiao",
"Yu",
""
],
[
"Liu",
"Yu",
""
]
] |
2401.00062 | Mena Rizk | Mena Rizk, Daniela Rosu, Mark Fox | Semantic Computing for Organizational Effectiveness: From Organization
Theory to Practice through Semantics-Based Modelling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A critical function of an organization is to foster the level of integration
(coordination and cooperation) necessary to achieve its objectives. The need to
coordinate and the motivation to cooperate emerge from the myriad dependencies between an organization's members and their work. Therefore, reasoning about solutions to coordination and cooperation problems requires a robust
representation that includes the underlying dependencies. We find that such a
representation remains missing from formal organizational models, and we
leverage semantics to bridge this gap. Drawing on well-established
organizational research and our extensive fieldwork with one of North America's
largest municipalities, (1) we introduce an ontology, formalized in first-order
logic, that operationalizes concepts like outcome, reward, and epistemic
dependence, and their links to potential integration risks; and (2) present
real-world applications of this ontology to analyze and support integration in
complex government infrastructure projects. Our ontology is implemented and
validated in both Z3 and OWL. Key features of our model include inferable
dependencies, explainable coordination and cooperation risks, and actionable
insights on how dependency structures within an organization can be altered to
mitigate the risks. Conceptualizing real-world challenges like incentive
misalignment, free-riding, and subgoal optimization in terms of dependency
structures, our semantics-based approach represents a novel method for
modelling and enhancing coordination and cooperation. Integrated within a
decision-support system, our model may serve as an impactful aid for
organizational design and effectiveness. More broadly, our approach underscores
the transformative potential of semantics in deriving tangible, real-world
value from existing organization theory.
| [
{
"version": "v1",
"created": "Fri, 29 Dec 2023 19:37:35 GMT"
}
] | 1,704,153,600,000 | [
[
"Rizk",
"Mena",
""
],
[
"Rosu",
"Daniela",
""
],
[
"Fox",
"Mark",
""
]
] |
2401.00211 | Longchao Da | Longchao Da, Kuanru Liou, Tiejin Chen, Xuesong Zhou, Xiangyong Luo,
Yezhou Yang, Hua Wei | Open-TI: Open Traffic Intelligence with Augmented Language Model | 22 pages main content, 8 pages appendix | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transportation has greatly benefited the cities' development in the modern
civilization process. Intelligent transportation, leveraging advanced computer
algorithms, could further increase people's daily commuting efficiency.
However, intelligent transportation, as a cross-discipline, often requires
practitioners to comprehend complicated algorithms and obscure neural networks,
making it challenging for these advanced techniques to be trusted and deployed in practical industry. Recognizing the expressiveness of the pre-trained large
language models, especially the potential of being augmented with abilities to
understand and execute intricate commands, we introduce Open-TI. Serving as a
bridge to mitigate the industry-academic gap, Open-TI is an innovative model
targeting the goal of Turing Indistinguishable Traffic Intelligence; it is
augmented with the capability to harness external traffic analysis packages
based on existing conversations. Marking its distinction, Open-TI is the first
method capable of conducting exhaustive traffic analysis from scratch -
spanning from map data acquisition to the eventual execution in complex
simulations. Besides, Open-TI is able to conduct task-specific embodiment like
training and adapting the traffic signal control policies (TSC), explore demand
optimizations, etc. Furthermore, we explored the viability of LLMs directly
serving as control agents: by understanding the expected intentions from Open-TI, we designed an agent-to-agent communication mode that supports Open-TI conveying messages to ChatZero (the control agent), which then chooses from the action space to proceed with execution. We eventually
provide the formal implementation structure, and the open-ended design invites
further community-driven enhancements.
| [
{
"version": "v1",
"created": "Sat, 30 Dec 2023 11:50:11 GMT"
}
] | 1,704,153,600,000 | [
[
"Da",
"Longchao",
""
],
[
"Liou",
"Kuanru",
""
],
[
"Chen",
"Tiejin",
""
],
[
"Zhou",
"Xuesong",
""
],
[
"Luo",
"Xiangyong",
""
],
[
"Yang",
"Yezhou",
""
],
[
"Wei",
"Hua",
""
]
] |
2401.00298 | Boaz Taitler | Omer Ben-Porat, Yishay Mansour, Michal Moshkovitz, Boaz Taitler | Principal-Agent Reward Shaping in MDPs | Full version of a paper accepted to AAAI'24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal-agent problems arise when one party acts on behalf of another,
leading to conflicts of interest. The economic literature has extensively
studied principal-agent problems, and recent work has extended this to more
complex scenarios such as Markov Decision Processes (MDPs). In this paper, we
further explore this line of research by investigating how reward shaping under
budget constraints can improve the principal's utility. We study a two-player
Stackelberg game where the principal and the agent have different reward
functions, and the agent chooses an MDP policy for both players. The principal
offers an additional reward to the agent, and the agent picks their policy
selfishly to maximize their reward, which is the sum of the original and the
offered reward. Our results establish the NP-hardness of the problem and offer
polynomial approximation algorithms for two classes of instances: Stochastic
trees and deterministic decision processes with a finite horizon.
| [
{
"version": "v1",
"created": "Sat, 30 Dec 2023 18:30:44 GMT"
}
] | 1,704,153,600,000 | [
[
"Ben-Porat",
"Omer",
""
],
[
"Mansour",
"Yishay",
""
],
[
"Moshkovitz",
"Michal",
""
],
[
"Taitler",
"Boaz",
""
]
] |
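The reward-shaping interaction described in the abstract of 2401.00298 above can be sketched numerically: the principal offers a bonus on top of the agent's reward, the agent best-responds with value iteration on the shaped reward, and the principal evaluates its own return under the induced policy. The random MDP, the discount factor, and the particular bonus offer below are hypothetical, and the paper's approximation algorithms for stochastic trees and finite-horizon deterministic processes are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, GAMMA = 4, 2, 0.9

# Toy MDP (hypothetical): random transition kernel and reward tables.
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
R_agent = rng.random((S, A))
R_principal = rng.random((S, A))

def best_response(bonus, iters=500):
    """Agent's optimal deterministic policy for the shaped reward R_agent + bonus (value iteration)."""
    V = np.zeros(S)
    for _ in range(iters):
        V = (R_agent + bonus + GAMMA * P @ V).max(axis=1)
    return (R_agent + bonus + GAMMA * P @ V).argmax(axis=1)

def principal_return(policy, iters=500):
    """Principal's discounted value (uniform start state) under the agent's chosen policy."""
    V = np.zeros(S)
    for _ in range(iters):
        V = np.array([R_principal[s, policy[s]] + GAMMA * P[s, policy[s]] @ V for s in range(S)])
    return V.mean()

no_bonus = np.zeros((S, A))
toy_bonus = np.zeros((S, A)); toy_bonus[:, 1] = 0.3   # a budgeted offer that pays extra for action 1
print("principal value, no shaping :", principal_return(best_response(no_bonus)))
print("principal value, with bonus :", principal_return(best_response(toy_bonus)))
```

Searching over which bonus to offer, subject to a budget, is exactly the hard part the abstract refers to; the sketch only evaluates two fixed offers.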
2401.00430 | Weijian Mai | Weijian Mai, Jian Zhang, Pengfei Fang, Zhijun Zhang | Brain-Conditional Multimodal Synthesis: A Survey and Taxonomy | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of Artificial Intelligence Generated Content (AIGC), conditional
multimodal synthesis technologies (e.g., text-to-image, text-to-video,
text-to-audio, etc) are gradually reshaping the natural content in the real
world. The key to multimodal synthesis technology is to establish the mapping
relationship between different modalities. Brain signals, serving as potential
reflections of how the brain interprets external information, exhibit a
distinctive One-to-Many correspondence with various external modalities. This
correspondence makes brain signals emerge as a promising guiding condition for
multimodal content synthesis. Brain-conditional multimodal synthesis refers to
decoding brain signals back to perceptual experience, which is crucial for
developing practical brain-computer interface systems and unraveling complex
mechanisms underlying how the brain perceives and comprehends external stimuli.
This survey comprehensively examines the emerging field of AIGC-based
Brain-conditional Multimodal Synthesis, termed AIGC-Brain, to delineate the
current landscape and future directions. To begin, related brain neuroimaging
datasets, functional brain regions, and mainstream generative models are
introduced as the foundation of AIGC-Brain decoding and analysis. Next, we
provide a comprehensive taxonomy for AIGC-Brain decoding models and present
task-specific representative work and detailed implementation strategies to
facilitate comparison and in-depth analysis. Quality assessments are then
introduced for both qualitative and quantitative evaluation. Finally, this
survey explores insights gained, providing current challenges and outlining
prospects of AIGC-Brain. Being the inaugural survey in this domain, this paper
paves the way for the progress of AIGC-Brain research, offering a foundational
overview to guide future work.
| [
{
"version": "v1",
"created": "Sun, 31 Dec 2023 09:00:40 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jan 2024 08:50:27 GMT"
}
] | 1,704,326,400,000 | [
[
"Mai",
"Weijian",
""
],
[
"Zhang",
"Jian",
""
],
[
"Fang",
"Pengfei",
""
],
[
"Zhang",
"Zhijun",
""
]
] |
2401.00880 | Till Hofmann | Till Hofmann | Towards Bridging the Gap between High-Level Reasoning and Execution on
Robots | PhD Thesis | null | 10.18154/RWTH-2023-10508 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | When reasoning about actions, e.g., by means of task planning or agent
programming with Golog, the robot's actions are typically modeled on an
abstract level, where complex actions such as picking up an object are treated
as atomic primitives with deterministic effects and preconditions that only
depend on the current state. However, when executing such an action on a robot
it can no longer be seen as a primitive. Instead, action execution is a complex
task involving multiple steps with additional temporal preconditions and timing
constraints. Furthermore, the action may be noisy, e.g., producing erroneous
sensing results and not always having the desired effects. While these aspects
are typically ignored in reasoning tasks, they need to be dealt with during
execution. In this thesis, we propose several approaches towards closing this
gap.
| [
{
"version": "v1",
"created": "Sat, 30 Dec 2023 12:26:12 GMT"
}
] | 1,704,240,000,000 | [
[
"Hofmann",
"Till",
""
]
] |
2401.01459 | Ananya Joshi | Ananya Joshi, Tina Townes, Nolan Gormley, Luke Neureiter, Roni
Rosenfeld, Bryan Wilder | Outlier Ranking in Large-Scale Public Health Streams | 6 figures, 8 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disease control experts inspect public health data streams daily for outliers
worth investigating, like those corresponding to data quality issues or disease
outbreaks. However, they can only examine a few of the thousands of
maximally-tied outliers returned by univariate outlier detection methods
applied to large-scale public health data streams. To help experts distinguish
the most important outliers from these thousands of tied outliers, we propose a
new task for algorithms to rank the outputs of any univariate method applied to
each of many streams. Our novel algorithm for this task, which leverages
hierarchical networks and extreme value analysis, performed the best across
traditional outlier detection metrics in a human-expert evaluation using public
health data streams. Most importantly, experts have used our open-source Python
implementation since April 2023 and report identifying outliers worth
investigating 9.1x faster than their prior baseline. Other organizations can
readily adapt this implementation to create rankings from the outputs of their
tailored univariate methods across large-scale streams.
| [
{
"version": "v1",
"created": "Tue, 2 Jan 2024 23:08:49 GMT"
}
] | 1,704,326,400,000 | [
[
"Joshi",
"Ananya",
""
],
[
"Townes",
"Tina",
""
],
[
"Gormley",
"Nolan",
""
],
[
"Neureiter",
"Luke",
""
],
[
"Rosenfeld",
"Roni",
""
],
[
"Wilder",
"Bryan",
""
]
] |
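The ranking idea in the abstract of 2401.01459 above, scoring univariate outliers across many streams with extreme value analysis so that maximally-tied outliers can be ordered, can be sketched as follows. The synthetic stream histories, the 90% threshold, and the generalized Pareto tail fit are assumptions for illustration; this is not the authors' open-source implementation, which also leverages hierarchical network structure.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)

# Hypothetical historical values for a few public-health streams, plus today's value.
streams = {f"county_{i}": rng.gamma(shape=2.0, scale=10.0, size=500) for i in range(5)}
today = {name: values.max() * rng.uniform(1.0, 1.5) for name, values in streams.items()}

def tail_surprise(history, x, quantile=0.9):
    """Fit a generalized Pareto to exceedances over a high threshold and return
    -log survival probability of x (larger means more surprising)."""
    u = np.quantile(history, quantile)
    excess = history[history > u] - u
    c, loc, scale = genpareto.fit(excess, floc=0.0)
    sf = genpareto.sf(max(x - u, 0.0), c, loc=loc, scale=scale)
    return -np.log(sf + 1e-12)

ranking = sorted(streams, key=lambda name: tail_surprise(streams[name], today[name]), reverse=True)
print("streams ranked by how surprising today's value is:", ranking)
```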
2401.01753 | Sean Moran | Amal Vaidya, Mohan Krishna Vankayalapati, Jacky Chan, Senad
Ibraimoski, Sean Moran | A Generative AI Assistant to Accelerate Cloud Migration | arXiv admin comment: This version has been removed by arXiv
administrators as the submitter did not have the rights to agree to the
license at the time of submission | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a tool that leverages generative AI to accelerate the migration of
on-premises applications to the cloud. The Cloud Migration LLM accepts input
from the user specifying the parameters of their migration, and outputs a
migration strategy with an architecture diagram. A user study suggests that the
migration LLM can assist inexperienced users in finding the right cloud
migration profile, while avoiding complexities of a manual approach.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2024 14:13:24 GMT"
}
] | 1,705,017,600,000 | [
[
"Vaidya",
"Amal",
""
],
[
"Vankayalapati",
"Mohan Krishna",
""
],
[
"Chan",
"Jacky",
""
],
[
"Ibraimoski",
"Senad",
""
],
[
"Moran",
"Sean",
""
]
] |
2401.01814 | Fazl Barez | Michelle Lo, Shay B. Cohen, Fazl Barez | Large Language Models Relearn Removed Concepts | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Advances in model editing through neuron pruning hold promise for removing
undesirable concepts from large language models. However, it remains unclear
whether models have the capacity to reacquire pruned concepts after editing. To
investigate this, we evaluate concept relearning in models by tracking concept
saliency and similarity in pruned neurons during retraining. Our findings
reveal that models can quickly regain performance post-pruning by relocating
advanced concepts to earlier layers and reallocating pruned concepts to primed
neurons with similar semantics. This demonstrates that models exhibit
polysemantic capacities and can blend old and new concepts in individual
neurons. While neuron pruning provides interpretability into model concepts,
our results highlight the challenges of permanent concept removal for improved
model \textit{safety}. Monitoring concept reemergence and developing techniques
to mitigate relearning of unsafe concepts will be important directions for more
robust model editing. Overall, our work strongly demonstrates the resilience
and fluidity of concept representations in LLMs post concept removal.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2024 16:15:57 GMT"
}
] | 1,704,326,400,000 | [
[
"Lo",
"Michelle",
""
],
[
"Cohen",
"Shay B.",
""
],
[
"Barez",
"Fazl",
""
]
] |
2401.01836 | Cheng Chi | Cheng Chi | Neural Control: Concurrent System Identification and Control Learning
with Neural ODE | 9 pages, code open sourced in format of Google Colab notebooks;
Resubmitted for adding missed references in the last submission | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Controlling continuous-time dynamical systems is generally a two step
process: first, identify or model the system dynamics with differential
equations, then, minimize the control objectives to achieve optimal control
function and optimal state trajectories. However, any inaccuracy in dynamics
modeling will lead to sub-optimality in the resulting control function. To
address this, we propose a neural ODE based method for controlling unknown
dynamical systems, denoted as Neural Control (NC), which combines dynamics
identification and optimal control learning using a coupled neural ODE. Through
an intriguing interplay between the two neural networks in the coupled neural ODE structure, our model concurrently learns the system dynamics as well as optimal controls that guide the system towards target states. Our experiments demonstrate the
effectiveness of our model for learning optimal control of unknown dynamical
systems. Codes available at
https://github.com/chichengmessi/neural_ode_control/tree/main
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2024 17:05:17 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jan 2024 02:24:09 GMT"
},
{
"version": "v3",
"created": "Sun, 4 Feb 2024 15:27:07 GMT"
},
{
"version": "v4",
"created": "Mon, 22 Apr 2024 16:43:11 GMT"
}
] | 1,713,830,400,000 | [
[
"Chi",
"Cheng",
""
]
] |
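The coupled training loop described in the abstract of 2401.01836 above, where one network identifies the unknown dynamics while another learns the control and both are trained jointly through a differentiable rollout, can be sketched in plain PyTorch. The toy damped oscillator, the Euler integration used in place of a neural ODE solver, the loss weighting, and the network sizes are all assumptions; the authors' actual implementation is at the repository linked in the abstract.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM, CTRL_DIM = 2, 1
TARGET = torch.tensor([1.0, 0.0])
DT, HORIZON = 0.05, 20

# One network models the unknown dynamics, another outputs the control signal.
dynamics = nn.Sequential(nn.Linear(STATE_DIM + CTRL_DIM, 32), nn.Tanh(), nn.Linear(32, STATE_DIM))
controller = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(), nn.Linear(32, CTRL_DIM))
opt = torch.optim.Adam(list(dynamics.parameters()) + list(controller.parameters()), lr=1e-2)

def true_system(x, u):
    """Ground-truth plant (a toy damped oscillator driven by the control input)."""
    return torch.stack([x[1], -0.5 * x[1] - x[0] + u[0]])

for it in range(300):
    x_model = torch.zeros(STATE_DIM)          # state predicted by the learned dynamics
    x_true = torch.zeros(STATE_DIM)           # state of the real (unknown) system
    id_loss = torch.tensor(0.0)
    for _ in range(HORIZON):                  # simple Euler rollout instead of an ODE solver
        u = controller(x_model)
        dx_model = dynamics(torch.cat([x_model, u]))
        dx_true = true_system(x_true, u.detach())
        id_loss = id_loss + ((dx_model - dx_true) ** 2).mean()   # system identification term
        x_model = x_model + DT * dx_model
        x_true = x_true + DT * dx_true
    ctrl_loss = ((x_model - TARGET) ** 2).mean()                 # drive the rollout to the target
    opt.zero_grad()
    (id_loss + ctrl_loss).backward()
    opt.step()

print("final model-predicted state:", x_model.detach())
```

The two loss terms trained through the same rollout are what make the identification and control objectives "concurrent" in the sense of the abstract.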
2401.02500 | Vishal Pallagani | Vishal Pallagani, Kaushik Roy, Bharath Muppasani, Francesco Fabiano,
Andrea Loreggia, Keerthiram Murugesan, Biplav Srivastava, Francesca Rossi,
Lior Horesh, Amit Sheth | On the Prospects of Incorporating Large Language Models (LLMs) in
Automated Planning and Scheduling (APS) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automated Planning and Scheduling is among the growing areas in Artificial
Intelligence (AI) where mention of LLMs has gained popularity. Based on a
comprehensive review of 126 papers, this paper investigates eight categories
based on the unique applications of LLMs in addressing various aspects of
planning problems: language translation, plan generation, model construction,
multi-agent planning, interactive planning, heuristics optimization, tool
integration, and brain-inspired planning. For each category, we articulate the
issues considered and existing gaps. A critical insight resulting from our
review is that the true potential of LLMs unfolds when they are integrated with
traditional symbolic planners, pointing towards a promising neuro-symbolic
approach. This approach effectively combines the generative aspects of LLMs
with the precision of classical planning methods. By synthesizing insights from
existing literature, we underline the potential of this integration to address
complex planning challenges. Our goal is to encourage the ICAPS community to
recognize the complementary strengths of LLMs and symbolic planners, advocating
for a direction in automated planning that leverages these synergistic
capabilities to develop more advanced and intelligent planning systems.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2024 19:22:09 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Jan 2024 12:10:26 GMT"
}
] | 1,705,968,000,000 | [
[
"Pallagani",
"Vishal",
""
],
[
"Roy",
"Kaushik",
""
],
[
"Muppasani",
"Bharath",
""
],
[
"Fabiano",
"Francesco",
""
],
[
"Loreggia",
"Andrea",
""
],
[
"Murugesan",
"Keerthiram",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Rossi",
"Francesca",
""
],
[
"Horesh",
"Lior",
""
],
[
"Sheth",
"Amit",
""
]
] |
2401.02643 | Zicong Hong | Jiahang Zhou, Yanyu Chen, Zicong Hong, Wuhui Chen, Yue Yu, Tao Zhang,
Hui Wang, Chuanfu Zhang, Zibin Zheng | Training and Serving System of Foundation Models: A Comprehensive Survey | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models (e.g., ChatGPT, DALL-E, PengCheng Mind, PanGu-$\Sigma$)
have demonstrated extraordinary performance in key technological areas, such as
natural language processing and visual recognition, and have become the
mainstream trend of artificial general intelligence. This has led more and more
major technology giants to dedicate significant human and financial resources
to actively develop their foundation model systems, which drives continuous
growth of these models' parameters. As a result, the training and serving of
these models have posed significant challenges, including substantial computing
power, memory consumption, bandwidth demands, etc. Therefore, employing
efficient training and serving strategies becomes particularly crucial. Many
researchers have actively explored and proposed effective methods. So, a
comprehensive survey of them is essential for system developers and
researchers. This paper extensively explores the methods employed in training
and serving foundation models from various perspectives. It provides a detailed
categorization of these state-of-the-art methods, including finer aspects such
as network, computing, and storage. Additionally, the paper summarizes the
challenges and presents a perspective on the future development direction of
foundation model systems. Through comprehensive discussion and analysis, it
hopes to provide a solid theoretical basis and practical guidance for future
research and applications, promoting continuous innovation and development in
foundation model systems.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 05:27:15 GMT"
}
] | 1,704,672,000,000 | [
[
"Zhou",
"Jiahang",
""
],
[
"Chen",
"Yanyu",
""
],
[
"Hong",
"Zicong",
""
],
[
"Chen",
"Wuhui",
""
],
[
"Yu",
"Yue",
""
],
[
"Zhang",
"Tao",
""
],
[
"Wang",
"Hui",
""
],
[
"Zhang",
"Chuanfu",
""
],
[
"Zheng",
"Zibin",
""
]
] |
2401.02703 | Abisha Thapa Magar | Abisha Thapa Magar, Anup Shakya, Somdeb Sarkhel, Deepak Venugopal | Verifying Relational Explanations: A Probabilistic Approach | Published in Proceedings of 2023 IEEE Conference on Big Data | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Explanations on relational data are hard to verify since the explanation
structures are more complex (e.g. graphs). To verify interpretable explanations
(e.g. explanations of predictions made in images, text, etc.), typically human
subjects are used, since this does not necessarily require a lot of expertise. However, verifying the quality of a relational explanation requires expertise and is hard to scale up. GNNExplainer is arguably one of the most popular
explanation methods for Graph Neural Networks. In this paper, we develop an
approach where we assess the uncertainty in explanations generated by
GNNExplainer. Specifically, we ask the explainer to generate explanations for
several counterfactual examples. We generate these examples as symmetric
approximations of the relational structure in the original data. From these
explanations, we learn a factor graph model to quantify uncertainty in an
explanation. Our results on several datasets show that our approach can help
verify explanations from GNNExplainer by reliably estimating the uncertainty of
a relation specified in the explanation.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 08:14:51 GMT"
}
] | 1,704,672,000,000 | [
[
"Magar",
"Abisha Thapa",
""
],
[
"Shakya",
"Anup",
""
],
[
"Sarkhel",
"Somdeb",
""
],
[
"Venugopal",
"Deepak",
""
]
] |
2401.02705 | Zhitao Wang | Zhitao Wang, Wei Wang, Zirao Li, Long Wang, Can Yi, Xinjie Xu, Luyang
Cao, Hanjing Su, Shouzhi Chen, Jun Zhou | XUAT-Copilot: Multi-Agent Collaborative System for Automated User
Acceptance Testing with Large Language Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In past years, we have been dedicated to automating user acceptance testing
(UAT) process of WeChat Pay, one of the most influential mobile payment
applications in China. A system titled XUAT has been developed for this
purpose. However, there is still a human-labor-intensive stage, i.e, test
scripts generation, in the current system. Therefore, in this paper, we
concentrate on methods of boosting the automation level of the current system,
particularly the stage of test scripts generation. With recent notable
successes, large language models (LLMs) demonstrate significant potential in
attaining human-like intelligence and there has been a growing research area
that employs LLMs as autonomous agents to obtain human-like decision-making
capabilities. Inspired by these works, we propose an LLM-powered multi-agent
collaborative system, named XUAT-Copilot, for automated UAT. The proposed
system mainly consists of three LLM-based agents responsible for action
planning, state checking and parameter selecting, respectively, and two
additional modules for state sensing and case rewriting. The agents interact
with the testing device, make human-like decisions and generate action commands in a collaborative way. In our experimental studies, the proposed multi-agent system achieves effectiveness close to that of human testers and gains a
significant improvement of Pass@1 accuracy compared with single-agent
architecture. More importantly, the proposed system has launched in the formal
testing environment of WeChat Pay mobile app, which saves a considerable amount
of manpower in the daily development work.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 08:24:30 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jan 2024 12:08:44 GMT"
}
] | 1,704,931,200,000 | [
[
"Wang",
"Zhitao",
""
],
[
"Wang",
"Wei",
""
],
[
"Li",
"Zirao",
""
],
[
"Wang",
"Long",
""
],
[
"Yi",
"Can",
""
],
[
"Xu",
"Xinjie",
""
],
[
"Cao",
"Luyang",
""
],
[
"Su",
"Hanjing",
""
],
[
"Chen",
"Shouzhi",
""
],
[
"Zhou",
"Jun",
""
]
] |
2401.02731 | Haoyuan Wu | Haoyuan Wu, Haisheng Zheng, Zhuolun He, Bei Yu | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts
for Instruction Tuning on General Tasks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated considerable proficiency in
general natural language processing (NLP) tasks. Instruction tuning, a
successful paradigm, enhances the ability of LLMs to follow natural language
instructions and exhibit robust generalization across a wide range of tasks.
However, these models often encounter performance limitations across multiple
tasks due to constrained model capacity. Expanding this capacity during the
instruction tuning phase poses significant challenges. To address this issue,
we introduce a novel approach, Parameter-Efficient Sparsity Crafting (PESC),
which transitions dense models to sparse models using a Mixture of Experts
(MoE) architecture. PESC integrates adapters into the MoE layers of sparse
models, differentiating experts without altering the individual weights within
these layers. This method significantly reduces computational costs and GPU
memory requirements, facilitating model capacity expansion through a minimal
increase in parameters via the inserted adapters. Our empirical evaluation
demonstrates the effectiveness of the PESC method. Using PESC during
instruction tuning, our sparse models, dubbed Camelidae, outperform all other open-source sparse models and exhibit superior general capabilities compared to
GPT3.5.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 09:58:09 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jan 2024 12:51:21 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Feb 2024 02:20:30 GMT"
}
] | 1,707,782,400,000 | [
[
"Wu",
"Haoyuan",
""
],
[
"Zheng",
"Haisheng",
""
],
[
"He",
"Zhuolun",
""
],
[
"Yu",
"Bei",
""
]
] |
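The core construction in the abstract of 2401.02731 above, turning a dense FFN into a Mixture-of-Experts layer whose experts differ only by small adapters so that the dense weights stay frozen and few trainable parameters are added, can be sketched as a single PyTorch module. The layer sizes, the top-2 routing, and the dense (non-dispatched) expert evaluation below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AdapterExpertMoE(nn.Module):
    """Experts share one frozen dense FFN and differ only through small bottleneck adapters,
    so the capacity expansion adds few trainable parameters (all sizes here are made up)."""

    def __init__(self, d_model=64, d_ff=256, n_experts=4, d_adapter=8, top_k=2):
        super().__init__()
        self.shared_ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        for p in self.shared_ffn.parameters():
            p.requires_grad = False                       # the dense weights stay untouched
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_adapter), nn.GELU(), nn.Linear(d_adapter, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                                 # x: (tokens, d_model)
        base = self.shared_ffn(x)
        gate = torch.softmax(self.router(x), dim=-1)      # (tokens, n_experts)
        _, topk_idx = gate.topk(self.top_k, dim=-1)
        keep = torch.zeros_like(gate).scatter(-1, topk_idx, 1.0)
        gate = gate * keep
        gate = gate / gate.sum(dim=-1, keepdim=True)
        # For clarity every adapter is evaluated on every token; a real MoE dispatches sparsely.
        expert_out = torch.stack([adapter(x) for adapter in self.adapters], dim=1)  # (tokens, E, d)
        return base + (gate.unsqueeze(-1) * expert_out).sum(dim=1)

layer = AdapterExpertMoE()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)                                # torch.Size([10, 64])
```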
2401.02851 | Akhil Vaid | Akhil Vaid, Joshua Lampert, Juhee Lee, Ashwin Sawant, Donald Apakama,
Ankit Sakhuja, Ali Soroush, Denise Lee, Isotta Landi, Nicole Bussola, Ismail
Nabeel, Robbie Freeman, Patricia Kovatch, Brendan Carr, Benjamin Glicksberg,
Edgar Argulian, Stamatios Lerakis, Monica Kraft, Alexander Charney, Girish
Nadkarni | Generative Large Language Models are autonomous practitioners of
evidence-based medicine | Word count: 4548 words, Figures: 4, Tables: 4 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: Evidence-based medicine (EBM) is fundamental to modern clinical
practice, requiring clinicians to continually update their knowledge and apply
the best clinical evidence in patient care. The practice of EBM faces
challenges due to rapid advancements in medical research, leading to
information overload for clinicians. The integration of artificial intelligence
(AI), specifically Generative Large Language Models (LLMs), offers a promising
solution towards managing this complexity.
Methods: This study involved the curation of real-world clinical cases across
various specialties, converting them into .json files for analysis. LLMs,
including proprietary models like ChatGPT 3.5 and 4, Gemini Pro, and
open-source models like LLaMA v2 and Mixtral-8x7B, were employed. These models
were equipped with tools to retrieve information from case files and make
clinical decisions similar to how clinicians must operate in the real world.
Model performance was evaluated based on correctness of final answer, judicious
use of tools, conformity to guidelines, and resistance to hallucinations.
Results: GPT-4 was most capable of autonomous operation in a clinical
setting, being generally more effective in ordering relevant investigations and
conforming to clinical guidelines. Limitations were observed in terms of model
ability to handle complex guidelines and diagnostic nuances. Retrieval
Augmented Generation made recommendations more tailored to patients and
healthcare systems.
Conclusions: LLMs can be made to function as autonomous practitioners of
evidence-based medicine. Their ability to utilize tooling can be harnessed to
interact with the infrastructure of a real-world healthcare system and perform
the tasks of patient management in a guideline directed manner. Prompt
engineering may help to further enhance this potential and transform healthcare
for the clinician and the patient.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 15:09:57 GMT"
}
] | 1,704,672,000,000 | [
[
"Vaid",
"Akhil",
""
],
[
"Lampert",
"Joshua",
""
],
[
"Lee",
"Juhee",
""
],
[
"Sawant",
"Ashwin",
""
],
[
"Apakama",
"Donald",
""
],
[
"Sakhuja",
"Ankit",
""
],
[
"Soroush",
"Ali",
""
],
[
"Lee",
"Denise",
""
],
[
"Landi",
"Isotta",
""
],
[
"Bussola",
"Nicole",
""
],
[
"Nabeel",
"Ismail",
""
],
[
"Freeman",
"Robbie",
""
],
[
"Kovatch",
"Patricia",
""
],
[
"Carr",
"Brendan",
""
],
[
"Glicksberg",
"Benjamin",
""
],
[
"Argulian",
"Edgar",
""
],
[
"Lerakis",
"Stamatios",
""
],
[
"Kraft",
"Monica",
""
],
[
"Charney",
"Alexander",
""
],
[
"Nadkarni",
"Girish",
""
]
] |
2401.03082 | Qingyuan Li | Lin Sun, Kai Zhang, Qingyuan Li, Renze Lou | UMIE: Unified Multimodal Information Extraction with Instruction Tuning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal information extraction (MIE) gains significant attention as the
popularity of multimedia content increases. However, current MIE methods often
resort to using task-specific model structures, which results in limited
generalizability across tasks and underutilizes shared knowledge across MIE
tasks. To address these issues, we propose UMIE, a unified multimodal
information extractor to unify three MIE tasks as a generation problem using
instruction tuning, which can effectively extract both textual and visual
mentions. Extensive experiments show that our single UMIE outperforms various
state-of-the-art (SoTA) methods across six MIE datasets on three tasks.
Furthermore, in-depth analysis demonstrates UMIE's strong generalization in the
zero-shot setting, robustness to instruction variants, and interpretability.
Our research serves as an initial step towards a unified MIE model and
initiates the exploration into both instruction tuning and large language
models within the MIE domain. Our code, data, and model are available at
https://github.com/ZUCC-AI/UMIE
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 22:52:15 GMT"
}
] | 1,704,758,400,000 | [
[
"Sun",
"Lin",
""
],
[
"Zhang",
"Kai",
""
],
[
"Li",
"Qingyuan",
""
],
[
"Lou",
"Renze",
""
]
] |
2401.03128 | Xuran Hu | Xuran Hu, Mingzhe Zhu, Yuanjing Liu, Zhenpeng Feng and LJubisa
Stankovic | Manifold-based Shapley for SAR Recognization Network Explanation | 5 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable artificial intelligence (XAI) holds immense significance in
enhancing the deep neural network's transparency and credibility, particularly
in some risky and high-cost scenarios, like synthetic aperture radar (SAR).
Shapley is a game-based explanation technique with robust mathematical
foundations. However, Shapley assumes that the model's features are independent, rendering Shapley explanations invalid for high-dimensional models. This study
introduces a manifold-based Shapley method by projecting high-dimensional
features into low-dimensional manifold features and subsequently obtaining
Fusion-Shap, which aims at (1) addressing the issue of erroneous explanations
encountered by traditional Shap; (2) resolving the challenge of
interpretability that traditional Shap faces in complex scenarios.
| [
{
"version": "v1",
"created": "Sat, 6 Jan 2024 05:26:20 GMT"
}
] | 1,704,758,400,000 | [
[
"Hu",
"Xuran",
""
],
[
"Zhu",
"Mingzhe",
""
],
[
"Liu",
"Yuanjing",
""
],
[
"Feng",
"Zhenpeng",
""
],
[
"Stankovic",
"LJubisa",
""
]
] |
2401.03188 | Justus Renkhoff | Justus Renkhoff, Ke Feng, Marc Meier-Doernberg, Alvaro Velasquez,
Houbing Herbert Song | A Survey on Verification and Validation, Testing and Evaluations of
Neurosymbolic Artificial Intelligence | 16 pages, 8 figures | null | 10.1109/TAI.2024.3351798 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurosymbolic artificial intelligence (AI) is an emerging branch of AI that
combines the strengths of symbolic AI and sub-symbolic AI. A major drawback of
sub-symbolic AI is that it acts as a "black box", meaning that predictions are
difficult to explain, making the testing & evaluation (T&E) and validation &
verification (V&V) processes of a system that uses sub-symbolic AI a challenge.
Since neurosymbolic AI combines the advantages of both symbolic and
sub-symbolic AI, this survey explores how neurosymbolic applications can ease
the V&V process. This survey considers two taxonomies of neurosymbolic AI,
evaluates them, and analyzes which algorithms are commonly used as the symbolic
and sub-symbolic components in current applications. Additionally, an overview
of current techniques for the T&E and V&V processes of these components is
provided. Furthermore, it is investigated how the symbolic part is used for T&E
and V&V purposes in current neurosymbolic applications. Our research shows that
neurosymbolic AI has great potential to ease the T&E and V&V processes of
sub-symbolic AI by leveraging the possibilities of symbolic AI. Additionally,
the applicability of current T&E and V&V methods to neurosymbolic AI is
assessed, and how different neurosymbolic architectures can impact these
methods is explored. It is found that current T&E and V&V techniques are partly
sufficient to test, evaluate, verify, or validate the symbolic and sub-symbolic
part of neurosymbolic applications independently, while some of them use
approaches where current T&E and V&V methods are not applicable by default, and
adjustments or even new approaches are needed. Our research shows that there is
great potential in using symbolic AI to test, evaluate, verify, or validate the
predictions of a sub-symbolic model, making neurosymbolic AI an interesting
research direction for safe, secure, and trustworthy AI.
| [
{
"version": "v1",
"created": "Sat, 6 Jan 2024 10:28:52 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jan 2024 16:54:11 GMT"
}
] | 1,704,931,200,000 | [
[
"Renkhoff",
"Justus",
""
],
[
"Feng",
"Ke",
""
],
[
"Meier-Doernberg",
"Marc",
""
],
[
"Velasquez",
"Alvaro",
""
],
[
"Song",
"Houbing Herbert",
""
]
] |
2401.03454 | Federico Castagna | Federico Castagna, Nadin Kokciyan, Isabel Sassoon, Simon Parsons,
Elizabeth Sklar | Computational Argumentation-based Chatbots: a Survey | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Chatbots are conversational software applications designed to interact
dialectically with users for a plethora of different purposes. Surprisingly,
these colloquial agents have only recently been coupled with computational
models of arguments (i.e. computational argumentation), whose aim is to
formalise, in a machine-readable format, the ordinary exchange of information
that characterises human communications. Chatbots may employ argumentation with
different degrees and in a variety of manners. The present survey sifts through
the literature to review papers concerning this kind of argumentation-based
bot, drawing conclusions about the benefits and drawbacks that this approach
entails in comparison with standard chatbots, while also envisaging possible
future development and integration with the Transformer-based architecture and
state-of-the-art Large Language models.
| [
{
"version": "v1",
"created": "Sun, 7 Jan 2024 11:20:42 GMT"
}
] | 1,704,758,400,000 | [
[
"Castagna",
"Federico",
""
],
[
"Kokciyan",
"Nadin",
""
],
[
"Sassoon",
"Isabel",
""
],
[
"Parsons",
"Simon",
""
],
[
"Sklar",
"Elizabeth",
""
]
] |
2401.03504 | Robert M\"uller | Robert M\"uller, Hasan Turalic, Thomy Phan, Michael K\"olle, Jonas
N\"u{\ss}lein, Claudia Linnhoff-Popien | ClusterComm: Discrete Communication in Decentralized MARL using Internal
Representation Clustering | Accepted at ICAART 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the realm of Multi-Agent Reinforcement Learning (MARL), prevailing
approaches exhibit shortcomings in aligning with human learning, robustness,
and scalability. Addressing this, we introduce ClusterComm, a fully
decentralized MARL framework where agents communicate discretely without a
central control unit. ClusterComm utilizes Mini-Batch-K-Means clustering on the
last hidden layer's activations of an agent's policy network, translating them
into discrete messages. This approach outperforms no communication and competes
favorably with unbounded, continuous communication and hence poses a simple yet
effective strategy for enhancing collaborative task-solving in MARL.
| [
{
"version": "v1",
"created": "Sun, 7 Jan 2024 14:53:43 GMT"
}
] | 1,704,758,400,000 | [
[
"Müller",
"Robert",
""
],
[
"Turalic",
"Hasan",
""
],
[
"Phan",
"Thomy",
""
],
[
"Kölle",
"Michael",
""
],
[
"Nüßlein",
"Jonas",
""
],
[
"Linnhoff-Popien",
"Claudia",
""
]
] |
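The message-generation step in the abstract of 2401.03504 above, Mini-Batch K-Means over the last hidden layer's activations of a policy network with the cluster index broadcast as a discrete message, can be sketched with scikit-learn. The activation dimensionality, the number of message tokens, and the random activations standing in for a real policy network are assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
HIDDEN_DIM, N_MESSAGES = 64, 16     # hypothetical activation size and message vocabulary

# Stand-in for last-hidden-layer activations collected from an agent's policy network.
activations = rng.normal(size=(5000, HIDDEN_DIM))

codebook = MiniBatchKMeans(n_clusters=N_MESSAGES, batch_size=256, random_state=0)
codebook.fit(activations)

def encode_message(hidden_state: np.ndarray) -> int:
    """Map a single hidden state to its discrete message (the cluster index)."""
    return int(codebook.predict(hidden_state.reshape(1, -1))[0])

msg = encode_message(rng.normal(size=HIDDEN_DIM))
print("discrete message broadcast to the other agents:", msg)
```

Because each agent only transmits a small integer, the communication channel stays discrete and low-bandwidth, which is the point of the comparison against unbounded continuous messages in the abstract.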
2401.03529 | Evan Ryan Gunter | Evan Ryan Gunter (1), Yevgeny Liokumovich (2), Victoria Krakovna (3)
((1) ML Alignment & Theory Scholars (MATS), (2) University of Toronto, (3)
Google DeepMind) | Quantifying stability of non-power-seeking in artificial agents | 37 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We investigate the question: if an AI agent is known to be safe in one
setting, is it also safe in a new setting similar to the first? This is a core
question of AI alignment--we train and test models in a certain environment,
but deploy them in another, and we need to guarantee that models that seem safe
in testing remain so in deployment. Our notion of safety is based on
power-seeking--an agent which seeks power is not safe. In particular, we focus
on a crucial type of power-seeking: resisting shutdown. We model agents as
policies for Markov decision processes, and show (in two cases of interest)
that not resisting shutdown is "stable": if an MDP has certain policies which
don't avoid shutdown, the corresponding policies for a similar MDP also don't
avoid shutdown. We also show that there are natural cases where safety is _not_
stable--arbitrarily small perturbations may result in policies which never shut
down. In our first case of interest--near-optimal policies--we use a
bisimulation metric on MDPs to prove that small perturbations won't make the
agent take longer to shut down. Our second case of interest is policies for
MDPs satisfying certain constraints which hold for various models (including
language models). Here, we demonstrate a quantitative bound on how fast the
probability of not shutting down can increase: by defining a metric on MDPs;
proving that the probability of not shutting down, as a function on MDPs, is
lower semicontinuous; and bounding how quickly this function decreases.
| [
{
"version": "v1",
"created": "Sun, 7 Jan 2024 15:57:38 GMT"
}
] | 1,704,758,400,000 | [
[
"Gunter",
"Evan Ryan",
""
],
[
"Liokumovich",
"Yevgeny",
""
],
[
"Krakovna",
"Victoria",
""
]
] |
2401.03546 | Shivam Goel | Shivam Goel, Yichen Wei, Panagiotis Lymperopoulos, Matthias Scheutz,
Jivko Sinapov | NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents
Designed for Open Worlds | Accepted at AAMAS-2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As AI agents leave the lab and venture into the real world as autonomous
vehicles, delivery robots, and cooking robots, it is increasingly necessary to
design and comprehensively evaluate algorithms that tackle the ``open-world''.
To this end, we introduce NovelGym, a flexible and adaptable ecosystem designed
to simulate gridworld environments, serving as a robust platform for
benchmarking reinforcement learning (RL) and hybrid planning and learning
agents in open-world contexts. The modular architecture of NovelGym facilitates
rapid creation and modification of task environments, including multi-agent
scenarios, with multiple environment transformations, thus providing a dynamic
testbed for researchers to develop open-world AI agents.
| [
{
"version": "v1",
"created": "Sun, 7 Jan 2024 17:13:28 GMT"
}
] | 1,704,758,400,000 | [
[
"Goel",
"Shivam",
""
],
[
"Wei",
"Yichen",
""
],
[
"Lymperopoulos",
"Panagiotis",
""
],
[
"Scheutz",
"Matthias",
""
],
[
"Sinapov",
"Jivko",
""
]
] |
2401.04812 | Zhizhen Qin | Yaoguang Zhai, Zhizhen Qin, Sicun Gao | Sample-and-Bound for Non-Convex Optimization | Published at AAAI 2024. Code is available at
https://github.com/aaucsd/MCIR | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard approaches for global optimization of non-convex functions, such as
branch-and-bound, maintain partition trees to systematically prune the domain.
The tree size grows exponentially in the number of dimensions. We propose new
sampling-based methods for non-convex optimization that adapt Monte Carlo Tree
Search (MCTS) to improve efficiency. Instead of the standard use of visitation
count in Upper Confidence Bounds, we utilize numerical overapproximations of
the objective as an uncertainty metric, and also take into account sampled
estimates of first-order and second-order information. The Monte Carlo tree in
our approach avoids the usual fixed combinatorial patterns in growing the tree,
and aggressively zooms into the promising regions, while still balancing
exploration and exploitation. We evaluate the proposed algorithms on
high-dimensional non-convex optimization benchmarks against competitive
baselines and analyze the effects of the hyperparameters.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2024 20:45:47 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Jan 2024 21:18:46 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Feb 2024 00:18:16 GMT"
}
] | 1,708,473,600,000 | [
[
"Zhai",
"Yaoguang",
""
],
[
"Qin",
"Zhizhen",
""
],
[
"Gao",
"Sicun",
""
]
] |
2401.05743 | Lorenzo Marconi | Lorenzo Marconi, Riccardo Rosati | Consistent Query Answering for Existential Rules with Closed Predicates | 31 pages. arXiv admin note: text overlap with arXiv:2207.09198 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Consistent Query Answering (CQA) is an inconsistency-tolerant approach to
data access in knowledge bases and databases. The goal of CQA is to provide
meaningful (consistent) answers to queries even in the presence of inconsistent
information, e.g. a database whose data conflict with meta-data (typically the
database integrity constraints). The semantics of CQA is based on the notion of
repair, that is, a consistent version of the initial, inconsistent database
that is obtained through minimal modifications. We study CQA in databases with
data dependencies expressed by existential rules. More specifically, we focus
on the broad class of disjunctive embedded dependencies with inequalities
(DEDs), which extend both tuple-generating dependencies and equality-generating
dependencies. We first focus on the case when the database predicates are
closed, i.e. the database is assumed to have complete knowledge about such
predicates, thus no tuple addition is possible to repair the database. In such
a scenario, we provide a detailed analysis of the data complexity of CQA and
associated tasks (repair checking) under different semantics (AR and IAR) and
for different classes of existential rules. In particular, we consider the
classes of acyclic, linear, full, sticky and guarded DEDs, and their
combinations.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2024 08:48:40 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Apr 2024 14:14:08 GMT"
}
] | 1,714,003,200,000 | [
[
"Marconi",
"Lorenzo",
""
],
[
"Rosati",
"Riccardo",
""
]
] |
2401.05960 | Xijun Li | Xijun Li, Fangzhou Zhu, Hui-Ling Zhen, Weilin Luo, Meng Lu, Yimin
Huang, Zhenan Fan, Zirui Zhou, Yufei Kuang, Zhihai Wang, Zijie Geng, Yang Li,
Haoyang Liu, Zhiwu An, Muming Yang, Jianshu Li, Jie Wang, Junchi Yan, Defeng
Sun, Tao Zhong, Yong Zhang, Jia Zeng, Mingxuan Yuan, Jianye Hao, Jun Yao, Kun
Mao | Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an era of digital ubiquity, efficient resource management and
decision-making are paramount across numerous industries. To this end, we
present a comprehensive study on the integration of machine learning (ML)
techniques into Huawei Cloud's OptVerse AI Solver, which aims to mitigate the
scarcity of real-world mathematical programming instances, and to surpass the
capabilities of traditional optimization techniques. We showcase our methods
for generating complex SAT and MILP instances utilizing generative models that
mirror multifaceted structures of real-world problems. Furthermore, we introduce
a training framework leveraging augmentation policies to maintain solvers'
utility in dynamic environments. Besides the data generation and augmentation,
our proposed approaches also include novel ML-driven policies for personalized
solver strategies, with an emphasis on applications like graph convolutional
networks for initial basis selection and reinforcement learning for advanced
presolving and cut selection. Additionally, we detail the incorporation of
state-of-the-art parameter tuning algorithms which markedly elevate solver
performance. Compared with traditional solvers such as Cplex and SCIP, our
ML-augmented OptVerse AI Solver demonstrates superior speed and precision
across both established benchmarks and real-world scenarios, reinforcing the
practical imperative and effectiveness of machine learning techniques in
mathematical programming solvers.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2024 15:02:15 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jan 2024 13:26:09 GMT"
}
] | 1,705,536,000,000 | [
[
"Li",
"Xijun",
""
],
[
"Zhu",
"Fangzhou",
""
],
[
"Zhen",
"Hui-Ling",
""
],
[
"Luo",
"Weilin",
""
],
[
"Lu",
"Meng",
""
],
[
"Huang",
"Yimin",
""
],
[
"Fan",
"Zhenan",
""
],
[
"Zhou",
"Zirui",
""
],
[
"Kuang",
"Yufei",
""
],
[
"Wang",
"Zhihai",
""
],
[
"Geng",
"Zijie",
""
],
[
"Li",
"Yang",
""
],
[
"Liu",
"Haoyang",
""
],
[
"An",
"Zhiwu",
""
],
[
"Yang",
"Muming",
""
],
[
"Li",
"Jianshu",
""
],
[
"Wang",
"Jie",
""
],
[
"Yan",
"Junchi",
""
],
[
"Sun",
"Defeng",
""
],
[
"Zhong",
"Tao",
""
],
[
"Zhang",
"Yong",
""
],
[
"Zeng",
"Jia",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Hao",
"Jianye",
""
],
[
"Yao",
"Jun",
""
],
[
"Mao",
"Kun",
""
]
] |
2401.06080 | Rui Zheng | Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang
Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu,
Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan,
Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan
Wu, Yu-Gang Jiang | Secrets of RLHF in Large Language Models Part II: Reward Modeling | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning from Human Feedback (RLHF) has become a crucial
technology for aligning language models with human values and intentions,
enabling models to produce more helpful and harmless responses. Reward models
are trained as proxies for human preferences to drive reinforcement learning
optimization. While reward models are often considered central to achieving
high performance, they face the following challenges in practical applications:
(1) Incorrect and ambiguous preference pairs in the dataset may prevent the
reward model from accurately capturing human intent. (2) Reward models trained
on data from a specific distribution often struggle to generalize to examples
outside that distribution and are not suitable for iterative RLHF training.
In this report, we attempt to address these two issues. (1) From a data
perspective, we propose a method to measure the strength of preferences within
the data, based on a voting mechanism of multiple reward models. Experimental
results confirm that data with varying preference strengths have different
impacts on reward model performance. We introduce a series of novel methods to
mitigate the influence of incorrect and ambiguous preferences in the dataset
and fully leverage high-quality preference data. (2) From an algorithmic
standpoint, we introduce contrastive learning to enhance the ability of reward
models to distinguish between chosen and rejected responses, thereby improving
model generalization. Furthermore, we employ meta-learning to enable the reward
model to maintain the ability to differentiate subtle differences in
out-of-distribution samples, and this approach can be utilized for iterative
RLHF optimization.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2024 17:56:59 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Jan 2024 09:46:10 GMT"
}
] | 1,705,276,800,000 | [
[
"Wang",
"Binghai",
""
],
[
"Zheng",
"Rui",
""
],
[
"Chen",
"Lu",
""
],
[
"Liu",
"Yan",
""
],
[
"Dou",
"Shihan",
""
],
[
"Huang",
"Caishuang",
""
],
[
"Shen",
"Wei",
""
],
[
"Jin",
"Senjie",
""
],
[
"Zhou",
"Enyu",
""
],
[
"Shi",
"Chenyu",
""
],
[
"Gao",
"Songyang",
""
],
[
"Xu",
"Nuo",
""
],
[
"Zhou",
"Yuhao",
""
],
[
"Fan",
"Xiaoran",
""
],
[
"Xi",
"Zhiheng",
""
],
[
"Zhao",
"Jun",
""
],
[
"Wang",
"Xiao",
""
],
[
"Ji",
"Tao",
""
],
[
"Yan",
"Hang",
""
],
[
"Shen",
"Lixing",
""
],
[
"Chen",
"Zhan",
""
],
[
"Gui",
"Tao",
""
],
[
"Zhang",
"Qi",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Huang",
"Xuanjing",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
2401.06256 | Evgeny Belousov | Artem Sukhobokov, Evgeny Belousov, Danila Gromozdov, Anna Zenger and
Ilya Popov | A Universal Knowledge Model and Cognitive Architecture for Prototyping
AGI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The article identifies 42 cognitive architectures for creating artificial
general intelligence (AGI) and proposes a set of interrelated functional
blocks that an agent approaching AGI in its capabilities should possess. Since
the required set of blocks is not found in any of the existing architectures,
the article proposes a new cognitive architecture for intelligent systems
approaching AGI in their capabilities. As one of the key solutions within the
framework of the architecture, a universal method of knowledge representation
is proposed, which allows combining various non-formalized, partially and fully
formalized methods of knowledge representation in a single knowledge base, such
as texts in natural languages, images, audio and video recordings, graphs,
algorithms, databases, neural networks, knowledge graphs, ontologies, frames,
essence-property-relation models, production systems, predicate calculus
models, conceptual models, and others. To combine and structure various
fragments of knowledge, archigraph models are used, constructed as a
development of annotated metagraphs. As components, the cognitive architecture
being developed includes machine consciousness, machine subconsciousness,
blocks of interaction with the external environment, a goal management block,
an emotional control system, a block of social interaction, a block of
reflection, an ethics block and a worldview block, a learning block, a
monitoring block, blocks for stating and solving problems, and a
self-organization and meta-learning block.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2024 21:05:02 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Jan 2024 15:37:28 GMT"
},
{
"version": "v3",
"created": "Sat, 27 Jan 2024 19:13:03 GMT"
}
] | 1,706,572,800,000 | [
[
"Sukhobokov",
"Artem",
""
],
[
"Belousov",
"Evgeny",
""
],
[
"Gromozdov",
"Danila",
""
],
[
"Zenger",
"Anna",
""
],
[
"Popov",
"Ilya",
""
]
] |
2401.06375 | Gordon Banks | Gordon Banks, Gates Bierhuizen, Katherine McCrum, Ellen Wengert | Cognitive BPM as an Equalizer: Improving Access and Efficiency for
Employees with (and without) Cognitive Disabilities | 7 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We examine ProcessGPT, an AI model designed to automate, augment, and improve
business processes, to study the challenges of managing business processes
within the cognitive limitations of the human workforce, particularly
individuals with cognitive disabilities. ProcessGPT provides a blueprint for
designing efficient business processes that take into account human cognitive
limitations. By viewing this through the lens of cognitive disabilities, we
show that ProcessGPT improves process usability for individuals with and
without cognitive disabilities. We also demonstrate that organizations
implementing ProcessGPT-like capabilities will realize increased productivity,
morale, and inclusion.
| [
{
"version": "v1",
"created": "Fri, 12 Jan 2024 04:54:06 GMT"
}
] | 1,705,276,800,000 | [
[
"Banks",
"Gordon",
""
],
[
"Bierhuizen",
"Gates",
""
],
[
"McCrum",
"Katherine",
""
],
[
"Wengert",
"Ellen",
""
]
] |
2401.06379 | Matthew Daggitt Dr | Matthew L. Daggitt, Wen Kokke, Robert Atkey, Natalia Slusarz, Luca
Arnaboldi, Ekaterina Komendantskaya | Vehicle: Bridging the Embedding Gap in the Verification of
Neuro-Symbolic Programs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neuro-symbolic programs -- programs containing both machine learning
components and traditional symbolic code -- are becoming increasingly
widespread. However, we believe that there is still a lack of a general
methodology for verifying these programs whose correctness depends on the
behaviour of the machine learning components. In this paper, we identify the
``embedding gap'' -- the lack of techniques for linking semantically-meaningful
``problem-space'' properties to equivalent ``embedding-space'' properties -- as
one of the key issues, and describe Vehicle, a tool designed to facilitate the
end-to-end verification of neural-symbolic programs in a modular fashion.
Vehicle provides a convenient language for specifying ``problem-space''
properties of neural networks and declaring their relationship to the
``embedding-space'', and a powerful compiler that automates interpretation of
these properties in the language of a chosen machine-learning training
environment, neural network verifier, and interactive theorem prover. We
demonstrate Vehicle's utility by using it to formally verify the safety of a
simple autonomous car equipped with a neural network controller.
| [
{
"version": "v1",
"created": "Fri, 12 Jan 2024 05:01:47 GMT"
}
] | 1,705,276,800,000 | [
[
"Daggitt",
"Matthew L.",
""
],
[
"Kokke",
"Wen",
""
],
[
"Atkey",
"Robert",
""
],
[
"Slusarz",
"Natalia",
""
],
[
"Arnaboldi",
"Luca",
""
],
[
"Komendantskaya",
"Ekaterina",
""
]
] |
2401.06471 | Yuwei Wang | Yuwei Wang and Yi Zeng | A Brain-inspired Computational Model for Human-like Concept Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concept learning is a fundamental aspect of human cognition and plays a
critical role in mental processes such as categorization, reasoning, memory,
and decision-making. Researchers across various disciplines have shown
consistent interest in the process of concept acquisition in individuals. To
elucidate the mechanisms involved in human concept learning, this study
examines the findings from computational neuroscience and cognitive psychology.
These findings indicate that the brain's representation of concepts relies on
two essential components: multisensory representation and text-derived
representation. These two types of representations are coordinated by a
semantic control system, ultimately leading to the acquisition of concepts.
Drawing inspiration from this mechanism, the study develops a human-like
computational model for concept learning based on spiking neural networks. By
effectively addressing the challenges posed by diverse sources and imbalanced
dimensionality of the two forms of concept representations, the study
successfully attains human-like concept representations. Tests involving
similar concepts demonstrate that our model, which mimics the way humans learn
concepts, yields representations that closely align with human cognition.
| [
{
"version": "v1",
"created": "Fri, 12 Jan 2024 09:32:51 GMT"
}
] | 1,705,276,800,000 | [
[
"Wang",
"Yuwei",
""
],
[
"Zeng",
"Yi",
""
]
] |
2401.06793 | Mikhail Moshkov | Kerven Durdymyradov and Mikhail Moshkov | Greedy Algorithm for Inference of Decision Trees from Decision Rule
Systems | arXiv admin note: substantial text overlap with arXiv:2305.01721,
arXiv:2302.07063 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision trees and decision rule systems play important roles as classifiers,
knowledge representation tools, and algorithms. They are easily interpretable
models for data analysis, making them widely used and studied in computer
science. Understanding the relationships between these two models is an
important task in this field. There are well-known methods for converting
decision trees into systems of decision rules. In this paper, we consider the
inverse transformation problem, which is not so simple. Instead of constructing
an entire decision tree, our study focuses on a greedy polynomial time
algorithm that simulates the operation of a decision tree on a given tuple of
attribute values.
| [
{
"version": "v1",
"created": "Mon, 8 Jan 2024 09:28:55 GMT"
}
] | 1,705,449,600,000 | [
[
"Durdymyradov",
"Kerven",
""
],
[
"Moshkov",
"Mikhail",
""
]
] |
2401.06801 | Julia Li | Ye Li | Graph-of-Thought: Utilizing Large Language Models to Solve Complex and
Dynamic Business Problems | Keywords: Graph-of-Thought (GoT), Workflow Automation, Large Language
Models (LLMs), Task Execution, Data-Driven Decision Making, Complexity
Management | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This paper presents Graph-of-Thought (GoT), a new model for workflow
automation that enhances the flexibility and efficiency of Large Language
Models (LLMs) in complex task execution. GoT advances beyond traditional linear
and tree-like cognitive models with a graph structure that enables dynamic path
selection. The open-source engine GoTFlow demonstrates the practical
application of GoT, facilitating automated, data-driven decision-making across
various domains. Despite challenges in complexity and transparency, GoTFlow's
potential for improving business processes is significant, promising
advancements in both efficiency and decision quality with continuous
development.
| [
{
"version": "v1",
"created": "Wed, 10 Jan 2024 05:32:20 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Feb 2024 03:48:01 GMT"
}
] | 1,708,387,200,000 | [
[
"Li",
"Ye",
""
]
] |
2401.06810 | Srishti Gupta | Srishti Gupta, Piyush Kumar Garg, Sourav Kumar Dandapat | TONE: A 3-Tiered ONtology for Emotion analysis | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Emotions have played an important part in many sectors, including psychology,
medicine, mental health, computer science, and so on, and categorizing them has
proven extremely useful in separating one emotion from another. Emotions can be
classified using the following two methods: (1) The supervised method's
efficiency is strongly dependent on the size and domain of the data collected.
A categorization established using relevant data from one domain may not work
well in another. (2) An unsupervised method that uses either domain expertise
or a knowledge base of emotion types already exists. Though this second
approach provides a suitable and generic categorization of emotions and is
cost-effective, the literature doesn't possess a publicly available knowledge
base that can be directly applied to any emotion categorization-related task.
This pushes us to create a knowledge base that can be used for emotion
classification across domains, and ontology is often used for this purpose. In
this study, we provide TONE, an emotion-based ontology that effectively creates
an emotional hierarchy based on Dr. Gerrod Parrot's group of emotions. In
addition to ontology development, we introduce a semi-automated vocabulary
construction process to generate a detailed collection of terms for emotions at
each tier of the hierarchy. We also demonstrate automated methods for
establishing three sorts of dependencies in order to develop linkages between
different emotions. Our human and automatic evaluation results show the
ontology's quality. Furthermore, we describe three distinct use cases that
demonstrate the applicability of our ontology.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2024 04:23:08 GMT"
}
] | 1,705,449,600,000 | [
[
"Gupta",
"Srishti",
""
],
[
"Garg",
"Piyush Kumar",
""
],
[
"Dandapat",
"Sourav Kumar",
""
]
] |
2401.07426 | Chao Lei | Chao Lei, Nir Lipovetzky, Krista A. Ehinger | Generalized Planning for the Abstraction and Reasoning Corpus | Accepted at AAAI 2024 (extended version) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Abstraction and Reasoning Corpus (ARC) is a general artificial
intelligence benchmark that poses difficulties for pure machine learning
methods due to its requirement for fluid intelligence with a focus on reasoning
and abstraction. In this work, we introduce an ARC solver, Generalized Planning
for Abstract Reasoning (GPAR). It casts an ARC problem as a generalized
planning (GP) problem, where a solution is formalized as a planning program
with pointers. We express each ARC problem using the standard Planning Domain
Definition Language (PDDL) coupled with external functions representing
object-centric abstractions. We show how to scale up GP solvers via domain
knowledge specific to ARC in the form of restrictions over the actions model,
predicates, arguments and valid structure of planning programs. Our experiments
demonstrate that GPAR outperforms the state-of-the-art solvers on the
object-centric tasks of the ARC, showing the effectiveness of GP and the
expressiveness of PDDL to model ARC problems. The challenges provided by the
ARC benchmark motivate research to advance existing GP solvers and understand
new relations with other planning computational models. Code is available at
github.com/you68681/GPAR.
| [
{
"version": "v1",
"created": "Mon, 15 Jan 2024 02:25:00 GMT"
}
] | 1,705,449,600,000 | [
[
"Lei",
"Chao",
""
],
[
"Lipovetzky",
"Nir",
""
],
[
"Ehinger",
"Krista A.",
""
]
] |
2401.07722 | Junlin Lu | Junlin Lu, Patrick Mannion, Karl Mason | Inferring Preferences from Demonstrations in Multi-Objective Residential
Energy Management | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | It is often challenging for a user to articulate their preferences accurately
in multi-objective decision-making problems. Demonstration-based preference
inference (DemoPI) is a promising approach to mitigate this problem.
Understanding the behaviours and values of energy customers is an example of a
scenario where preference inference can be used to gain insights into the
values of energy customers with multiple objectives, e.g. cost and comfort. In
this work, we applied the state-of-the-art DemoPI method, i.e., the dynamic
weight-based preference inference (DWPI) algorithm in a multi-objective
residential energy consumption setting to infer preferences from energy
consumption demonstrations by simulated users following a rule-based approach.
According to our experimental results, the DWPI model achieves accurate
demonstration-based preference inference in three scenarios. These advancements
enhance the usability and effectiveness of multi-objective reinforcement
learning (MORL) in energy management, enabling more intuitive and user-friendly
preference specifications, and opening the door for DWPI to be applied in
real-world settings.
| [
{
"version": "v1",
"created": "Mon, 15 Jan 2024 14:36:59 GMT"
}
] | 1,705,449,600,000 | [
[
"Lu",
"Junlin",
""
],
[
"Mannion",
"Patrick",
""
],
[
"Mason",
"Karl",
""
]
] |
2401.08879 | Timotheus Kampik | Timotheus Kampik, Nico Potyka, Xiang Yin, Kristijonas \v{C}yras,
Francesca Toni | Contribution Functions for Quantitative Bipolar Argumentation Graphs: A
Principle-based Analysis | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a principle-based analysis of contribution functions for
quantitative bipolar argumentation graphs that quantify the contribution of one
argument to another. The introduced principles formalise the intuitions
underlying different contribution functions as well as expectations one would
have regarding the behaviour of contribution functions in general. As none of
the covered contribution functions satisfies all principles, our analysis can
serve as a tool that enables the selection of the most suitable function based
on the requirements of a given use case.
| [
{
"version": "v1",
"created": "Tue, 16 Jan 2024 23:27:42 GMT"
}
] | 1,705,536,000,000 | [
[
"Kampik",
"Timotheus",
""
],
[
"Potyka",
"Nico",
""
],
[
"Yin",
"Xiang",
""
],
[
"Čyras",
"Kristijonas",
""
],
[
"Toni",
"Francesca",
""
]
] |
2401.09444 | Isabelle Kuhlmann | Lars Bengel, Lydia Bl\"umel, Elfia Bezou-Vrakatseli, Federico
Castagna, Giulia D'Agostino, Isabelle Kuhlmann, Jack Mumford, Daphne
Odekerken, Fabrizio Russo, Stefan Sarkadi, Madeleine Waller, Andreas Xydis | Online Handbook of Argumentation for AI: Volume 4 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This volume contains revised versions of the papers selected for the fourth
volume of the Online Handbook of Argumentation for AI (OHAAI). Previously,
formal theories of argument and argument interaction have been proposed and
studied, and this has led to the more recent study of computational models of
argument. Argumentation, as a field within artificial intelligence (AI), is
highly relevant for researchers interested in symbolic representations of
knowledge and defeasible reasoning. The purpose of this handbook is to provide
an open access and curated anthology for the argumentation research community.
OHAAI is designed to serve as a research hub to keep track of the latest and
upcoming PhD-driven research on the theory and application of argumentation in
all areas related to AI.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2023 16:11:10 GMT"
}
] | 1,705,622,400,000 | [
[
"Bengel",
"Lars",
""
],
[
"Blümel",
"Lydia",
""
],
[
"Bezou-Vrakatseli",
"Elfia",
""
],
[
"Castagna",
"Federico",
""
],
[
"D'Agostino",
"Giulia",
""
],
[
"Kuhlmann",
"Isabelle",
""
],
[
"Mumford",
"Jack",
""
],
[
"Odekerken",
"Daphne",
""
],
[
"Russo",
"Fabrizio",
""
],
[
"Sarkadi",
"Stefan",
""
],
[
"Waller",
"Madeleine",
""
],
[
"Xydis",
"Andreas",
""
]
] |
2401.09448 | Mark Atkins | Mark A. Atkins | Tumbug: A pictorial, universal knowledge representation method | 346 pages, 334 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Since the key to artificial general intelligence (AGI) is commonly believed
to be commonsense reasoning (CSR) or, roughly equivalently, discovery of a
knowledge representation method (KRM) that is particularly suitable for CSR,
the author developed a custom KRM for CSR. This novel KRM called Tumbug was
designed to be pictorial in nature because there exists increasing evidence
that the human brain uses some pictorial type of KRM, and no well-known prior
research in AGI has researched this KRM possibility. Tumbug is somewhat similar
to Roger Schank's Conceptual Dependency (CD) theory, but Tumbug is pictorial
and uses about 30 components based on fundamental concepts from the sciences
and human life, in contrast to CD theory, which is textual and uses about 17
components (= 6 Primitive Conceptual Categories + 11 Primitive Acts) based
mainly on human-oriented activities. All the Building Blocks of Tumbug were
found to generalize to only five Basic Building Blocks that exactly correspond
to the three components {O, A, V} of traditional Object-Attribute-Value
representation plus two new components {C, S}, which are Change and System.
Collectively this set of five components, called "SCOVA," seems to be a
universal foundation for all knowledge representation.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2023 05:21:37 GMT"
}
] | 1,705,622,400,000 | [
[
"Atkins",
"Mark A.",
""
]
] |
2401.09491 | Ida Momennejad | Ida Momennejad | Memory, Space, and Planning: Multiscale Predictive Representations | To be published as a chapter in an edited volume by Oxford University
Press (Editors: Sara Aronowitz and Lynn Nadel) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Memory is inherently entangled with prediction and planning. Flexible
behavior in biological and artificial agents depends on the interplay of
learning from the past and predicting the future in ever-changing environments.
This chapter reviews computational, behavioral, and neural evidence suggesting
these processes rely on learning the relational structure of experiences, known
as cognitive maps, and draws two key takeaways. First, that these memory
structures are organized as multiscale, compact predictive representations in
hippocampal and prefrontal cortex, or PFC, hierarchies. Second, we argue that
such predictive memory structures are crucial to the complementary functions of
the hippocampus and PFC, both for enabling a recall of detailed and coherent
past episodes as well as generalizing experiences at varying scales for
efficient prediction and planning. These insights advance our understanding of
memory and planning mechanisms in the brain and hold significant implications
for advancing artificial intelligence systems.
| [
{
"version": "v1",
"created": "Tue, 16 Jan 2024 21:46:43 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2024 21:01:23 GMT"
}
] | 1,708,473,600,000 | [
[
"Momennejad",
"Ida",
""
]
] |
2401.09851 | Cheng Wang | Cheng Wang, Chuwen Wang, Wang Zhang, Shirong Zeng, Yu Zhao, Ronghui
Ning, Changjun Jiang | Behavioural Rehearsing Illuminates Scientific Problems of Organised
Complexity | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As artificial intelligence becomes increasingly prevalent in scientific
research, data-driven methodologies appear to overshadow traditional methods in
resolving scientific problems. In this Perspective, we revisit a classic
classification of scientific problems and rethink the evolution of scientific
paradigms from the standpoint of data, algorithms, and computational power. We
observe that the strengths of new paradigms have expanded the range of
resolvable scientific problems, but the continued advancement of data,
algorithms, and computational power is unlikely to bring a new paradigm. To
tackle unresolved problems of organised complexity in more intricate systems,
we argue that the integration of paradigms is a promising approach.
Consequently, we propose behavioural rehearsing, checking what will happen in
such systems through repeated simulation. One of the methodologies to
realise it, sophisticated behavioural simulation (SBS), represents a higher
level of paradigm integration based on foundational models to simulate complex
social systems involving sophisticated human strategies and behaviours. SBS
extends beyond the capabilities of traditional agent-based modelling simulation
(ABMS), and therefore, makes behavioural rehearsing a potential solution to
problems of organised complexity in complex human systems.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2024 10:05:52 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Mar 2024 03:24:06 GMT"
},
{
"version": "v3",
"created": "Thu, 9 May 2024 13:19:48 GMT"
}
] | 1,715,299,200,000 | [
[
"Wang",
"Cheng",
""
],
[
"Wang",
"Chuwen",
""
],
[
"Zhang",
"Wang",
""
],
[
"Zeng",
"Shirong",
""
],
[
"Zhao",
"Yu",
""
],
[
"Ning",
"Ronghui",
""
],
[
"Jiang",
"Changjun",
""
]
] |
2401.09966 | Fan Shi | Fan Shi, Bin Li, Xiangyang Xue | Towards Generative Abstract Reasoning: Completing Raven's Progressive
Matrix via Rule Abstraction and Selection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Endowing machines with abstract reasoning ability has been a long-term
research topic in artificial intelligence. Raven's Progressive Matrix (RPM) is
widely used to probe abstract visual reasoning in machine intelligence, where
models will analyze the underlying rules and select one image from candidates
to complete the image matrix. Participants in RPM tests can show powerful
reasoning ability by inferring and combining attribute-changing rules and
imagining the missing images at arbitrary positions of a matrix. However,
existing solvers can hardly manifest such an ability in realistic RPM tests. In
this paper, we propose a deep latent variable model for answer generation
problems through Rule AbstractIon and SElection (RAISE). RAISE can encode image
attributes into latent concepts and abstract atomic rules that act on the
latent concepts. When generating answers, RAISE selects one atomic rule out of
the global knowledge set for each latent concept to constitute the underlying
rule of an RPM. In the experiments of bottom-right and arbitrary-position
answer generation, RAISE outperforms the compared solvers in most
configurations of realistic RPM datasets. In the odd-one-out task and two
held-out configurations, RAISE can leverage acquired latent concepts and atomic
rules to find the rule-breaking image in a matrix and handle problems with
unseen combinations of rules and attributes.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2024 13:28:44 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Mar 2024 13:29:26 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Apr 2024 10:53:43 GMT"
}
] | 1,713,225,600,000 | [
[
"Shi",
"Fan",
""
],
[
"Li",
"Bin",
""
],
[
"Xue",
"Xiangyang",
""
]
] |
2401.10420 | Tristan Cazenave | Tristan Cazenave | Generalized Nested Rollout Policy Adaptation with Limited Repetitions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generalized Nested Rollout Policy Adaptation (GNRPA) is a Monte Carlo search
algorithm for optimizing a sequence of choices. We propose to improve on GNRPA
by avoiding overly deterministic policies that repeatedly find the same
sequence of choices. We do so by limiting the number of repetitions of the best
sequence found at a given level. Experiments show that it improves the
algorithm for three different combinatorial problems: Inverse RNA Folding, the
Traveling Salesman Problem with Time Windows and the Weak Schur problem.
| [
{
"version": "v1",
"created": "Thu, 18 Jan 2024 23:19:47 GMT"
}
] | 1,705,881,600,000 | [
[
"Cazenave",
"Tristan",
""
]
] |
2401.10431 | Tristan Cazenave | Tristan Cazenave | Learning a Prior for Monte Carlo Search by Replaying Solutions to
Combinatorial Problems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Monte Carlo Search gives excellent results in multiple difficult
combinatorial problems. Using a prior to perform non-uniform playouts during
the search greatly improves the results compared to uniform playouts. Handmade
heuristics tailored to the combinatorial problem are often used as priors. We
propose a method to automatically compute a prior. It uses statistics on solved
problems. It is a simple and general method that incurs no computational cost
at playout time and that brings large performance gains. The method is applied
to three difficult combinatorial problems: Latin Square Completion, Kakuro, and
Inverse RNA Folding.
| [
{
"version": "v1",
"created": "Fri, 19 Jan 2024 00:22:31 GMT"
}
] | 1,705,881,600,000 | [
[
"Cazenave",
"Tristan",
""
]
] |
2401.10568 | Siyuan Qi | Siyuan Qi, Shuo Chen, Yexin Li, Xiangyu Kong, Junqi Wang, Bangcheng
Yang, Pring Wong, Yifan Zhong, Xiaoyuan Zhang, Zhaowei Zhang, Nian Liu, Wei
Wang, Yaodong Yang, Song-Chun Zhu | CivRealm: A Learning and Reasoning Odyssey in Civilization for
Decision-Making Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalization of decision-making agents encompasses two fundamental
elements: learning from past experiences and reasoning in novel contexts.
However, the predominant emphasis in most interactive environments is on
learning, often at the expense of complexity in reasoning. In this paper, we
introduce CivRealm, an environment inspired by the Civilization game.
Civilization's profound alignment with human history and society necessitates
sophisticated learning, while its ever-changing situations demand strong
reasoning to generalize. Particularly, CivRealm sets up an
imperfect-information general-sum game with a changing number of players; it
presents a plethora of complex features, challenging the agent to deal with
open-ended stochastic environments that require diplomacy and negotiation
skills. Within CivRealm, we provide interfaces for two typical agent types:
tensor-based agents that focus on learning, and language-based agents that
emphasize reasoning. To catalyze further research, we present initial results
for both paradigms. The canonical RL-based agents exhibit reasonable
performance in mini-games, whereas both RL- and LLM-based agents struggle to
make substantial progress in the full game. Overall, CivRealm stands as a
unique learning and reasoning challenge for decision-making agents. The code is
available at https://github.com/bigai-ai/civrealm.
| [
{
"version": "v1",
"created": "Fri, 19 Jan 2024 09:14:11 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Mar 2024 08:24:37 GMT"
}
] | 1,710,288,000,000 | [
[
"Qi",
"Siyuan",
""
],
[
"Chen",
"Shuo",
""
],
[
"Li",
"Yexin",
""
],
[
"Kong",
"Xiangyu",
""
],
[
"Wang",
"Junqi",
""
],
[
"Yang",
"Bangcheng",
""
],
[
"Wong",
"Pring",
""
],
[
"Zhong",
"Yifan",
""
],
[
"Zhang",
"Xiaoyuan",
""
],
[
"Zhang",
"Zhaowei",
""
],
[
"Liu",
"Nian",
""
],
[
"Wang",
"Wei",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
2401.10589 | Jiongzhi Zheng | Jiongzhi Zheng and Zhuo Chen and Chu-Min Li and Kun He | Rethinking the Soft Conflict Pseudo Boolean Constraint on MaxSAT Local
Search Solvers | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | MaxSAT is an optimization version of the famous NP-complete Satisfiability
problem (SAT). Algorithms for MaxSAT mainly include complete solvers and local
search incomplete solvers. In many complete solvers, once a better solution is
found, a Soft conflict Pseudo Boolean (SPB) constraint will be generated to
force the algorithm to find better solutions. In many local search
algorithms, clause weighting is a key technique for effectively guiding the
search directions. In this paper, we propose to transfer the SPB constraint
into the clause weighting system of the local search method, leading the
algorithm to better solutions. We further propose an adaptive clause weighting
strategy that breaks the tradition of using constant values to adjust clause
weights. Based on the above methods, we propose a new local search algorithm
called SPB-MaxSAT that provides new perspectives for clause weighting on MaxSAT
local search solvers. Extensive experiments demonstrate the excellent
performance of the proposed methods.
| [
{
"version": "v1",
"created": "Fri, 19 Jan 2024 09:59:02 GMT"
}
] | 1,705,881,600,000 | [
[
"Zheng",
"Jiongzhi",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Li",
"Chu-Min",
""
],
[
"He",
"Kun",
""
]
] |
2401.10744 | Ziqiang Yuan Mr. | Ziqiang Yuan, Kaiyuan Wang, Shoutai Zhu, Ye Yuan, Jingya Zhou, Yanlin
Zhu, Wenqi Wei | FinLLMs: A Framework for Financial Reasoning Dataset Generation with
Large Language Models | Under submission of IEEE Transactions | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language models (LLMs) usually rely on extensive training datasets. In
the financial domain, creating numerical reasoning datasets that include a mix
of tables and long text often involves substantial manual annotation expenses.
To address the limited data resources and reduce the annotation cost, we
introduce FinLLMs, a method for generating financial question-answering data
based on common financial formulas using Large Language Models. First, we
compile a list of common financial formulas and construct a graph based on the
variables these formulas employ. We then augment the formula set by combining
those that share identical variables as new elements. Specifically, we explore
formulas obtained by manual annotation and merge those formulas with shared
variables by traversing the constructed graph. Finally, utilizing GPT-3.5, we
generate financial question-answering data that encompasses both tabular
information and long textual content, building on the collected formula set.
Our experiments demonstrate that synthetic data generated by FinLLMs
effectively enhances the performance of several large-scale numerical reasoning
models in the financial domain, outperforming two established benchmark
financial question-answering datasets.
| [
{
"version": "v1",
"created": "Fri, 19 Jan 2024 15:09:39 GMT"
}
] | 1,705,881,600,000 | [
[
"Yuan",
"Ziqiang",
""
],
[
"Wang",
"Kaiyuan",
""
],
[
"Zhu",
"Shoutai",
""
],
[
"Yuan",
"Ye",
""
],
[
"Zhou",
"Jingya",
""
],
[
"Zhu",
"Yanlin",
""
],
[
"Wei",
"Wenqi",
""
]
] |
2401.10904 | Florin Leon | Florin Leon | A Review of Findings from Neuroscience and Cognitive Psychology as
Possible Inspiration for the Path to Artificial General Intelligence | 143 pages, 49 figures, 244 references | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This review aims to contribute to the quest for artificial general
intelligence by examining neuroscience and cognitive psychology methods for
potential inspiration. Despite the impressive advancements achieved by deep
learning models in various domains, they still have shortcomings in abstract
reasoning and causal understanding. Such capabilities should be ultimately
integrated into artificial intelligence systems in order to surpass data-driven
limitations and support decision making in a way more similar to human
intelligence. This work is a vertical review that attempts a wide-ranging
exploration of brain function, spanning from lower-level biological neurons,
spiking neural networks, and neuronal ensembles to higher-level concepts such
as brain anatomy, vector symbolic architectures, cognitive and categorization
models, and cognitive architectures. The hope is that these concepts may offer
insights for solutions in artificial general intelligence.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2024 09:46:36 GMT"
}
] | 1,705,968,000,000 | [
[
"Leon",
"Florin",
""
]
] |
2401.11094 | Xiao Shishi | Shishi Xiao, Liangwei Wang, Xiaojuan Ma, Wei Zeng | TypeDance: Creating Semantic Typographic Logos from Image through
Personalized Generation | 24 pages, 9 figures | null | 10.1145/3613904.3642185 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Semantic typographic logos harmoniously blend typeface and imagery to
represent semantic concepts while maintaining legibility. Conventional methods
using spatial composition and shape substitution are hindered by the
conflicting requirement for achieving seamless spatial fusion between
geometrically dissimilar typefaces and semantics. While recent advances made AI
generation of semantic typography possible, the end-to-end approaches exclude
designer involvement and disregard personalized design. This paper presents
TypeDance, an AI-assisted tool incorporating design rationales with the
generative model for personalized semantic typographic logo design. It
leverages combinable design priors extracted from uploaded image exemplars and
supports type-imagery mapping at various structural granularity, achieving
diverse aesthetic designs with flexible control. Additionally, we instantiate a
comprehensive design workflow in TypeDance, including ideation, selection,
generation, evaluation, and iteration. A two-task user evaluation, including
imitation and creation, confirmed the usability of TypeDance in design across
different usage scenarios.
| [
{
"version": "v1",
"created": "Sat, 20 Jan 2024 02:55:11 GMT"
}
] | 1,706,054,400,000 | [
[
"Xiao",
"Shishi",
""
],
[
"Wang",
"Liangwei",
""
],
[
"Ma",
"Xiaojuan",
""
],
[
"Zeng",
"Wei",
""
]
] |
2401.11472 | Bruno Yun | Assaf Libman, Nir Oren, Bruno Yun | Abstract Weighted Based Gradual Semantics in Argumentation Theory | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Weighted gradual semantics provide an acceptability degree to each argument
representing the strength of the argument, computed based on factors including
background evidence for the argument, and taking into account interactions
between this argument and others. We introduce four important problems linking
gradual semantics and acceptability degrees. First, we reexamine the inverse
problem, seeking to identify the argument weights of the argumentation
framework which lead to a specific final acceptability degree. Second, we ask
whether the function mapping between argument weights and acceptability degrees
is injective or a homeomorphism onto its image. Third, we ask whether argument
weights can be found when preferences, rather than acceptability degrees for
arguments are considered. Fourth, we consider the topology of the space of
valid acceptability degrees, asking whether "gaps" exist in this space. While
different gradual semantics have been proposed in the literature, in this
paper, we identify a large family of weighted gradual semantics, called
abstract weighted based gradual semantics. These generalise many of the
existing semantics while maintaining desirable properties such as convergence
to a unique fixed point. We also show that a sub-family of the weighted gradual
semantics, called abstract weighted (L^p,\lambda,\mu)-based gradual semantics
and which include well-known semantics, solve all four of the aforementioned
problems.
| [
{
"version": "v1",
"created": "Sun, 21 Jan 2024 12:22:48 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 16:16:50 GMT"
}
] | 1,717,113,600,000 | [
[
"Libman",
"Assaf",
""
],
[
"Oren",
"Nir",
""
],
[
"Yun",
"Bruno",
""
]
] |
2401.11553 | Sascha Ossowski | Holger Billhardt, Alberto Fern\'andez, Sascha Ossowski, Javier
Palanca, Javier Bajo | Taxi dispatching strategies with compensations | null | Expert Systems with Applications, Volume 122 (2019) | 10.1016/j.eswa.2019.01.001 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Urban mobility efficiency is of utmost importance in big cities. Taxi
vehicles are key elements in daily traffic activity. The advance of ICT and
geo-positioning systems has given rise to new opportunities for improving the
efficiency of taxi fleets in terms of waiting times of passengers, cost and
time for drivers, traffic density, CO2 emissions, etc., by using more informed,
intelligent dispatching. Still, the explicit spatial and temporal components,
as well as the scale and, in particular, the dynamicity of the problem of
pairing passengers and taxis in big towns, render traditional approaches for
solving the standard assignment problem useless for this purpose, and call for
intelligent approximation strategies based on domain-specific heuristics.
Furthermore, taxi drivers are often autonomous actors and may not agree to
participate in assignments that, though globally efficient, may not be
sufficiently beneficial for them individually. This paper presents a new
heuristic algorithm for taxi assignment to customers that considers taxi
reassignments if this may lead to globally better solutions. In addition, as
such new assignments may reduce the expected revenues of individual drivers, we
propose an economic compensation scheme to make individually rational drivers
agree to proposed modifications in their assigned clients. We carried out a set
of experiments, where several commonly used assignment strategies are compared
to three different instantiations of our heuristic algorithm. The results
indicate that our proposal has the potential to reduce customer waiting times
in fleets of autonomous taxis, while being also beneficial from an economic
point of view.
| [
{
"version": "v1",
"created": "Sun, 21 Jan 2024 17:54:46 GMT"
}
] | 1,705,968,000,000 | [
[
"Billhardt",
"Holger",
""
],
[
"Fernández",
"Alberto",
""
],
[
"Ossowski",
"Sascha",
""
],
[
"Palanca",
"Javier",
""
],
[
"Bajo",
"Javier",
""
]
] |
2401.11848 | Idoia Berges | V\'ictor Julio Ram\'irez-Dur\'an, Idoia Berges, Arantza Illarramendi | ExtruOnt: An ontology for describing a type of manufacturing machine for
Industry 4.0 systems | This is the accepted manuscript. The definitive, peer reviewed and
edited version of this article is published in Semantic Web 11(6): 887-909
(2020) https://doi.org/10.3233/sw-200376 | Semantic Web 11(6): 887-909 (2020) | 10.3233/sw-200376 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantically rich descriptions of manufacturing machines, offered in a
machine-interpretable code, can provide interesting benefits in Industry 4.0
scenarios. However, the lack of that type of descriptions is evident. In this
paper we present the development effort made to build an ontology, called
ExtruOnt, for describing a type of manufacturing machine, more precisely, a
type that performs an extrusion process (extruder). Although the scope of the
ontology is restricted to a concrete domain, it could be used as a model for
the development of other ontologies for describing manufacturing machines in
Industry 4.0 scenarios. The terms of the ExtruOnt ontology provide different
types of information related with an extruder, which are reflected in distinct
modules that constitute the ontology. Thus, it contains classes and properties
for expressing descriptions about components of an extruder, spatial
connections, features, and 3D representations of those components, and finally
the sensors used to capture indicators about the performance of this type of
machine. The ontology development process has been carried out in close
collaboration with domain experts.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2024 11:05:54 GMT"
}
] | 1,705,968,000,000 | [
[
"Ramírez-Durán",
"Víctor Julio",
""
],
[
"Berges",
"Idoia",
""
],
[
"Illarramendi",
"Arantza",
""
]
] |
2401.11865 | Idoia Berges | Idoia Berges, Jes\'us Berm\'udez, Arantza Illarramendi | Toward Semantic Interoperability of Electronic Health Records | This is the Accepted Manuscript. The definitive, peer reviewed and
edited version of this article is: Idoia Berges, Jes\'us Berm\'udez, Arantza
Illarramendi: Toward Semantic Interoperability of Electronic Health Records.
IEEE Trans. Inf. Technol. Biomed. 16(3): 424-431 (2012).
DOI:10.1109/TITB.2011.2180917. Copyright 2011 IEEE | IEEE Trans. Inf. Technol. Biomed. 16(3): 424-431 (2012) | 10.1109/TITB.2011.2180917 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the goal of achieving semantic interoperability of electronic health
records (EHRs) is pursued by many researchers, it has not been accomplished
yet. In this paper, we present a proposal that smoothes out the way toward the
achievement of that goal. In particular, our study focuses on medical diagnoses
statements. In summary, the main contributions of our ontology-based proposal
are the following: first, it includes a canonical ontology whose EHR-related
terms focus on semantic aspects. As a result, their descriptions are
independent of languages and technology aspects used in different organizations
to represent EHRs. Moreover, those terms are related to their corresponding
codes in well-known medical terminologies. Second, it deals with modules that
allow obtaining rich ontological representations of EHR information managed by
proprietary models of health information systems. The features of one specific
module are shown as reference. Third, it considers the necessary mapping axioms
between ontological terms enhanced with so-called path mappings. This feature
smoothes out structural differences between heterogeneous EHR representations,
allowing proper alignment of information.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2024 11:39:55 GMT"
}
] | 1,705,968,000,000 | [
[
"Berges",
"Idoia",
""
],
[
"Bermúdez",
"Jesús",
""
],
[
"Illarramendi",
"Arantza",
""
]
] |
2401.11903 | EPTCS | Milan Bankovi\'c (Faculty of Mathematics, University of Belgrade,
Serbia) | Automation of Triangle Ruler-and-Compass Constructions Using Constraint
Solvers | In Proceedings ADG 2023, arXiv:2401.10725 | EPTCS 398, 2024, pp. 62-72 | 10.4204/EPTCS.398.10 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present an approach to automated solving of triangle
ruler-and-compass construction problems using finite-domain constraint solvers.
The constraint model is described in the MiniZinc modeling language, and is
based on the automated planning. The main benefit of using general constraint
solvers for such purpose, instead of developing dedicated tools, is that we can
rely on the efficient search that is already implemented within the solver,
enabling us to focus on geometric aspects of the problem. We may also use the
solver's built-in optimization capabilities to search for the shortest possible
constructions. We evaluate our approach on 74 solvable problems from
Wernick's list, and compare it to the dedicated triangle construction solver
ArgoTriCS. The results show that our approach is comparable to dedicated tools,
while it requires much less effort to implement. Also, our model often finds
shorter constructions, thanks to the optimization capabilities offered by the
constraint solvers.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2024 12:50:46 GMT"
}
] | 1,705,968,000,000 | [
[
"Banković",
"Milan",
"",
"Faculty of Mathematics, University of Belgrade,\n Serbia"
]
] |
2401.12247 | Alex Zarifis | Xusen Cheng, Ying Bao, Alex Zarifis, Wankun Gong and Jian Mou | Exploring consumers response to text-based chatbots in e-commerce: The
moderating role of task complexity and chatbot disclosure | Internet Research (2021) | null | 10.1108/INTR-08-2020-0460 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence based chatbots have brought unprecedented business
potential. This study aims to explore consumers' trust in and response to a
text-based chatbot in e-commerce, involving the moderating effects of task
complexity and chatbot identity disclosure. A survey method with 299 usable
responses was conducted in this research. This study adopted ordinary least
squares regression to test the hypotheses. First, the consumers' perception of
both the empathy and friendliness of the chatbot positively impacts their trust
in it. Second, task complexity negatively moderates the relationship between
friendliness and consumers' trust. Third, disclosure of the text-based chatbot
negatively moderates the relationship between empathy and consumers' trust,
while it positively moderates the relationship between friendliness and
consumers' trust. Fourth, consumers' trust in the chatbot increases their
reliance on the chatbot and decreases their resistance to the chatbot in future
interactions. Adopting the stimulus-organism-response framework, this study
provides important insights on consumers' perception of and response to the
text-based chatbot. The findings of this research also make suggestions that
can increase consumers' positive responses to text-based chatbots. Extant
studies have investigated the effects of automated bots' attributes on
consumers' perceptions. However, the boundary conditions of these effects are
largely ignored. This research is one of the first attempts to provide a deep
understanding of consumers' responses to a chatbot.
| [
{
"version": "v1",
"created": "Sat, 20 Jan 2024 15:17:50 GMT"
}
] | 1,706,054,400,000 | [
[
"Cheng",
"Xusen",
""
],
[
"Bao",
"Ying",
""
],
[
"Zarifis",
"Alex",
""
],
[
"Gong",
"Wankun",
""
],
[
"Mou",
"Jian",
""
]
] |
2401.12322 | Sascha Ossowski | Holger Billhardt, Alberto Fern\'andez, Sascha Ossowski | Smart Recommendations for Renting Bikes in Bike Sharing Systems | null | Applied Sciences, Volume 11, Issue 20 (2021) | 10.3390/app11209654 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Vehicle-sharing systems -- such as bike-, car-, or motorcycle-sharing systems
-- have become increasingly popular in big cities in recent years. On the one
hand, they provide a cheaper and environmentally friendlier means of
transportation than private cars, and on the other hand, they satisfy the
individual mobility demands of citizens better than traditional public
transport systems. One of their advantages in this regard is their
availability, e.g., the possibility of taking (or leaving) a vehicle almost
anywhere in a city. This availability obviously depends on different strategic
and operational management decisions and policies, such as the size of the
fleet or the (re)distribution of vehicles. Agglutination problems -- where, due
to usage patterns, available vehicles are concentrated in certain areas,
whereas no vehicles are available in others -- are quite common in such
systems, and need to be dealt with. Research has been dedicated to this
problem, proposing different techniques to reduce such imbalances. In
this paper, we present and compare strategies for recommending stations to
users who wish to rent or return bikes in station-based bike-sharing systems.
Our first contribution is a novel recommendation strategy based on queuing
theory that recommends stations based on their utility to the user in terms of
lower distance and higher probability of finding a bike or slot. Then, we go
one step further, defining a strategy that recommends stations by combining the
utility of a particular user with the utility of the global system, measured in
terms of the improvement in the distribution of bikes and slots with respect to
the expected future demand, with the aim of implicitly avoiding or alleviating
balancing problems. We present several experiments to evaluate our proposal
with real data from the bike-sharing system BiciMAD in Madrid.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2024 19:29:33 GMT"
}
] | 1,706,054,400,000 | [
[
"Billhardt",
"Holger",
""
],
[
"Fernández",
"Alberto",
""
],
[
"Ossowski",
"Sascha",
""
]
] |
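
The queuing-based utility described above can be sketched roughly as follows: model the number of bikes at a station as a birth-death chain (returns increase it, rentals decrease it, bounded by the station capacity), compute the stationary probability of finding a bike, and trade that off against walking distance. This is an illustrative approximation, not the paper's exact utility function; the rates, capacities, distances, and the penalty weight are invented.

```python
import math

def p_bike_available(return_rate, rental_rate, capacity):
    """Stationary probability that a station holds at least one bike,
    modelling the bike count as a birth-death chain bounded by `capacity`."""
    rho = return_rate / rental_rate
    if math.isclose(rho, 1.0):
        pi0 = 1.0 / (capacity + 1)                 # uniform stationary distribution
    else:
        pi0 = (1 - rho) / (1 - rho ** (capacity + 1))
    return 1.0 - pi0

def recommend(stations, walk_penalty=0.05):
    """Rank stations by bike availability discounted by walking distance (metres)."""
    def utility(s):
        p = p_bike_available(s["returns_per_h"], s["rentals_per_h"], s["capacity"])
        return p - walk_penalty * (s["distance_m"] / 100.0)
    return sorted(stations, key=utility, reverse=True)

stations = [
    {"name": "A", "returns_per_h": 6, "rentals_per_h": 8, "capacity": 20, "distance_m": 150},
    {"name": "B", "returns_per_h": 9, "rentals_per_h": 7, "capacity": 15, "distance_m": 450},
]
for s in recommend(stations):
    p = p_bike_available(s["returns_per_h"], s["rentals_per_h"], s["capacity"])
    print(s["name"], round(p, 3))
```

A system-aware variant, in the spirit of the second strategy, would add a term rewarding stations whose fill level moves closer to the level suggested by expected future demand.
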
2401.12324 | Sascha Ossowski | Holger Billhardt, Jos\'e-Antonio Santos, Alberto Fern\'andez, Mar
Moreno, Sascha Ossowski, Jos\'e A. Rodr\'iguez | Streamlining Advanced Taxi Assignment Strategies based on Legal Analysis | null | Neurocomputing, Volume 438 (2022) | 10.1016/j.neucom.2021.10.085 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years many novel applications have appeared that promote the
provision of services and activities in a collaborative manner. The key idea
behind such systems is to take advantage of idle or underused capacities of
existing resources, in order to provide improved services that assist people in
their daily tasks, with additional functionality, enhanced efficiency, and/or
reduced cost. Particularly in the domain of urban transportation, many
researchers have put forward novel ideas, which are then implemented and
evaluated through prototypes that usually draw upon AI methods and tools.
However, such proposals also bring up multiple non-technical issues that need
to be identified and addressed adequately if such systems are ever meant to be
applied to the real world. While, in practice, legal and ethical aspects
related to such AI-based systems are seldom considered at the beginning of
the research and development process, we argue that they not only restrict
design decisions, but can also help guide them. In this manuscript, we set
out from a prototype of a taxi coordination service that mediates between
individual (and autonomous) taxis and potential customers. After representing
key aspects of its operation in a semi-structured manner, we analyse its
viability from the viewpoint of current legal restrictions and constraints, so
as to identify additional non-functional requirements as well as options to
address them. Then, we go one step further and modify the existing prototype
to incorporate the previously identified recommendations. Performing
experiments with this improved system helps us identify the most suitable
option among several legally admissible alternatives.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2024 19:35:28 GMT"
}
] | 1,706,054,400,000 | [
[
"Billhardt",
"Holger",
""
],
[
"Santos",
"José-Antonio",
""
],
[
"Fernández",
"Alberto",
""
],
[
"Moreno",
"Mar",
""
],
[
"Ossowski",
"Sascha",
""
],
[
"Rodríguez",
"José A.",
""
]
] |
2401.12459 | Zhaoyue Wang | Zhaoyue Wang | Towards Socially and Morally Aware RL agent: Reward Design With LLM | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | When we design and deploy a Reinforcement Learning (RL) agent, the reward
function motivates the agent to achieve an objective. An incorrect or
incomplete specification of the objective can result in behavior that does not
align with human values - failing to adhere to social and moral norms that are
ambiguous and context-dependent - and can cause undesired outcomes such as
negative side effects and unsafe exploration. Previous work has manually
defined reward functions to avoid negative side effects, used human oversight
for safe exploration, or used foundation models as planning tools. This work
studies the ability to leverage Large Language Models' (LLMs) understanding of
morality and social norms in safe-exploration-augmented RL methods. It
evaluates the language model's results against human feedback and demonstrates
the language model's capability to serve as a direct reward signal.
| [
{
"version": "v1",
"created": "Tue, 23 Jan 2024 03:00:03 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 20:40:30 GMT"
}
] | 1,717,372,800,000 | [
[
"Wang",
"Zhaoyue",
""
]
] |
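
A hedged sketch of the general recipe, reward shaping with an LLM-based norm score, is given below. The `moral_score` function is a stand-in for a real chat-completion call and is not the paper's actual prompt, model, or weighting; it returns a constant so the sketch runs offline.

```python
def moral_score(state_description: str, action_description: str) -> float:
    """Placeholder: ask an LLM to rate the social/moral acceptability of taking
    `action_description` in `state_description`, mapped to [0, 1]. Returns a
    constant here so the sketch runs without any API access."""
    return 0.5

def shaped_reward(env_reward: float, state_desc: str, action_desc: str,
                  beta: float = 0.1) -> float:
    """Combine the task reward with the LLM-based norm signal (centred at 0)."""
    return env_reward + beta * (moral_score(state_desc, action_desc) - 0.5)

# usage inside a standard RL loop (environment-specific code omitted):
# r = shaped_reward(r_env, describe(obs), describe(action))
```
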