id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.16180 | Alban Grastien | Alban Grastien and Patrik Haslum and Sylvie Thi\'ebaux | A More General Theory of Diagnosis from First Principles | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Model-based diagnosis has been an active research topic in different
communities including artificial intelligence, formal methods, and control.
This has led to a set of disparate approaches addressing different classes of
systems and seeking different forms of diagnoses. In this paper, we resolve
such disparities by generalising Reiter's theory to be agnostic to the types of
systems and diagnoses considered. This more general theory of diagnosis from
first principles defines the minimal diagnosis as the set of preferred
diagnosis candidates in a search space of hypotheses. Computing the minimal
diagnosis is achieved by exploring the space of diagnosis hypotheses, testing
sets of hypotheses for consistency with the system's model and the observation,
and generating conflicts that rule out successors and other portions of the
search space. Under relatively mild assumptions, our algorithms correctly
compute the set of preferred diagnosis candidates. The main difficulty here is
that the search space is no longer a powerset as in Reiter's theory, and that,
as a consequence, many of the implicit properties (such as finiteness of the
search space) no longer hold. The notion of conflict also needs to be
generalised, and we present such a generalisation. We provide two
implementations of these algorithms, using test solvers based on satisfiability
and heuristic search, respectively, which we evaluate on instances from two
real world discrete event problems. Despite the greater generality of our
theory, these implementations surpass the special purpose algorithms designed
for discrete event systems, and enable solving instances that were out of reach
of existing diagnosis approaches.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 05:47:52 GMT"
}
] | 1,695,945,600,000 | [
[
"Grastien",
"Alban",
""
],
[
"Haslum",
"Patrik",
""
],
[
"Thiébaux",
"Sylvie",
""
]
] |
2309.16344 | Andrea Formisano | Stefania Costantini, Andrea Formisano | Epistemic Logic Programs: a study of some properties | Under consideration in Theory and Practice of Logic Programming
(TPLP) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Epistemic Logic Programs (ELPs) extend Answer Set Programming (ASP) with
epistemic operators. The semantics of such programs is provided in terms of
world views, which are sets of belief sets, i.e., syntactically, sets of sets
of atoms. Different semantic approaches propose different characterizations of
world views. Recent work has introduced semantic properties that should be met
by any semantics for ELPs, like the Epistemic Splitting Property, that, if
satisfied, allows world views to be computed modularly in a bottom-up fashion,
analogously to ``traditional'' ASP. We analyze the possibility of changing the
perspective, shifting from a bottom-up to a top-down approach to splitting. We
propose a basic top-down approach, which we prove to be equivalent to the
bottom-up one. We then propose an extended approach, where our new definition:
(i) is provably applicable to many of the existing semantics; (ii) operates
similarly to ``traditional'' ASP; (iii) provably coincides under any semantics
with the bottom-up notion of splitting at least on the class of Epistemically
Stratified Programs (which are, intuitively, those where the use of epistemic
operators is stratified); (iv) better adheres to common ASP programming
methodology.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 11:08:37 GMT"
}
] | 1,695,945,600,000 | [
[
"Costantini",
"Stefania",
""
],
[
"Formisano",
"Andrea",
""
]
] |
2309.16960 | Mikihisa Yuasa | Mikihisa Yuasa, Huy T. Tran, Ramavarapu S. Sreenivas | On Generating Explanations for Reinforcement Learning Policies: An
Empirical Study | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding a \textit{reinforcement learning} policy, which guides
state-to-action mappings to maximize rewards, necessitates an accompanying
explanation for human comprehension. In this paper, we introduce a set of
\textit{linear temporal logic} (LTL) formulae designed to provide explanations
for policies, and an algorithm for searching through those formulae for the one
that best explains a given policy. Our focus is on crafting explanations that
elucidate both the ultimate objectives accomplished by the policy and the
prerequisite conditions it upholds throughout its execution. These LTL-based
explanations feature a structured representation, which is particularly
well-suited for local-search techniques. The effectiveness of our proposed
approach is illustrated through a simulated game of capture the flag and a
car-parking environment. The paper concludes with suggested directions for
future research.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 03:57:39 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2024 02:02:29 GMT"
}
] | 1,709,769,600,000 | [
[
"Yuasa",
"Mikihisa",
""
],
[
"Tran",
"Huy T.",
""
],
[
"Sreenivas",
"Ramavarapu S.",
""
]
] |
2309.17057 | James Hinns | David Martens, Camille Dams, James Hinns, and Mark Vergouwen | Tell Me a Story! Narrative-Driven XAI with Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In today's critical domains, the predominance of black-box machine learning
models amplifies the demand for Explainable AI (XAI). The widely used SHAP
values, while quantifying feature importance, are often too intricate and lack
human-friendly explanations. Furthermore, counterfactual (CF) explanations
present `what ifs' but leave users grappling with the `why'. To bridge this
gap, we introduce XAIstories. Leveraging Large Language Models, XAIstories
provide narratives that shed light on AI predictions: SHAPstories do so based
on SHAP explanations to explain a prediction score, while CFstories do so for
CF explanations to explain a decision. Our results are striking: over 90% of
the surveyed general audience finds the narrative generated by SHAPstories
convincing. Data scientists primarily see the value of SHAPstories in
communicating explanations to a general audience, with 92% of data scientists
indicating that it will contribute to the ease and confidence of nonspecialists
in understanding AI predictions. Additionally, 83% of data scientists indicate
they are likely to use SHAPstories for this purpose. In image classification,
CFstories are considered more convincing than, or equally convincing as, users'
own crafted stories by over 75% of lay user participants. CFstories also bring a
tenfold speed gain in creating a narrative and improve accuracy by over 20% compared
to manually created narratives. The results thereby suggest that XAIstories may
provide the missing link in truly explaining and understanding AI predictions.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 08:40:08 GMT"
}
] | 1,696,204,800,000 | [
[
"Martens",
"David",
""
],
[
"Dams",
"Camille",
""
],
[
"Hinns",
"James",
""
],
[
"Vergouwen",
"Mark",
""
]
] |
2309.17252 | Adrian Groza | Marco Pop-Mihali and Adrian Groza | Forest Mixing: investigating the impact of multiple search trees and a
shared refinements pool on ontology learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We aim at developing white-box machine learning algorithms. We focus here on
algorithms for learning axioms in description logic. We extend the Class
Expression Learning for Ontology Engineering (CELOE) algorithm contained in the
DL-Learner tool. The approach uses multiple search trees and a shared pool of
refinements in order to split the search space into smaller subspaces. We
introduce the conjunction operation of best class expressions from each tree,
keeping the results which give the most information. The aim is to foster
exploration from a diverse set of starting classes and to streamline the
process of finding class expressions in ontologies, particularly in large
search spaces. The current implementation and settings indicated that the
Forest Mixing approach did not outperform the traditional CELOE. Despite these
results, the conceptual proposal brought forward by this approach may stimulate
future improvements in class expression finding in ontologies, and influence
the way we traverse search spaces in general.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 14:02:34 GMT"
}
] | 1,696,204,800,000 | [
[
"Pop-Mihali",
"Marco",
""
],
[
"Groza",
"Adrian",
""
]
] |
2309.17277 | JiaXian Guo | Jiaxian Guo, Bo Yang, Paul Yoo, Bill Yuchen Lin, Yusuke Iwasawa,
Yutaka Matsuo | Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind
Aware GPT-4 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Unlike perfect information games, where all elements are known to every
player, imperfect information games emulate the real-world complexities of
decision-making under uncertain or incomplete information. GPT-4, the recent
breakthrough in large language models (LLMs) trained on massive passive data,
is notable for its knowledge retrieval and reasoning abilities. This paper
delves into the applicability of GPT-4's learned knowledge for imperfect
information games. To achieve this, we introduce \textbf{Suspicion-Agent}, an
innovative agent that leverages GPT-4's capabilities for performing in
imperfect information games. With proper prompt engineering to achieve
different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable
adaptability across a range of imperfect information card games. Importantly,
GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it
can understand others and intentionally impact others' behavior. Leveraging
this, we design a planning strategy that enables GPT-4 to competently play
against different opponents, adapting its gameplay style as needed, while
requiring only the game rules and descriptions of observations as input. In the
experiments, we qualitatively showcase the capabilities of Suspicion-Agent
across three different imperfect information games and then quantitatively
evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can
potentially outperform traditional algorithms designed for imperfect
information games, without any specialized training or examples. In order to
encourage and foster deeper insights within the community, we make our
game-related data publicly available.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 14:30:03 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Oct 2023 04:03:55 GMT"
}
] | 1,696,809,600,000 | [
[
"Guo",
"Jiaxian",
""
],
[
"Yang",
"Bo",
""
],
[
"Yoo",
"Paul",
""
],
[
"Lin",
"Bill Yuchen",
""
],
[
"Iwasawa",
"Yusuke",
""
],
[
"Matsuo",
"Yutaka",
""
]
] |
2309.17288 | Guangyao Chen | Guangyao Chen, Siwei Dong, Yu Shu, Ge Zhang, Jaward Sesay, B\"orje F.
Karlsson, Jie Fu, Yemin Shi | AutoAgents: A Framework for Automatic Agent Generation | IJCAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have enabled remarkable advances in automated
task-solving with multi-agent systems. However, most existing LLM-based
multi-agent approaches rely on predefined agents to handle simple tasks,
limiting the adaptability of multi-agent collaboration to different scenarios.
Therefore, we introduce AutoAgents, an innovative framework that adaptively
generates and coordinates multiple specialized agents to build an AI team
according to different tasks. Specifically, AutoAgents couples the relationship
between tasks and roles by dynamically generating multiple required agents
based on task content and planning solutions for the current task based on the
generated expert agents. Multiple specialized agents collaborate with each
other to efficiently accomplish tasks. Concurrently, an observer role is
incorporated into the framework to reflect on the designated plans and agents'
responses and improve upon them. Our experiments on various benchmarks
demonstrate that AutoAgents generates more coherent and accurate solutions than
the existing multi-agent methods. This underscores the significance of
assigning different roles to different tasks and of team cooperation, offering
new perspectives for tackling complex tasks. The repository of this project is
available at https://github.com/Link-AGI/AutoAgents.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 14:46:30 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Oct 2023 13:36:06 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Apr 2024 18:38:26 GMT"
}
] | 1,714,521,600,000 | [
[
"Chen",
"Guangyao",
""
],
[
"Dong",
"Siwei",
""
],
[
"Shu",
"Yu",
""
],
[
"Zhang",
"Ge",
""
],
[
"Sesay",
"Jaward",
""
],
[
"Karlsson",
"Börje F.",
""
],
[
"Fu",
"Jie",
""
],
[
"Shi",
"Yemin",
""
]
] |
2309.17319 | Jinmeng Rao | Jinmeng Rao, Song Gao, Gengchen Mai, Krzysztof Janowicz | Building Privacy-Preserving and Secure Geospatial Artificial
Intelligence Foundation Models | 1 figure | ACM SIGSPATIAL 2023 | 10.1145/3589132.3625611 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years we have seen substantial advances in foundation models for
artificial intelligence, including language, vision, and multimodal models.
Recent studies have highlighted the potential of using foundation models in
geospatial artificial intelligence, known as GeoAI Foundation Models, for
geographic question answering, remote sensing image understanding, map
generation, and location-based services, among others. However, the development
and application of GeoAI foundation models can pose serious privacy and
security risks, which have not been fully discussed or addressed to date. This
paper introduces the potential privacy and security risks throughout the
lifecycle of GeoAI foundation models and proposes a comprehensive blueprint for
research directions and preventative and control strategies. Through this
vision paper, we hope to draw the attention of researchers and policymakers in
geospatial domains to these privacy and security risks inherent in GeoAI
foundation models and advocate for the development of privacy-preserving and
secure GeoAI foundation models.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 15:25:31 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2023 08:40:07 GMT"
}
] | 1,697,155,200,000 | [
[
"Rao",
"Jinmeng",
""
],
[
"Gao",
"Song",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Janowicz",
"Krzysztof",
""
]
] |
2310.00013 | Senkang Hu | Senkang Hu, Zhengru Fang, Haonan An, Guowen Xu, Yuan Zhou, Xianhao
Chen, Yuguang Fang | Adaptive Communications in Collaborative Perception with Domain
Alignment for Autonomous Driving | 6 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception among multiple connected and autonomous vehicles can
greatly enhance perceptive capabilities by allowing vehicles to exchange
supplementary information via communications. Despite advances in previous
approaches, challenges still remain due to channel variations and data
heterogeneity among collaborative vehicles. To address these issues, we propose
ACC-DA, a channel-aware collaborative perception framework to dynamically
adjust the communication graph and minimize the average transmission delay
while mitigating the side effects from the data heterogeneity. Our novelties
lie in three aspects. We first design a transmission delay minimization method,
which can construct the communication graph and minimize the transmission delay
according to different channel information states. We then propose an adaptive
data reconstruction mechanism, which can dynamically adjust the rate-distortion
trade-off to enhance perception efficiency. Moreover, it minimizes the temporal
redundancy during data transmissions. Finally, we conceive a domain alignment
scheme to align the data distribution from different vehicles, which can
mitigate the domain gap between different vehicles and improve the performance
of the target task. Comprehensive experiments demonstrate the effectiveness of
our method in comparison to the existing state-of-the-art works.
| [
{
"version": "v1",
"created": "Fri, 15 Sep 2023 03:53:35 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Oct 2023 12:51:49 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Mar 2024 15:20:43 GMT"
}
] | 1,710,806,400,000 | [
[
"Hu",
"Senkang",
""
],
[
"Fang",
"Zhengru",
""
],
[
"An",
"Haonan",
""
],
[
"Xu",
"Guowen",
""
],
[
"Zhou",
"Yuan",
""
],
[
"Chen",
"Xianhao",
""
],
[
"Fang",
"Yuguang",
""
]
] |
2310.00656 | Huajian Xin | Haiming Wang, Huajian Xin, Chuanyang Zheng, Lin Li, Zhengying Liu,
Qingxing Cao, Yinya Huang, Jing Xiong, Han Shi, Enze Xie, Jian Yin, Zhenguo
Li, Heng Liao, Xiaodan Liang | LEGO-Prover: Neural Theorem Proving with Growing Libraries | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite the success of large language models (LLMs), the task of theorem
proving still remains one of the hardest reasoning tasks that is far from being
fully solved. Prior methods using language models have demonstrated promising
results, but they still struggle to prove even middle school level theorems.
One common limitation of these methods is that they assume a fixed theorem
library during the whole theorem proving process. However, creating new useful
theorems or even new theories is not only helpful but crucial for advancing
mathematics and proving harder and deeper
results. In this work, we present LEGO-Prover, which employs a growing skill
library containing verified lemmas as skills to augment the capability of LLMs
used in theorem proving. By constructing the proof modularly, LEGO-Prover
enables LLMs to utilize existing skills retrieved from the library and to
create new skills during the proving process. These skills are further evolved
(by prompting an LLM) to enrich the library on another scale. Modular and
reusable skills are constantly added to the library to enable tackling
increasingly intricate mathematical problems. Moreover, the learned library
further bridges the gap between human proofs and formal proofs by making it
easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass
rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%).
During the proving process, LEGO-Prover also manages to generate over 20,000
skills (theorems/lemmas) and adds them to the growing library. Our ablation
study indicates that these newly added skills are indeed helpful for proving
theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We
also release our code and all the generated skills.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 12:47:59 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2023 03:01:27 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Oct 2023 12:44:32 GMT"
}
] | 1,698,624,000,000 | [
[
"Wang",
"Haiming",
""
],
[
"Xin",
"Huajian",
""
],
[
"Zheng",
"Chuanyang",
""
],
[
"Li",
"Lin",
""
],
[
"Liu",
"Zhengying",
""
],
[
"Cao",
"Qingxing",
""
],
[
"Huang",
"Yinya",
""
],
[
"Xiong",
"Jing",
""
],
[
"Shi",
"Han",
""
],
[
"Xie",
"Enze",
""
],
[
"Yin",
"Jian",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Liao",
"Heng",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
2310.00804 | Yuriy Marykovskiy | Yuriy Marykovskiy, Thomas Clark, Justin Day, Marcus Wiens, Charles
Henderson, Julian Quick, Imad Abdallah, Anna Maria Sempreviva, Jean-Paul
Calbimonte, Eleni Chatzi and Sarah Barber | Knowledge Engineering for Wind Energy | null | Wind Energ. Sci. 9 (2024) 883-917 | 10.5194/wes-9-883-2024 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the rapid evolution of the wind energy sector, there is an
ever-increasing need to create value from the vast amounts of data made
available both from within the domain, as well as from other sectors. This
article addresses the challenges faced by wind energy domain experts in
converting data into domain knowledge, connecting and integrating it with other
sources of knowledge, and making it available for use in next generation
artificially intelligent systems. To this end, this article highlights the role
that knowledge engineering can play in the process of digital transformation of
the wind energy sector. It presents the main concepts underpinning
Knowledge-Based Systems and summarises previous work in the areas of knowledge
engineering and knowledge representation in a manner that is relevant and
accessible to domain experts. A systematic analysis of the current
state-of-the-art on knowledge engineering in the wind energy domain is
performed, with available tools put into perspective by establishing the main
domain actors and their needs and identifying key problematic areas. Finally,
guidelines for further development and improvement are provided.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 22:06:10 GMT"
}
] | 1,713,225,600,000 | [
[
"Marykovskiy",
"Yuriy",
""
],
[
"Clark",
"Thomas",
""
],
[
"Day",
"Justin",
""
],
[
"Wiens",
"Marcus",
""
],
[
"Henderson",
"Charles",
""
],
[
"Quick",
"Julian",
""
],
[
"Abdallah",
"Imad",
""
],
[
"Sempreviva",
"Anna Maria",
""
],
[
"Calbimonte",
"Jean-Paul",
""
],
[
"Chatzi",
"Eleni",
""
],
[
"Barber",
"Sarah",
""
]
] |
2310.01011 | Sidney Bender | Sidney Bender, Christopher J. Anders, Pattarawatt Chormai, Heike
Marxfeld, Jan Herrmann, Gr\'egoire Montavon | Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge
Distillation | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | This paper introduces a novel technique called counterfactual knowledge
distillation (CFKD) to detect and remove reliance on confounders in deep
learning models with the help of human expert feedback. Confounders are
spurious features that models tend to rely on, which can result in unexpected
errors in regulated or safety-critical domains. The paper highlights the
benefit of CFKD in such domains and shows some advantages of counterfactual
explanations over other types of explanations. We propose an experiment scheme
to quantitatively evaluate the success of CFKD and different teachers that can
give feedback to the model. We also introduce a new metric that is better
correlated with true test performance than validation accuracy. The paper
demonstrates the effectiveness of CFKD on synthetically augmented datasets and
on real-world histopathological datasets.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 09:02:51 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 18:10:48 GMT"
}
] | 1,696,464,000,000 | [
[
"Bender",
"Sidney",
""
],
[
"Anders",
"Christopher J.",
""
],
[
"Chormai",
"Pattarawatt",
""
],
[
"Marxfeld",
"Heike",
""
],
[
"Herrmann",
"Jan",
""
],
[
"Montavon",
"Grégoire",
""
]
] |
2310.01065 | Luca Costabello | Vasileios Baltatzis, Luca Costabello | KGEx: Explaining Knowledge Graph Embeddings via Subgraph Sampling and
Knowledge Distillation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Although knowledge graph embeddings (KGE) are the go-to choice for link
prediction on knowledge graphs, their interpretability remains relatively
unexplored. We present KGEx, a novel post-hoc method that explains
individual link predictions by drawing inspiration from surrogate models
research. Given a target triple to predict, KGEx trains surrogate KGE models
that we use to identify important training triples. To gauge the impact of a
training triple, we sample random portions of the target triple neighborhood
and we train multiple surrogate KGE models on each of them. To ensure
faithfulness, each surrogate is trained by distilling knowledge from the
original KGE model. We then assess how well surrogates predict the target
triple being explained, the intuition being that those leading to faithful
predictions have been trained on impactful neighborhood samples. Under this
assumption, we then harvest triples that appear frequently across impactful
neighborhoods. We conduct extensive experiments on two publicly available
datasets, to demonstrate that KGEx is capable of providing explanations
faithful to the black-box model.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 10:20:24 GMT"
}
] | 1,696,291,200,000 | [
[
"Baltatzis",
"Vasileios",
""
],
[
"Costabello",
"Luca",
""
]
] |
2310.01378 | Joan Espasa Arxer | Miquel Bofill, Cristina Borralleras, Joan Espasa, and Mateu Villaret | On Grid Graph Reachability and Puzzle Games | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many puzzle video games, like Sokoban, involve moving some agent in a maze.
The reachable locations are usually apparent for a human player, and the
difficulty of the game is mainly related to performing actions on objects, such
as pushing (reachable) boxes. For this reason, the difficulty of a particular
level is often measured as the number of actions on objects, other than agent
walking, needed to find a solution. In this paper we study CP and SAT
approaches for solving these kinds of problems. We review some reachability
encodings and propose a new one. We empirically show that the new encoding is
well-suited for solving puzzle problems in the planning as SAT paradigm,
especially when considering the execution of several actions in parallel.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:41:35 GMT"
}
] | 1,696,291,200,000 | [
[
"Bofill",
"Miquel",
""
],
[
"Borralleras",
"Cristina",
""
],
[
"Espasa",
"Joan",
""
],
[
"Villaret",
"Mateu",
""
]
] |
2310.01470 | Joan Espasa Arxer | Joan Espasa, Ian Miguel, Peter Nightingale, Andr\'as Z. Salamon, Mateu
Villaret | Challenges in Modelling and Solving Plotting with PDDL | arXiv admin note: text overlap with arXiv:2110.14397 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study a planning problem based on Plotting, a tile-matching puzzle video
game published by Taito in 1989. The objective of this game is to remove a
target number of coloured blocks from a grid by sequentially shooting blocks
into the grid. Plotting features complex transitions after every shot: various
blocks are affected directly, while others can be indirectly affected by
gravity. We highlight the challenges of modelling Plotting with PDDL and of
solving it with a grounding-based state-of-the-art planner.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:46:44 GMT"
}
] | 1,696,377,600,000 | [
[
"Espasa",
"Joan",
""
],
[
"Miguel",
"Ian",
""
],
[
"Nightingale",
"Peter",
""
],
[
"Salamon",
"András Z.",
""
],
[
"Villaret",
"Mateu",
""
]
] |
2310.01471 | Joan Espasa Arxer | Miquel Bofill, Cristina Borralleras, Joan Espasa, Gerard Mart\'in,
Gustavo Patow, Mateu Villaret | A Good Snowman is Hard to Plan | arXiv admin note: text overlap with arXiv:2310.01378 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work we face a challenging puzzle video game: A Good Snowman is Hard
to Build. The objective of the game is to build snowmen by moving and stacking
snowballs on a discrete grid. For the sake of player engagement with the game,
it is desirable to prevent a player from finding a much easier solution than the
one the designer expected. Therefore, having tools that are able to certify the
optimality of solutions is crucial.
Although the game can be stated as a planning problem and can be naturally
modelled in PDDL, we show that a direct translation to SAT clearly outperforms
off-the-shelf state-of-the-art planners. As we show, this is mainly due to the
fact that reachability properties can be easily modelled in SAT, allowing for
shorter plans, whereas using axioms to express a reachability derived predicate
in PDDL does not result in any significant reduction of solving time with the
considered planners. We deal with a set of 51 levels, both original and
crafted, solving 43 and with 8 challenging instances still remaining to be
solved.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:50:31 GMT"
}
] | 1,696,377,600,000 | [
[
"Bofill",
"Miquel",
""
],
[
"Borralleras",
"Cristina",
""
],
[
"Espasa",
"Joan",
""
],
[
"Martín",
"Gerard",
""
],
[
"Patow",
"Gustavo",
""
],
[
"Villaret",
"Mateu",
""
]
] |
2310.01503 | Joan Espasa Arxer | Joan Espasa and Ian P. Gent and Ian Miguel and Peter Nightingale and
Andr\'as Z. Salamon and Mateu Villaret | Towards a Model of Puzznic | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We report on progress in modelling and solving Puzznic, a video game
requiring the player to plan sequences of moves to clear a grid by matching
blocks. We focus here on levels with no moving blocks. We compare a planning
approach and three constraint programming approaches on a small set of
benchmark instances. The planning approach is at present superior to the
constraint programming approaches, but we outline proposals for improving the
constraint models.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 18:00:59 GMT"
}
] | 1,696,377,600,000 | [
[
"Espasa",
"Joan",
""
],
[
"Gent",
"Ian P.",
""
],
[
"Miguel",
"Ian",
""
],
[
"Nightingale",
"Peter",
""
],
[
"Salamon",
"András Z.",
""
],
[
"Villaret",
"Mateu",
""
]
] |
2310.01505 | Joan Espasa Arxer | Sean Patterson and Joan Espasa and Mun See Chang and Ruth Hoffmann | Towards Automatic Design of Factorio Blueprints | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Factorio is a 2D construction and management simulation video game about
building automated factories to produce items of increasing complexity. A core
feature of the game is its blueprint system, which allows players to easily
save and replicate parts of their designs. Blueprints can reproduce any layout
of objects in the game, but are typically used to encapsulate a complex
behaviour, such as the production of a non-basic object. Once created, these
blueprints are then used as basic building blocks, allowing the player to
create a layer of abstraction. The usage of blueprints not only eases the
expansion of the factory but also allows the sharing of designs with the game's
community. The layout in a blueprint can be optimised using various criteria,
such as the total space used or the final production throughput. The design of
an optimal blueprint is a hard combinatorial problem, interleaving elements of
many well-studied problems such as bin-packing, routing or network design. This
work presents a new challenging problem and explores the feasibility of a
constraint model to optimise Factorio blueprints, balancing correctness,
optimality, and performance.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 18:01:43 GMT"
}
] | 1,696,377,600,000 | [
[
"Patterson",
"Sean",
""
],
[
"Espasa",
"Joan",
""
],
[
"Chang",
"Mun See",
""
],
[
"Hoffmann",
"Ruth",
""
]
] |
2310.01520 | Joan Espasa Arxer | Mustafa F. Abdelwahed, Joan Espasa, Alice Toniolo, Ian P. Gent | Bridging the Gap between Structural and Semantic Similarity in Diverse
Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diverse planning is the problem of finding multiple plans for a given problem
specification, which is at the core of many real-world applications. For
example, diverse planning is a critical piece for the efficiency of plan
recognition systems when dealing with noisy and missing observations. Providing
diverse solutions can also benefit situations where constraints are too
expensive or impossible to model. Current diverse planners operate by
generating multiple plans and then applying a selection procedure to extract
diverse solutions using a similarity metric. Generally, current similarity
metrics only consider the structural properties of the given plans. We argue
that this approach is a limitation that sometimes prevents such metrics from
capturing why two plans differ. In this work, we propose two new
domain-independent metrics which are able to capture relevant information on
the difference between two given plans from a domain-dependent viewpoint. We
showcase their utility in various situations where the currently used metrics
fail to capture the similarity between plans, for instance by overlooking some
structural symmetries.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 18:11:37 GMT"
}
] | 1,696,377,600,000 | [
[
"Abdelwahed",
"Mustafa F.",
""
],
[
"Espasa",
"Joan",
""
],
[
"Toniolo",
"Alice",
""
],
[
"Gent",
"Ian P.",
""
]
] |
2310.01536 | Esther Mondrag\'on | Alexander Dean, Eduardo Alonso and Esther Mondragon | Algebras of actions in an agent's representations of the world | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a framework to extract the algebra of the
transformations of worlds from the perspective of an agent. As a starting
point, we use our framework to reproduce the symmetry-based representations
from the symmetry-based disentangled representation learning (SBDRL) formalism
proposed by [1]; only the algebra of transformations of worlds that form groups
can be described using symmetry-based representations. We then study the
algebras of the transformations of worlds with features that occur in simple
reinforcement learning scenarios. Using computational methods that we
developed, we extract the algebras of the transformations of these worlds and
classify them according to their properties. Next, we generalise two
important results of SBDRL - the equivariance condition and the disentangling
definition - from only working with symmetry-based representations to working
with representations capturing the transformation properties of worlds with
transformations for any algebra. Finally, we combine our generalised
equivariance condition and our generalised disentangling definition to show
that disentangled sub-algebras can each have their own individual equivariance
conditions, which can be treated independently.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 18:24:51 GMT"
}
] | 1,696,377,600,000 | [
[
"Dean",
"Alexander",
""
],
[
"Alonso",
"Eduardo",
""
],
[
"Mondragon",
"Esther",
""
]
] |
2310.01805 | Hongyi Duan | Hongyi Duan and Qingyang Li and Yuchen Li and Jianan Zhang and Yuming
Xie | Comparative study of microgrid optimal scheduling under
multi-optimization algorithm fusion | 11 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As global attention on renewable and clean energy grows, the research and
implementation of microgrids become paramount. This paper delves into the
methodology of exploring the relationship between the operational and
environmental costs of microgrids through multi-objective optimization models.
By integrating various optimization algorithms like Genetic Algorithm,
Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization,
we propose an integrated approach for microgrid optimization. Simulation
results depict that these algorithms provide different dispatch results under
economic and environmental dispatch, revealing distinct roles of diesel
generators and micro gas turbines in microgrids. Overall, this study offers
in-depth insights and practical guidance for microgrid design and operation.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 05:35:42 GMT"
}
] | 1,696,377,600,000 | [
[
"Duan",
"Hongyi",
""
],
[
"Li",
"Qingyang",
""
],
[
"Li",
"Yuchen",
""
],
[
"Zhang",
"Jianan",
""
],
[
"Xie",
"Yuming",
""
]
] |
2310.02005 | Mohamed-Bachir Belaid | Mohamed-Bachir Belaid, Jivitesh Sharma, Lei Jiao, Ole-Christoffer
Granmo, Per-Arne Andersen, Anis Yazidi | Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic
Approach to Concept Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tsetlin Machines (TMs) have garnered increasing interest for their ability to
learn concepts via propositional formulas and their proven efficiency across
various application domains. Despite this, the convergence proof for the TMs,
particularly for the AND operator (\emph{conjunction} of literals), in the
generalized case (inputs greater than two bits) remains an open problem. This
paper aims to fill this gap by presenting a comprehensive convergence analysis
of Tsetlin automaton-based Machine Learning algorithms. We introduce a novel
framework, referred to as Probabilistic Concept Learning (PCL), which
simplifies the TM structure while incorporating dedicated feedback mechanisms
and dedicated inclusion/exclusion probabilities for literals. Given $n$
features, PCL aims to learn a set of conjunction clauses $C_i$ each associated
with a distinct inclusion probability $p_i$. Most importantly, we establish a
theoretical proof confirming that, for any clause $C_k$, PCL converges to a
conjunction of literals when $0.5<p_k<1$. This result serves as a stepping
stone for future research on the convergence properties of Tsetlin
automaton-based learning algorithms. Our findings not only contribute to the
theoretical understanding of Tsetlin Machines but also have implications for
their practical application, potentially leading to more robust and
interpretable machine learning models.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 12:21:41 GMT"
}
] | 1,696,377,600,000 | [
[
"Belaid",
"Mohamed-Bachir",
""
],
[
"Sharma",
"Jivitesh",
""
],
[
"Jiao",
"Lei",
""
],
[
"Granmo",
"Ole-Christoffer",
""
],
[
"Andersen",
"Per-Arne",
""
],
[
"Yazidi",
"Anis",
""
]
] |
2310.02019 | Pedram Salimi | Pedram Salimi, Nirmalie Wiratunga, David Corsar, Anjana Wijekoon | Towards Feasible Counterfactual Explanations: A Taxonomy Guided
Template-based NLG Method | null | Volume 372: ECAI 2023 | 10.3233/FAIA230499 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Counterfactual Explanations (cf-XAI) describe the smallest changes in feature
values necessary to change an outcome from one class to another. However, many
cf-XAI methods neglect the feasibility of those changes. In this paper, we
introduce a novel approach for presenting cf-XAI in natural language
(Natural-XAI), giving careful consideration to actionable and comprehensible
aspects while remaining cognizant of immutability and ethical concerns. We
present three contributions to this endeavor. Firstly, through a user study, we
identify two types of themes present in cf-XAI composed by humans:
content-related, focusing on how features and their values are included from
both the counterfactual and the query perspectives; and structure-related,
focusing on the structure and terminology used for describing necessary value
changes. Secondly, we introduce a feature actionability taxonomy with four
clearly defined categories, to streamline the explanation presentation process.
Using insights from the user study and our taxonomy, we created a generalisable
template-based natural language generation (NLG) method compatible with
existing explainers like DICE, NICE, and DisCERN, to produce counterfactuals
that address the aforementioned limitations of existing approaches. Finally, we
conducted a second user study to assess the performance of our taxonomy-guided
NLG templates on three domains. Our findings show that the taxonomy-guided
Natural-XAI approach (n-XAI^T) received higher user ratings across all
dimensions, with significantly improved results in the majority of the domains
assessed for articulation, acceptability, feasibility, and sensitivity
dimensions.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 12:48:57 GMT"
}
] | 1,696,377,600,000 | [
[
"Salimi",
"Pedram",
""
],
[
"Wiratunga",
"Nirmalie",
""
],
[
"Corsar",
"David",
""
],
[
"Wijekoon",
"Anjana",
""
]
] |
2310.02054 | Zibin Dong | Zibin Dong, Yifu Yuan, Jianye Hao, Fei Ni, Yao Mu, Yan Zheng, Yujing
Hu, Tangjie Lv, Changjie Fan and Zhipeng Hu | AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable
Diffusion Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Aligning agent behaviors with diverse human preferences remains a challenging
problem in reinforcement learning (RL), owing to the inherent abstractness and
mutability of human preferences. To address these issues, we propose AlignDiff,
a novel framework that leverages RL from Human Feedback (RLHF) to quantify
human preferences, covering abstractness, and utilizes them to guide diffusion
planning for zero-shot behavior customizing, covering mutability. AlignDiff can
accurately match user-customized behaviors and efficiently switch from one to
another. To build the framework, we first establish the multi-perspective human
feedback datasets, which contain comparisons for the attributes of diverse
behaviors, and then train an attribute strength model to predict quantified
relative strengths. After relabeling behavioral datasets with relative
strengths, we proceed to train an attribute-conditioned diffusion model, which
serves as a planner with the attribute strength model as a director for
preference aligning at the inference phase. We evaluate AlignDiff on various
locomotion tasks and demonstrate its superior performance on preference
matching, switching, and covering compared to other baselines. Its capability
of completing unseen downstream tasks under human instructions also showcases
the promising potential for human-AI collaboration. More visualization videos
are released on https://aligndiff.github.io/.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 13:53:08 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Feb 2024 10:48:30 GMT"
}
] | 1,707,177,600,000 | [
[
"Dong",
"Zibin",
""
],
[
"Yuan",
"Yifu",
""
],
[
"Hao",
"Jianye",
""
],
[
"Ni",
"Fei",
""
],
[
"Mu",
"Yao",
""
],
[
"Zheng",
"Yan",
""
],
[
"Hu",
"Yujing",
""
],
[
"Lv",
"Tangjie",
""
],
[
"Fan",
"Changjie",
""
],
[
"Hu",
"Zhipeng",
""
]
] |
2310.02167 | Carlos N\'u\~nez Molina | Carlos N\'u\~nez-Molina, Pablo Mesejo, Juan Fern\'andez-Olivares | Towards a Unified Framework for Sequential Decision Making | 10 pages, 0 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the integration of Automated Planning (AP) and Reinforcement
Learning (RL) has seen a surge of interest. To perform this integration, a
general framework for Sequential Decision Making (SDM) would prove immensely
useful, as it would help us understand how AP and RL fit together. In this
preliminary work, we attempt to provide such a framework, suitable for any
method ranging from Classical Planning to Deep RL, by drawing on concepts from
Probability Theory and Bayesian inference. We formulate an SDM task as a set of
training and test Markov Decision Processes (MDPs), to account for
generalization. We provide a general algorithm for SDM which we hypothesize
every SDM method is based on. According to it, every SDM algorithm can be seen
as a procedure that iteratively improves its solution estimate by leveraging
the task knowledge available. Finally, we derive a set of formulas and
algorithms for calculating interesting properties of SDM tasks and methods,
which make possible their empirical evaluation and comparison.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 16:01:06 GMT"
}
] | 1,696,377,600,000 | [
[
"Núñez-Molina",
"Carlos",
""
],
[
"Mesejo",
"Pablo",
""
],
[
"Fernández-Olivares",
"Juan",
""
]
] |
2310.02345 | EPTCS | Oded Blumenthal, Guy Shani | Rollout Heuristics for Online Stochastic Contingent Planning | In Proceedings AREA 2023, arXiv:2310.00333 | EPTCS 391, 2023, pp. 89-101 | 10.4204/EPTCS.391.11 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Partially observable Markov decision processes (POMDP) are a useful model for
decision-making under partial observability and stochastic actions. Partially
Observable Monte-Carlo Planning (POMCP) is an online algorithm for deciding on the next
action to perform, using a Monte-Carlo tree search approach, based on the UCT
(UCB applied to trees) algorithm for fully observable Markov-decision
processes. POMCP develops an action-observation tree, and at the leaves, uses a
rollout policy to provide a value estimate for the leaf. As such, POMCP is
highly dependent on the rollout policy to compute good estimates, and hence
identify good actions. Thus, many practitioners who use POMCP are required to
create strong, domain-specific heuristics.
In this paper, we model POMDPs as stochastic contingent planning problems.
This allows us to leverage domain-independent heuristics that were developed in
the planning community. We suggest two heuristics, the first is based on the
well-known h_add heuristic from classical planning, and the second is computed
in belief space, taking the value of information into account.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 18:24:47 GMT"
}
] | 1,696,464,000,000 | [
[
"Blumenthal",
"Oded",
""
],
[
"Shani",
"Guy",
""
]
] |
2310.02360 | Finn Rietz | Finn Rietz, Erik Schaffernicht, Stefan Heinrich, Johannes Andreas
Stork | Prioritized Soft Q-Decomposition for Lexicographic Reinforcement
Learning | Camera ready version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) for complex tasks remains a challenge, primarily
due to the difficulties of engineering scalar reward functions and the inherent
inefficiency of training models from scratch. Instead, it would be better to
specify complex tasks in terms of elementary subtasks and to reuse subtask
solutions whenever possible. In this work, we address continuous space
lexicographic multi-objective RL problems, consisting of prioritized subtasks,
which are notoriously difficult to solve. We show that these can be scalarized
with a subtask transformation and then solved incrementally using value
decomposition. Exploiting this insight, we propose prioritized soft
Q-decomposition (PSQD), a novel algorithm for learning and adapting subtask
solutions under lexicographic priorities in continuous state-action spaces.
PSQD offers the ability to reuse previously learned subtask solutions in a
zero-shot composition, followed by an adaptation step. Its ability to use
retained subtask training data for offline learning eliminates the need for new
environment interaction during adaptation. We demonstrate the efficacy of our
approach by presenting successful learning, reuse, and adaptation results for
both low- and high-dimensional simulated robot control tasks, as well as
offline learning results. In contrast to baseline approaches, PSQD does not
trade off between conflicting subtasks or priority constraints and satisfies
subtask priorities during learning. PSQD provides an intuitive framework for
tackling complex RL problems, offering insights into the inner workings of the
subtask composition.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 18:36:21 GMT"
},
{
"version": "v2",
"created": "Thu, 2 May 2024 10:01:56 GMT"
}
] | 1,714,694,400,000 | [
[
"Rietz",
"Finn",
""
],
[
"Schaffernicht",
"Erik",
""
],
[
"Heinrich",
"Stefan",
""
],
[
"Stork",
"Johannes Andreas",
""
]
] |
2310.02593 | Hongxin Ding | Hongxin Ding, Peinie Zou, Zhiyuan Wang, Junfeng Zhao, Yasha Wang and
Qiang Zhou | A ModelOps-based Framework for Intelligent Medical Knowledge Extraction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Extracting medical knowledge from healthcare texts enhances downstream tasks
like medical knowledge graph construction and clinical decision-making.
However, the construction and application of knowledge extraction models lack
automation, reusability and unified management, leading to inefficiencies for
researchers and high barriers for non-AI experts, such as doctors, to utilize
knowledge extraction. To address these issues, we propose a ModelOps-based
intelligent medical knowledge extraction framework that offers a low-code
system for model selection, training, evaluation and optimization.
Specifically, the framework includes a dataset abstraction mechanism based on
multi-layer callback functions, a reusable model training, monitoring and
management mechanism. We also propose a model recommendation method based on
dataset similarity, which helps users quickly find potentially suitable models
for a given dataset. Our framework provides convenience for researchers to
develop models and simplifies model access for non-AI experts such as doctors.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 05:35:16 GMT"
}
] | 1,696,464,000,000 | [
[
"Ding",
"Hongxin",
""
],
[
"Zou",
"Peinie",
""
],
[
"Wang",
"Zhiyuan",
""
],
[
"Zhao",
"Junfeng",
""
],
[
"Wang",
"Yasha",
""
],
[
"Zhou",
"Qiang",
""
]
] |
2310.02658 | Sebastian Lubos | Benjamin Ritz, Alexander Felfernig, Viet-Man Le, Sebastian Lubos | Solving Multi-Configuration Problems: A Performance Analysis with Choco
Solver | The paper was presented at ConfWS'23: 25th International Workshop on
Configuration, September 6-7, 2023, M\'alaga, Spain and is published in the
conference proceedings: https://ceur-ws.org/Vol-3509/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many scenarios, configurators support the configuration of a solution that
satisfies the preferences of a single user. The concept of
\emph{multi-configuration} is based on the idea of configuring a set of
configurations. Such a functionality is relevant in scenarios such as the
configuration of personalized exams, the configuration of project teams, and
the configuration of different trips for individual members of a tourist group
(e.g., when visiting a specific city). In this paper, we exemplify the
application of multi-configuration for generating individualized exams. We also
provide a constraint solver performance analysis which helps to gain some
insights into corresponding performance issues.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 08:34:32 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Oct 2023 12:51:53 GMT"
}
] | 1,697,760,000,000 | [
[
"Ritz",
"Benjamin",
""
],
[
"Felfernig",
"Alexander",
""
],
[
"Le",
"Viet-Man",
""
],
[
"Lubos",
"Sebastian",
""
]
] |
2310.03131 | Vignesh Viswanathan | Gagan Biradar, Yacine Izza, Elita Lobo, Vignesh Viswanathan, Yair Zick | Axiomatic Aggregations of Abductive Explanations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent criticisms of the robustness of post hoc model approximation
explanation methods (like LIME and SHAP) have led to the rise of model-precise
abductive explanations. For each data point, abductive explanations provide a
minimal subset of features that are sufficient to generate the outcome. While
theoretically sound and rigorous, abductive explanations suffer from a major
issue -- there can be several valid abductive explanations for the same data
point. In such cases, providing a single abductive explanation can be
insufficient; on the other hand, providing all valid abductive explanations can
be incomprehensible due to their size. In this work, we solve this issue by
aggregating the many possible abductive explanations into feature importance
scores. We propose three aggregation methods: two based on power indices from
cooperative game theory and a third based on a well-known measure of causal
strength. We characterize these three methods axiomatically, showing that each
of them uniquely satisfies a set of desirable properties. We also evaluate them
on multiple datasets and show that these explanations are robust to the attacks
that fool SHAP and LIME.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 04:06:10 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Oct 2023 00:42:48 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Oct 2023 17:02:59 GMT"
}
] | 1,697,155,200,000 | [
[
"Biradar",
"Gagan",
""
],
[
"Izza",
"Yacine",
""
],
[
"Lobo",
"Elita",
""
],
[
"Viswanathan",
"Vignesh",
""
],
[
"Zick",
"Yair",
""
]
] |
2310.03188 | Zhe Zhao | Zhe Zhao, Qingyun Liu, Huan Gui, Bang An, Lichan Hong, Ed H. Chi | Talking Models: Distill Pre-trained Knowledge to Downstream Models via
Interactive Communication | 19 pages, 3 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many recent breakthroughs in machine learning have been enabled by the
pre-trained foundation models. By scaling up model parameters, training data,
and computation resources, foundation models have significantly advanced the
state-of-the-art in many applications. However, it is still an open question
how to use these models to perform downstream tasks efficiently. Knowledge
distillation (KD) has been explored to tackle this challenge. KD transfers
knowledge from a large teacher model to a smaller student model. While KD has
been successful in improving student model performance, recent research has
discovered that a powerful teacher does not necessarily lead to a powerful
student, due to their huge capacity gap. In addition, the potential
distribution shifts between the pre-training data and downstream tasks can make
knowledge transfer in KD sub-optimal for improving downstream task performance.
In this paper, we extend KD with an interactive communication process to help
students of downstream tasks learn effectively from pre-trained foundation
models. Our design is inspired by the way humans learn from teachers who can
explain knowledge in a way that meets the students' needs. Specifically, we let
each model (i.e., student and teacher) train two components: (1) an encoder
encoding the model's hidden states to a message and (2) a decoder decoding any
messages to its own hidden states. With encoder and decoder, not only can the
teacher transfer rich information by encoding its hidden states, but also the
student can send messages with information of downstream tasks to the teacher.
Therefore, knowledge passing from teacher to student can be tailored to the
student's capacity and downstream tasks' distributions. We conducted
experiments on benchmark datasets to show that our communication mechanism
outperforms state-of-the-art distillation techniques.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 22:22:21 GMT"
}
] | 1,696,550,400,000 | [
[
"Zhao",
"Zhe",
""
],
[
"Liu",
"Qingyun",
""
],
[
"Gui",
"Huan",
""
],
[
"An",
"Bang",
""
],
[
"Hong",
"Lichan",
""
],
[
"Chi",
"Ed H.",
""
]
] |
2310.03352 | David Huber | David Huber, Yizuo Chen, Alessandro Antonucci, Adnan Darwiche, Marco
Zaffalon | Tractable Bounding of Counterfactual Queries by Knowledge Compilation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We discuss the problem of bounding partially identifiable queries, such as
counterfactuals, in Pearlian structural causal models. A recently proposed
iterated EM scheme yields an inner approximation of those bounds by sampling
the initialisation parameters. Such a method requires multiple (Bayesian
network) queries over models sharing the same structural equations and
topology, but different exogenous probabilities. This setup makes a compilation
of the underlying model to an arithmetic circuit advantageous, thus inducing a
sizeable inferential speed-up. We show how a single symbolic knowledge
compilation allows us to obtain the circuit structure with symbolic parameters
to be replaced by their actual values when computing the different queries. We
also discuss parallelisation techniques to further speed up the bound
computation. Experiments against standard Bayesian network inference show clear
computational advantages with up to an order of magnitude of speed-up.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 07:10:40 GMT"
}
] | 1,696,550,400,000 | [
[
"Huber",
"David",
""
],
[
"Chen",
"Yizuo",
""
],
[
"Antonucci",
"Alessandro",
""
],
[
"Darwiche",
"Adnan",
""
],
[
"Zaffalon",
"Marco",
""
]
] |
2310.03780 | Adish Singla | Tung Phung, Victor-Alexandru P\u{a}durean, Anjali Singh, Christopher
Brooks, Jos\'e Cambronero, Sumit Gulwani, Adish Singla, Gustavo Soares | Automating Human Tutor-Style Programming Feedback: Leveraging GPT-4
Tutor Model for Hint Generation and GPT-3.5 Student Model for Hint Validation | Published in Learning Analytics and Knowledge Conference (LAK) 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative AI and large language models hold great promise in enhancing
programming education by automatically generating individualized feedback for
students. We investigate the role of generative AI models in providing human
tutor-style programming hints to help students resolve errors in their buggy
programs. Recent works have benchmarked state-of-the-art models for various
feedback generation scenarios; however, their overall quality is still inferior
to human tutors and not yet ready for real-world deployment. In this paper, we
seek to push the limits of generative AI models toward providing high-quality
programming hints and develop a novel technique, GPT4Hints-GPT3.5Val. As a
first step, our technique leverages GPT-4 as a ``tutor'' model to generate
hints -- it boosts the generative quality by using symbolic information of
failing test cases and fixes in prompts. As a next step, our technique
leverages GPT-3.5, a weaker model, as a ``student'' model to further validate
the hint quality -- it performs an automatic quality validation by simulating
the potential utility of providing this feedback. We show the efficacy of our
technique via extensive evaluation using three real-world datasets of Python
programs covering a variety of concepts ranging from basic algorithms to
regular expressions and data analysis using the pandas library.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 17:02:59 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Dec 2023 02:34:30 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Dec 2023 13:43:55 GMT"
}
] | 1,703,203,200,000 | [
[
"Phung",
"Tung",
""
],
[
"Pădurean",
"Victor-Alexandru",
""
],
[
"Singh",
"Anjali",
""
],
[
"Brooks",
"Christopher",
""
],
[
"Cambronero",
"José",
""
],
[
"Gulwani",
"Sumit",
""
],
[
"Singla",
"Adish",
""
],
[
"Soares",
"Gustavo",
""
]
] |
2310.04835 | Xuhui Jiang | Xuhui Jiang, Chengjin Xu, Yinghan Shen, Xun Sun, Lumingyuan Tang,
Saizhuo Wang, Zhongwu Chen, Yuanzhuo Wang, Jian Guo | On the Evolution of Knowledge Graphs: A Survey and Perspective | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs (KGs) are structured representations of diversified
knowledge. They are widely used in various intelligent applications. In this
article, we provide a comprehensive survey on the evolution of various types of
knowledge graphs (i.e., static KGs, dynamic KGs, temporal KGs, and event KGs)
and techniques for knowledge extraction and reasoning. Furthermore, we
introduce the practical applications of different types of KGs, including a
case study in financial analysis. Finally, we propose our perspective on the
future directions of knowledge engineering, including the potential of
combining the power of knowledge graphs and large language models (LLMs), and
the evolution of knowledge extraction, reasoning, and representation.
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2023 14:46:51 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Oct 2023 05:15:08 GMT"
}
] | 1,696,982,400,000 | [
[
"Jiang",
"Xuhui",
""
],
[
"Xu",
"Chengjin",
""
],
[
"Shen",
"Yinghan",
""
],
[
"Sun",
"Xun",
""
],
[
"Tang",
"Lumingyuan",
""
],
[
"Wang",
"Saizhuo",
""
],
[
"Chen",
"Zhongwu",
""
],
[
"Wang",
"Yuanzhuo",
""
],
[
"Guo",
"Jian",
""
]
] |
2310.04836 | Luoming Zhang | Luoming Zhang, Wen Fei, Weijia Wu, Yefei He, Zhenyu Lou, Hong Zhou | Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM | 15 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) pose significant hardware challenges related to
memory requirements and computational ability. There are two mainstream
quantization schemes for LLMs: coarse-grained ($\textit{e.g.,}$ channel-wise)
quantization and fine-grained ($\textit{e.g.,}$ group-wise) quantization.
Fine-grained quantization has smaller quantization loss, consequently achieving
superior performance. However, when applied to weight-activation quantization,
it disrupts continuous integer matrix multiplication, leading to inefficient
inference. In this paper, we introduce Dual Grained Quantization (DGQ), a novel
A8W4 quantization for LLM that maintains superior performance while ensuring
fast inference speed. DGQ dequantizes the fine-grained INT4 weights into a
coarse-grained INT8 representation and performs matrix multiplication using INT8
kernels. Besides, we develop a two-phase grid search algorithm to simplify the
determination of fine-grained and coarse-grained quantization scales. We also
devise a percentile clipping schema for smoothing the activation outliers
without the need for complex optimization techniques. Experimental results
demonstrate that DGQ consistently outperforms prior methods across various LLM
architectures and a wide range of tasks. Remarkably, by our implemented
efficient CUTLASS kernel, we achieve $\textbf{1.12}$ $\times$ memory reduction
and $\textbf{3.24}$ $\times$ speed gains compared with the A16W4 implementation. These
advancements enable efficient deployment of A8W4 LLMs for real-world
applications.
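The two-level scale idea can be illustrated with a small NumPy toy: fine-grained groups get 4-bit codes, each group's scale is re-expressed as a small integer multiplier of a coarse per-channel scale, and dequantisation into an INT8-range representation stays integer-only. The group size, bit allocations, and multiplier derivation here are simplifications for illustration, not the paper's exact scheme.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 64)).astype(np.float32)   # toy weight matrix, 4 output channels
GROUP = 16                                        # fine-grained group size (assumption)

# Fine-grained: one scale per 16-weight group, symmetric 4-bit codes in [-7, 7].
Wg = W.reshape(W.shape[0], -1, GROUP)
s_fine = np.abs(Wg).max(axis=-1, keepdims=True) / 7.0
q4 = np.clip(np.round(Wg / s_fine), -7, 7)

# Coarse-grained: one scale per output channel; each fine scale becomes a small
# integer multiplier of it, so the next step needs no per-group floats.
s_coarse = s_fine.max(axis=1, keepdims=True)
m = np.clip(np.round(s_fine / s_coarse * 16), 1, 16)

# Integer-only "dequantisation" of INT4 codes into an INT8-range representation
# (|q4 * m| <= 112), followed by a single per-channel float rescale.
q8 = q4 * m
W_hat = (q8 * s_coarse / 16).reshape(W.shape)

print("max abs reconstruction error:", np.abs(W - W_hat).max())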
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2023 14:50:28 GMT"
}
] | 1,696,896,000,000 | [
[
"Zhang",
"Luoming",
""
],
[
"Fei",
"Wen",
""
],
[
"Wu",
"Weijia",
""
],
[
"He",
"Yefei",
""
],
[
"Lou",
"Zhenyu",
""
],
[
"Zhou",
"Hong",
""
]
] |
2310.04852 | Max Taylor-Davies | Max Taylor-Davies and Christopher G. Lucas | Balancing utility and cognitive cost in social representation | Workshop on Information-Theoretic Principles in Cognitive Systems,
NeurIPS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | To successfully navigate its environment, an agent must construct and
maintain representations of the other agents that it encounters. Such
representations are useful for many tasks, but they are not without cost. As a
result, agents must make decisions regarding how much information they choose
to store about the agents in their environment. Using selective social learning
as an example task, we motivate the problem of finding agent representations
that optimally trade off between downstream utility and information cost, and
illustrate two example approaches to resource-constrained social
representation.
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2023 15:27:01 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Dec 2023 22:19:28 GMT"
}
] | 1,702,252,800,000 | [
[
"Taylor-Davies",
"Max",
""
],
[
"Lucas",
"Christopher G.",
""
]
] |
2310.04918 | Lei You PhD | Lei You and Hei Victor Cheng | SWAP: Sparse Entropic Wasserstein Regression for Robust Network Pruning | Published as a conference paper at ICLR 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study addresses the challenge of inaccurate gradients in computing the
empirical Fisher Information Matrix during neural network pruning. We introduce
SWAP, a formulation of Entropic Wasserstein regression (EWR) for pruning,
capitalizing on the geometric properties of the optimal transport problem. The
``swap'' of the commonly used linear regression with the EWR in optimization is
analytically demonstrated to offer noise mitigation effects by incorporating
neighborhood interpolation across data points with only marginal additional
computational cost. The unique strength of SWAP is its intrinsic ability to
balance noise reduction and covariance information preservation effectively.
Extensive experiments performed on various networks and datasets show
comparable performance of SWAP with state-of-the-art (SoTA) network pruning
algorithms. Our proposed method outperforms the SoTA when the network size or
the target sparsity is large; the gain is even larger in the presence of
noisy gradients, which may arise from noisy data, analog memory, or adversarial
attacks. Notably, our proposed method achieves a gain of 6% improvement in
accuracy and 8% improvement in testing loss for MobileNetV1 with less than
one-fourth of the network parameters remaining.
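Entropic Wasserstein regression builds on entropy-regularised optimal transport, whose core subproblem is typically solved with Sinkhorn iterations; the sketch below shows only that subproblem on two toy point clouds (the cost matrix, regularisation strength, and sizes are arbitrary), not the full SWAP pruning pipeline.

import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iters=200):
    """Entropy-regularised OT between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan
    return P, np.sum(P * C)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))              # two small point clouds
y = rng.normal(loc=1.0, size=(6, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-distance costs
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)          # uniform weights

P, cost = sinkhorn(a, b, C)
print("entropic OT cost:", cost)
print("row marginals ~ a:", P.sum(axis=1))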
| [
{
"version": "v1",
"created": "Sat, 7 Oct 2023 21:15:32 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Nov 2023 14:28:53 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Feb 2024 18:22:41 GMT"
},
{
"version": "v4",
"created": "Tue, 20 Feb 2024 08:29:13 GMT"
}
] | 1,708,473,600,000 | [
[
"You",
"Lei",
""
],
[
"Cheng",
"Hei Victor",
""
]
] |
2310.04963 | Sunita Chandrasekaran | Christian Munley, Aaron Jarmusch and Sunita Chandrasekaran | LLM4VV: Developing LLM-Driven Testsuite for Compiler Validation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) are a new and powerful tool for a wide span of
applications involving natural language and demonstrate impressive code
generation abilities. The goal of this work is to automatically generate tests
and use these tests to validate and verify compiler implementations of a
directive-based parallel programming paradigm, OpenACC. To do so, in this
paper, we explore the capabilities of state-of-the-art LLMs, including
open-source LLMs -- Meta Codellama, Phind fine-tuned version of Codellama,
Deepseek Deepseek Coder and closed-source LLMs -- OpenAI GPT-3.5-Turbo and
GPT-4-Turbo. We further fine-tuned the open-source LLMs and GPT-3.5-Turbo using
our own testsuite dataset along with using the OpenACC specification. We also
explored these LLMs using various prompt engineering techniques that include
code template, template with retrieval-augmented generation (RAG), one-shot
example, one-shot with RAG, expressive prompt with code template and RAG. This
paper highlights our findings from over 5000 tests generated via all the
above-mentioned methods. Our contributions include: (a) exploring the capabilities
of the latest and relevant LLMs for code generation, (b) investigating fine-tuning
and prompt methods, and (c) analyzing the outcomes of LLM-generated tests,
including manual analysis of a representative set of tests. We found that the LLM
Deepseek-Coder-33b-Instruct produced the most passing tests, followed by
GPT-4-Turbo.
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 01:43:39 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Nov 2023 20:53:13 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Mar 2024 21:05:28 GMT"
}
] | 1,710,201,600,000 | [
[
"Munley",
"Christian",
""
],
[
"Jarmusch",
"Aaron",
""
],
[
"Chandrasekaran",
"Sunita",
""
]
] |
2310.04988 | Vipula Rawte | Vipula Rawte, Swagata Chakraborty, Agnibh Pathak, Anubhav Sarkar, S.M
Towhidul Islam Tonmoy, Aman Chadha, Amit P. Sheth, Amitava Das | The Troubling Emergence of Hallucination in Large Language Models -- An
Extensive Definition, Quantification, and Prescriptive Remediations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recent advancements in Large Language Models (LLMs) have garnered
widespread acclaim for their remarkable emerging capabilities. However, the
issue of hallucination has parallelly emerged as a by-product, posing
significant concerns. While some recent endeavors have been made to identify
and mitigate different types of hallucination, there has been a limited
emphasis on the nuanced categorization of hallucination and associated
mitigation methods. To address this gap, we offer a fine-grained discourse on
profiling hallucination based on its degree, orientation, and category, along
with offering strategies for alleviation. As such, we define two overarching
orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining
(SL). To provide a more comprehensive understanding, both orientations are
further sub-categorized into intrinsic and extrinsic, with three degrees of
severity - (i) mild, (ii) moderate, and (iii) alarming. We also meticulously
categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric
nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum,
and (vi) time wrap. Furthermore, we curate HallucInation eLiciTation (HILT), a
publicly available dataset comprising 75,000 samples generated using 15
contemporary LLMs, along with human annotations for the aforementioned
categories. Finally, to establish a method for quantifying hallucination and to
offer a comparative spectrum that allows us to evaluate and rank LLMs based on
their vulnerability to producing hallucinations, we propose the Hallucination
Vulnerability Index (HVI). We firmly believe that HVI holds significant value
as a tool for the wider NLP community, with the potential to serve as a rubric
in AI-related policy-making. In conclusion, we propose two solution strategies
for mitigating hallucinations.
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 03:31:29 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Oct 2023 03:37:34 GMT"
}
] | 1,698,105,600,000 | [
[
"Rawte",
"Vipula",
""
],
[
"Chakraborty",
"Swagata",
""
],
[
"Pathak",
"Agnibh",
""
],
[
"Sarkar",
"Anubhav",
""
],
[
"Tonmoy",
"S. M Towhidul Islam",
""
],
[
"Chadha",
"Aman",
""
],
[
"Sheth",
"Amit P.",
""
],
[
"Das",
"Amitava",
""
]
] |
2310.05015 | Li Lyna Zhang | Song Guo, Jiahang Xu, Li Lyna Zhang, Mao Yang | Compresso: Structured Pruning with Collaborative Prompting Learns
Compact Large Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite the remarkable success of Large Language Models (LLMs), the massive
size poses significant deployment challenges, particularly on
resource-constrained hardware. While existing LLM compression methods focus on
quantization, pruning remains relatively unexplored due to the high cost of
training-based approaches and data collection challenges. One-shot pruning
methods, although cost-effective and data-free, have become dominant in LLM
pruning, but lead to performance decline under the structured pruning setting.
In this work, we introduce a new paradigm for structurally pruning LLMs, called
Compresso. Our approach, through the collaboration of the proposed
resource-efficient pruning algorithm and the LLM itself, learns optimal pruning
decisions during the training process. Compresso addresses the challenges of
expensive training costs and data collection by incorporating Low-Rank
Adaptation (LoRA) into the $L_0$ regularization during the instruction tuning
process. Then, we further augment the pruning algorithm by introducing a
collaborative prompt that fosters collaboration between the LLM and the pruning
algorithm, significantly boosting the overall performance. To this end,
Compresso prunes LLaMA-7B to 5.4B, maintaining original performance and even
surpassing LLaMA-7B in reading comprehension by 2.62%. Extensive experiments
demonstrate that Compresso significantly outperforms one-shot pruning baselines
across various sparsity ratios, achieving up to 2.21%, 11.43%, 7.04%, and 4.81%
higher scores on the commonsense reasoning, reading comprehension, MMLU, and
BBH benchmarks, respectively.
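L0-regularised structured pruning is commonly realised with stochastic hard-concrete gates (Louizos et al., 2018); the sketch below shows such gates over a set of prunable units, which is one plausible reading of the $L_0$ term mentioned above rather than Compresso's actual code, and the gate count and penalty weight are placeholders.

import math
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Differentiable approximately-binary gates with an expected-L0 penalty."""
    def __init__(self, n_gates, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_gates))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def expected_l0(self):
        # Probability that each gate stays non-zero; the sum is the L0 penalty.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()

gates = HardConcreteGate(n_gates=12)         # e.g. 12 attention heads (illustrative)
head_out = torch.randn(4, 12, 64)            # fake [batch, heads, dim] activations
masked = head_out * gates()[None, :, None]   # gated structures can be pruned away
loss = masked.sum() + 1e-2 * gates.expected_l0()
loss.backward()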
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 05:16:28 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Oct 2023 01:46:35 GMT"
}
] | 1,697,068,800,000 | [
[
"Guo",
"Song",
""
],
[
"Xu",
"Jiahang",
""
],
[
"Zhang",
"Li Lyna",
""
],
[
"Yang",
"Mao",
""
]
] |
2310.05086 | Sili Huang | Sili Huang, Yanchao Sun, Jifeng Hu, Siyuan Guo, Hechang Chen, Yi
Chang, Lichao Sun, Bo Yang | Learning Generalizable Agents via Saliency-Guided Features Decorrelation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In visual-based Reinforcement Learning (RL), agents often struggle to
generalize well to environmental variations in the state space that were not
observed during training. The variations can arise in both task-irrelevant
features, such as background noise, and task-relevant features, such as robot
configurations, that are related to the optimal decisions. To achieve
generalization in both situations, agents are required to accurately understand
the impact of changed features on the decisions, i.e., establishing the true
associations between changed features and decisions in the policy model.
However, due to the inherent correlations among features in the state space,
the associations between features and decisions become entangled, making it
difficult for the policy to distinguish them. To this end, we propose
Saliency-Guided Features Decorrelation (SGFD) to eliminate these correlations
through sample reweighting. Concretely, SGFD consists of two core techniques:
Random Fourier Functions (RFF) and the saliency map. RFF is utilized to
estimate the complex non-linear correlations in high-dimensional images, while
the saliency map is designed to identify the changed features. Under the
guidance of the saliency map, SGFD employs sample reweighting to minimize the
estimated correlations related to changed features, thereby achieving
decorrelation in visual RL tasks. Our experimental results demonstrate that
SGFD can generalize well on a wide range of test environments and significantly
outperforms state-of-the-art methods in handling both task-irrelevant
variations and task-relevant variations.
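A compact sketch of the reweighting ingredient, assuming a toy split of the state features into two blocks: random Fourier features approximate their non-linear dependence, and per-sample weights are optimised to shrink the weighted cross-covariance between the blocks. The dimensions, optimiser settings, and synthetic data are placeholders, and the saliency-map guidance is omitted.

import torch

torch.manual_seed(0)
n, d = 256, 8
feat_a = torch.randn(n, d)                                       # e.g. "changed" features
feat_b = feat_a @ torch.randn(d, d) + 0.1 * torch.randn(n, d)    # correlated block

def rff(x, n_features=64, sigma=1.0):
    """Random Fourier features approximating an RBF kernel map."""
    w = torch.randn(x.shape[1], n_features) / sigma
    b = 2 * torch.pi * torch.rand(n_features)
    return torch.cos(x @ w + b) * (2.0 / n_features) ** 0.5

phi_a, phi_b = rff(feat_a), rff(feat_b)

logits = torch.zeros(n, requires_grad=True)      # per-sample weight parameters
opt = torch.optim.Adam([logits], lr=0.05)
for _ in range(200):
    weights = torch.softmax(logits, dim=0)       # weights sum to one
    mu_a = (weights[:, None] * phi_a).sum(0)
    mu_b = (weights[:, None] * phi_b).sum(0)
    # Weighted cross-covariance between the two blocks in RFF space.
    cov = (weights[:, None, None]
           * (phi_a - mu_a)[:, :, None]
           * (phi_b - mu_b)[:, None, :]).sum(0)
    loss = (cov ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

print("residual weighted dependence:", loss.item())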
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 09:24:43 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Dec 2023 09:36:17 GMT"
}
] | 1,703,462,400,000 | [
[
"Huang",
"Sili",
""
],
[
"Sun",
"Yanchao",
""
],
[
"Hu",
"Jifeng",
""
],
[
"Guo",
"Siyuan",
""
],
[
"Chen",
"Hechang",
""
],
[
"Chang",
"Yi",
""
],
[
"Sun",
"Lichao",
""
],
[
"Yang",
"Bo",
""
]
] |
2310.05123 | Zijing Wang | Zi Jing Wang, Ye Zhu, Kai Ming Ting | Distribution-Based Trajectory Clustering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory clustering enables the discovery of common patterns in trajectory
data. Current methods of trajectory clustering rely on a distance measure
between two points in order to measure the dissimilarity between two
trajectories. The distance measures employed have two challenges: high
computational cost and low fidelity. Independent of the distance measure
employed, existing clustering algorithms have another challenge: either
effectiveness issues or high time complexity. In this paper, we propose to use
a recent Isolation Distributional Kernel (IDK) as the main tool to meet all
three challenges. The new IDK-based clustering algorithm, called TIDKC, makes
full use of the distributional kernel for trajectory similarity measuring and
clustering. TIDKC identifies non-linearly separable clusters with irregular
shapes and varied densities in linear time. It does not rely on random
initialisation and is robust to outliers. An extensive evaluation on 7 large
real-world trajectory datasets confirms that IDK is more effective in capturing
complex structures in trajectories than traditional and deep learning-based
distance measures. Furthermore, the proposed TIDKC has superior clustering
performance and efficiency to existing trajectory clustering algorithms.
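A heavily simplified stand-in for the distributional-kernel view of a trajectory: each trajectory is mapped to its occupancy histogram over data-dependent cells (Voronoi cells of a random subsample, echoing isolation-based partitioning), and similarity is the dot product of histograms averaged over several random partitions. This only illustrates treating a trajectory as a distribution of points; it is not the actual Isolation Distributional Kernel, and psi and t are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def idk_like_similarity(traj_a, traj_b, pool, psi=8, t=50):
    """Average, over t random partitions of `pool`, of the dot product between
    the two trajectories' cell-occupancy histograms."""
    sim = 0.0
    for _ in range(t):
        centres = pool[rng.choice(len(pool), size=psi, replace=False)]
        def hist(traj):
            d = ((traj[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
            cells = d.argmin(axis=1)                  # nearest-centre cell index
            return np.bincount(cells, minlength=psi) / len(traj)
        sim += hist(traj_a) @ hist(traj_b)
    return sim / t

# Three toy trajectories (sequences of 2-D points); A and B overlap, C is far away.
A = rng.normal(loc=0.0, size=(100, 2))
B = rng.normal(loc=0.2, size=(120, 2))
C = rng.normal(loc=5.0, size=(80, 2))
pool = np.vstack([A, B, C])                           # all points, used to draw cells

print("sim(A, B):", idk_like_similarity(A, B, pool))
print("sim(A, C):", idk_like_similarity(A, C, pool))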
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 11:28:34 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Oct 2023 07:26:44 GMT"
}
] | 1,698,710,400,000 | [
[
"Wang",
"Zi Jing",
""
],
[
"Zhu",
"Ye",
""
],
[
"Ting",
"Kai Ming",
""
]
] |
2310.05129 | Jiajun He | Jiajun He, Zekun Yang, Tomoki Toda | ed-cec: improving rare word recognition using asr postprocessing based
on error detection and context-aware error correction | 6 pages, 5 figures, conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic speech recognition (ASR) systems often encounter difficulties in
accurately recognizing rare words, leading to errors that can have a negative
impact on downstream tasks such as keyword spotting, intent detection, and text
summarization. To address this challenge, we present a novel ASR postprocessing
method that focuses on improving the recognition of rare words through error
detection and context-aware error correction. Our method optimizes the decoding
process by targeting only the predicted error positions, minimizing unnecessary
computations. Moreover, we leverage a rare word list to provide additional
contextual knowledge, enabling the model to better correct rare words.
Experimental results across five datasets demonstrate that our proposed method
achieves significantly lower word error rates (WERs) than previous approaches
while maintaining a reasonable inference speed. Furthermore, our approach
exhibits promising robustness across different ASR systems.
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 11:40:30 GMT"
}
] | 1,696,896,000,000 | [
[
"He",
"Jiajun",
""
],
[
"Yang",
"Zekun",
""
],
[
"Toda",
"Tomoki",
""
]
] |
2310.05167 | Paul Mattes | Paul Mattes, Rainer Schlosser, Ralf Herbrich | Hieros: Hierarchical Imagination on Structured State Space Sequence
World Models | Submitted to ICML 2024, 23 pages, 11 figures, code available at:
https://github.com/Snagnar/Hieros | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the biggest challenges to modern deep reinforcement learning (DRL)
algorithms is sample efficiency. Many approaches learn a world model in order
to train an agent entirely in imagination, eliminating the need for direct
environment interaction during training. However, these methods often suffer
from either a lack of imagination accuracy, exploration capabilities, or
runtime efficiency. We propose Hieros, a hierarchical policy that learns time
abstracted world representations and imagines trajectories at multiple time
scales in latent space. Hieros uses an S5 layer-based world model, which
predicts next world states in parallel during training and iteratively during
environment interaction. Due to the special properties of S5 layers, our method
can train in parallel and predict next world states iteratively during
imagination. This allows for more efficient training than RNN-based world
models and more efficient imagination than Transformer-based world models.
We show that our approach outperforms the state of the art in terms of mean
and median normalized human score on the Atari 100k benchmark, and that our
proposed world model is able to predict complex dynamics very accurately. We
also show that Hieros displays superior exploration capabilities compared to
existing approaches.
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 13:52:40 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Oct 2023 08:18:30 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Feb 2024 13:42:53 GMT"
}
] | 1,708,387,200,000 | [
[
"Mattes",
"Paul",
""
],
[
"Schlosser",
"Rainer",
""
],
[
"Herbrich",
"Ralf",
""
]
] |
2310.05186 | Yan Zhang | Yan Zhang, Hao Hao, Xiao He, Shuanhu Gao, Aimin Zhou | Evolutionary Retrosynthetic Route Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular retrosynthesis is a significant and complex problem in the field of
chemistry; however, traditional manual synthesis methods not only require
well-trained experts but are also time-consuming. With the development of big
data and machine learning, artificial intelligence (AI) based retrosynthesis is
attracting more attention and is becoming a valuable tool for molecular
retrosynthesis. At present, Monte Carlo tree search is a mainstream search
framework employed to address this problem. Nevertheless, its search efficiency
is compromised by its large search space. Therefore, we propose a novel
approach for retrosynthetic route planning based on evolutionary optimization,
marking the first use of Evolutionary Algorithm (EA) in the field of multi-step
retrosynthesis. The proposed method involves modeling the retrosynthetic
problem as an optimization problem and defining the search space and operators.
Additionally, to improve the search efficiency, a parallel strategy is
implemented. The new approach is applied to four case products, and is compared
with Monte Carlo tree search. The experimental results show that, in comparison
to the Monte Carlo tree search algorithm, EA significantly reduces the number
of calls to the single-step model by an average of 53.9%. The time required to
find three solutions decreased by an average of 83.9%, and the number of
feasible search routes increased fivefold.
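The overall loop being described is a standard evolutionary search; a schematic skeleton is sketched below with a toy integer encoding (each gene selecting one candidate reaction at a step) and a placeholder fitness, since the real encoding, operators, and single-step model calls are domain-specific and not reproduced here.

import random

random.seed(0)
GENES, CHOICES, POP, GENS = 6, 5, 20, 40   # toy sizes (illustrative)

def fitness(ind):
    # Placeholder: in the real setting this would call the single-step
    # retrosynthesis model and score the resulting route.
    return -sum((g - 2) ** 2 for g in ind)

def mutate(ind):
    child = list(ind)
    child[random.randrange(GENES)] = random.randrange(CHOICES)
    return child

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.randrange(CHOICES) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    offspring = [mutate(crossover(*random.sample(pop, 2))) for _ in range(POP)]
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP]   # (mu + lambda)

print("best route encoding:", pop[0], "fitness:", fitness(pop[0]))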
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 14:47:41 GMT"
}
] | 1,696,896,000,000 | [
[
"Zhang",
"Yan",
""
],
[
"Hao",
"Hao",
""
],
[
"He",
"Xiao",
""
],
[
"Gao",
"Shuanhu",
""
],
[
"Zhou",
"Aimin",
""
]
] |
2310.05410 | Trang Nguyen | Trang Nguyen, Naoaki Okazaki | Causal Reasoning through Two Layers of Cognition for Improving
Generalization in Visual Question Answering | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generalization in Visual Question Answering (VQA) requires models to answer
questions about images with contexts beyond the training distribution. Existing
attempts primarily refine unimodal aspects, overlooking enhancements in
multimodal aspects. Besides, diverse interpretations of the input lead to
various modes of answer generation, highlighting the role of causal reasoning
between interpreting and answering steps in VQA. Through this lens, we propose
Cognitive pathways VQA (CopVQA), which improves the multimodal predictions by
emphasizing causal reasoning factors. CopVQA first operates a pool of pathways
that capture diverse causal reasoning flows through interpreting and answering
stages. Mirroring human cognition, we decompose the responsibility of each
stage into distinct experts and a cognition-enabled component (CC). The two CCs
strategically execute one expert for each stage at a time. Finally, we
prioritize answer predictions governed by pathways involving both CCs while
disregarding answers produced by either CC, thereby emphasizing causal
reasoning and supporting generalization. Our experiments on real-life and
medical data consistently verify that CopVQA improves VQA performance and
generalization across baselines and domains. Notably, CopVQA achieves a new
state-of-the-art (SOTA) on PathVQA dataset and comparable accuracy to the
current SOTA on VQA-CPv2, VQAv2, and VQA RAD, with one-fourth of the model
size.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 05:07:58 GMT"
}
] | 1,696,896,000,000 | [
[
"Nguyen",
"Trang",
""
],
[
"Okazaki",
"Naoaki",
""
]
] |
2310.05499 | Yizhen Zheng | Shirui Pan, Yizhen Zheng, Yixin Liu | Integrating Graphs with Large Language Models: Methods and Prospects | null | IEEE Intelligent System (2023) | 10.1109/MIS.2023.3332242 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) such as GPT-4 have emerged as frontrunners,
showcasing unparalleled prowess in diverse applications, including answering
queries, code generation, and more. In parallel, graph-structured data, an
intrinsic data type, is pervasive in real-world scenarios. Merging the
capabilities of LLMs with graph-structured data has been a topic of keen
interest. This paper bifurcates such integrations into two predominant
categories. The first leverages LLMs for graph learning, where LLMs can not
only augment existing graph algorithms but also stand as prediction models for
various graph tasks. Conversely, the second category underscores the pivotal
role of graphs in advancing LLMs. Mirroring human cognition, we solve complex
tasks by adopting graphs in either reasoning or collaboration. Integrating with
such structures can significantly boost the performance of LLMs in various
complicated tasks. We also discuss and propose open questions for integrating
LLMs with graph-structured data for the future direction of the field.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 07:59:34 GMT"
}
] | 1,699,920,000,000 | [
[
"Pan",
"Shirui",
""
],
[
"Zheng",
"Yizhen",
""
],
[
"Liu",
"Yixin",
""
]
] |
2310.05563 | Yuwei Wang | Yuwei Wang, Enmeng Lu, Zizhe Ruan, Yao Liang, Yi Zeng | STREAM: Social data and knowledge collective intelligence platform for
TRaining Ethical AI Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents Social data and knowledge collective intelligence
platform for TRaining Ethical AI Models (STREAM) to address the challenge of
aligning AI models with human moral values, and to provide ethics datasets and
knowledge bases to help AI models "follow good advice as naturally as a
stream follows its course". By creating a comprehensive and representative
platform that accurately mirrors the moral judgments of diverse groups
including humans and AIs, we hope to effectively portray cultural and group
variations, and capture the dynamic evolution of moral judgments over time,
which in turn will facilitate the Establishment, Evaluation, Embedding,
Embodiment, Ensemble, and Evolvement (6Es) of the moral capabilities of AI
models. Currently, STREAM has already furnished a comprehensive collection of
ethical scenarios, and amassed substantial moral judgment data annotated by
volunteers and various popular Large Language Models (LLMs), collectively
portraying the moral preferences and performances of both humans and AIs across
a range of moral contexts. This paper will outline the current structure and
construction of STREAM, explore its potential applications, and discuss its
future prospects.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 09:40:11 GMT"
}
] | 1,696,896,000,000 | [
[
"Wang",
"Yuwei",
""
],
[
"Lu",
"Enmeng",
""
],
[
"Ruan",
"Zizhe",
""
],
[
"Liang",
"Yao",
""
],
[
"Zeng",
"Yi",
""
]
] |
2310.05680 | Procheta Sen | Oscar Tuvey, Procheta Sen | Automated Argument Generation from Legal Facts | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The count of pending cases has shown an exponential rise across nations
(e.g., with more than 10 million pending cases in India alone). The main issue
lies in the fact that the number of cases submitted to the law system is far
greater than the available number of legal professionals present in a country.
Given this worldwide context, the utilization of AI technology has gained
paramount importance to enhance the efficiency and speed of legal procedures.
In this study, we particularly focus on helping legal professionals in the
process of analyzing a legal case. Our specific investigation delves into
harnessing the generative capabilities of open-sourced large language models to
create arguments derived from the facts present in legal cases. Experimental
results show that the generated arguments from the best-performing method have,
on average, a 63% overlap with the gold-standard annotations in the benchmark set.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 12:49:35 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Oct 2023 21:31:18 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Oct 2023 04:47:45 GMT"
}
] | 1,697,155,200,000 | [
[
"Tuvey",
"Oscar",
""
],
[
"Sen",
"Procheta",
""
]
] |
2310.05690 | Christopher Healey | Sengjie Liu, Christopher G. Healey | Abstractive Summarization of Large Document Collections Using GPT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a method of abstractive summarization designed to scale
to document collections instead of individual documents. Our approach applies a
combination of semantic clustering, document size reduction within topic
clusters, semantic chunking of a cluster's documents, GPT-based summarization
and concatenation, and a combined sentiment and text visualization of each
topic to support exploratory data analysis. Statistical comparison of our
results to existing state-of-the-art systems BART, BRIO, PEGASUS, and MoCa
using ROUGE summary scores showed statistically equivalent performance with
BART and PEGASUS on the CNN/Daily Mail test dataset, and with BART on the
Gigaword test dataset. This finding is promising since we view document
collection summarization as more challenging than individual document
summarization. We conclude with a discussion of how issues of scale are addressed.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 13:06:21 GMT"
}
] | 1,696,896,000,000 | [
[
"Liu",
"Sengjie",
""
],
[
"Healey",
"Christopher G.",
""
]
] |
2310.05692 | Cheng Kang | Cheng Kang and Xujing Yao | Based on What We Can Control Artificial Neural Networks | 23 pages, | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | How can the stability and efficiency of Artificial Neural Networks (ANNs) be
ensured through a systematic analysis method? This paper seeks to address that
query. While numerous factors can influence the learning process of ANNs,
utilizing knowledge from control systems allows us to analyze its system
function and simulate system responses. Although the complexity of most ANNs is
extremely high, we can still analyze each factor (e.g., optimiser,
hyperparameters) by simulating its system response. This new method can also
potentially benefit the development of new optimisers and learning systems,
especially when discerning which components adversely affect ANNs. Controlling
ANNs can benefit from the design of the optimiser and the learning system, as (1) all
optimisers act as controllers, (2) all learning systems operate as control
systems with inputs and outputs, and (3) the optimiser should match the
learning system. Code is available at:
\url{https://github.com/RandomUserName2023/Control-ANNs}.
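One way to read the "simulate system responses" idea, as a minimal sketch: treat the optimiser as a discrete-time controller and inspect the step response of the parameter it drives. Below, heavy-ball SGD is simulated on a one-dimensional quadratic, a standard textbook reduction chosen here purely to illustrate the analysis style; the learning rates and momenta are arbitrary test settings.

import numpy as np

def step_response(lr, momentum, curvature=1.0, target=1.0, steps=60):
    """Simulate heavy-ball SGD driving theta toward the minimiser of
    0.5 * curvature * (theta - target)^2, i.e. a unit step input."""
    theta, velocity, trace = 0.0, 0.0, []
    for _ in range(steps):
        grad = curvature * (theta - target)
        velocity = momentum * velocity - lr * grad
        theta += velocity
        trace.append(theta)
    return np.array(trace)

for lr, mom in [(0.1, 0.0), (0.1, 0.9), (1.9, 0.9)]:
    trace = step_response(lr, mom)
    overshoot = max(trace.max() - 1.0, 0.0)
    settled = abs(trace[-1] - 1.0) < 1e-2
    print(f"lr={lr}, momentum={mom}: overshoot={overshoot:.3f}, settled={settled}")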
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 13:09:38 GMT"
}
] | 1,696,896,000,000 | [
[
"Kang",
"Cheng",
""
],
[
"Yao",
"Xujing",
""
]
] |
2310.05751 | Esther Taiwo | Esther Taiwo, Ahmed Akinsola, Edward Tella, Kolade Makinde, Mayowa
Akinwande | A Review of the Ethics of Artificial Intelligence and its Applications
in the United States | International Journal on Cybernetics & Informatics (IJCI) Vol.12,
No.6, December 2023 | Volume 12, Number 6 - International Conference on Computer Science
and Information Technology Advances (CCSITA 2023) | 10.5121/ijci.2023.1206010 | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | This study is focused on the ethics of Artificial Intelligence and its
application in the United States. The paper highlights the impact AI has on
every sector of the US economy and multiple facets of the technological space
and the resultant effect on entities spanning businesses, government, academia,
and civil society. There is a need for ethical considerations as these entities
are beginning to depend on AI for delivering various crucial tasks, which
immensely influence their operations, decision-making, and interactions with
each other. The adoption of ethical principles, guidelines, and standards of
work is therefore required throughout the entire process of AI development,
deployment, and usage to ensure responsible and ethical AI practices. Our
discussion explores eleven fundamental 'ethical principles' structured as
overarching themes. These encompass Transparency, Justice, Fairness, Equity,
Non-Maleficence, Responsibility, Accountability, Privacy, Beneficence,
Freedom, Autonomy, Trust, Dignity, Sustainability, and Solidarity. These
principles collectively serve as a guiding framework, directing the ethical
path for the responsible development, deployment, and utilization of artificial
intelligence (AI) technologies across diverse sectors and entities within the
United States. The paper also discusses the revolutionary impact of AI
applications, such as Machine Learning, and explores various approaches used to
implement AI ethics. This examination is crucial to address the growing
concerns surrounding the inherent risks associated with the widespread use of
artificial intelligence.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 14:29:00 GMT"
}
] | 1,696,896,000,000 | [
[
"Taiwo",
"Esther",
""
],
[
"Akinsola",
"Ahmed",
""
],
[
"Tella",
"Edward",
""
],
[
"Makinde",
"Kolade",
""
],
[
"Akinwande",
"Mayowa",
""
]
] |
2310.05753 | Zheli Xiong | Zheli Xiong, Defu Lian, Enhong Chen, Gang Chen and Xiaomin Cheng | Large-Scale OD Matrix Estimation with A Deep Learning Method | 12 pages,25 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The estimation of origin-destination (OD) matrices is a crucial aspect of
Intelligent Transport Systems (ITS). It involves adjusting an initial OD matrix
by regressing the current observations like traffic counts of road sections
(e.g., using least squares). However, the OD estimation problem lacks
sufficient constraints and is mathematically underdetermined. To alleviate this
problem, some researchers incorporate a prior OD matrix as a target in the
regression to provide more structural constraints. However, this approach is
highly dependent on the existing prior matrix, which may be outdated. Others
add structural constraints through sensor data, such as vehicle trajectory and
speed, which can reflect more current structural constraints in real-time. Our
proposed method integrates deep learning and numerical optimization algorithms
to infer matrix structure and guide numerical optimization. This approach
combines the advantages of both deep learning and numerical optimization
algorithms. The neural network (NN) learns to infer structural constraints from
probe traffic flows, eliminating dependence on prior information and providing
real-time performance. Additionally, due to the generalization capability of
NN, this method is economical in engineering. We conducted tests to demonstrate
the good generalization performance of our method on a large-scale synthetic
dataset. Subsequently, we verified the stability of our method on real traffic
data. Our experiments provided confirmation of the benefits of combining NN and
numerical optimization.
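A minimal version of the underlying estimation problem, assuming synthetic sizes and a random assignment matrix: recover non-negative OD flows q from link counts c = A q, with a quadratic penalty pulling q toward a structural prior (which in the approach above would come from the neural network rather than an outdated prior matrix).

import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_links, n_od = 12, 30
A = (rng.random((n_links, n_od)) < 0.3).astype(float)   # which OD pairs use which links
q_true = rng.gamma(2.0, 50.0, size=n_od)                 # ground-truth OD flows
c = A @ q_true + rng.normal(0, 5.0, n_links)             # observed link counts (noisy)

q_prior = q_true * rng.uniform(0.6, 1.4, n_od)           # imperfect structural prior
lam = 0.5                                                # prior strength (tuning choice)

# Stack the count-matching and prior-matching terms into one least-squares problem:
#   minimise ||A q - c||^2 + lam * ||q - q_prior||^2   subject to q >= 0
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n_od)])
b_aug = np.concatenate([c, np.sqrt(lam) * q_prior])
q_hat = lsq_linear(A_aug, b_aug, bounds=(0, np.inf)).x

print("relative error:", np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true))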
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 14:30:06 GMT"
}
] | 1,696,896,000,000 | [
[
"Xiong",
"Zheli",
""
],
[
"Lian",
"Defu",
""
],
[
"Chen",
"Enhong",
""
],
[
"Chen",
"Gang",
""
],
[
"Cheng",
"Xiaomin",
""
]
] |
2310.05876 | Fazl Barez | Kayla Matteucci, Shahar Avin, Fazl Barez, Se\'an \'O h\'Eigeartaigh | AI Systems of Concern | 9 pages, 1 figure, 2 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Concerns around future dangers from advanced AI often centre on systems
hypothesised to have intrinsic characteristics such as agent-like behaviour,
strategic awareness, and long-range planning. We label this cluster of
characteristics as "Property X". Most present AI systems are low in "Property
X"; however, in the absence of deliberate steering, current research directions
may rapidly lead to the emergence of highly capable AI systems that are also
high in "Property X". We argue that "Property X" characteristics are
intrinsically dangerous, and when combined with greater capabilities will
result in AI systems for which safety and control is difficult to guarantee.
Drawing on several scholars' alternative frameworks for possible AI research
trajectories, we argue that most of the proposed benefits of advanced AI can be
obtained by systems designed to minimise this property. We then propose
indicators and governance interventions to identify and limit the development
of systems with risky "Property X" characteristics.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 17:15:22 GMT"
}
] | 1,696,896,000,000 | [
[
"Matteucci",
"Kayla",
""
],
[
"Avin",
"Shahar",
""
],
[
"Barez",
"Fazl",
""
],
[
"hÉigeartaigh",
"Seán Ó",
""
]
] |
2310.05993 | Adrian Groza | Adrian Groza | Measuring reasoning capabilities of ChatGPT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | I shall quantify the logical faults generated by ChatGPT when applied to
reasoning tasks. For experiments, I use the 144 puzzles from the library
\url{https://users.utcluj.ro/~agroza/puzzles/maloga}~\cite{groza:fol}. The
library contains puzzles of various types, including arithmetic puzzles,
logical equations, Sudoku-like puzzles, zebra-like puzzles, truth-telling
puzzles, grid puzzles, strange numbers, or self-reference puzzles. The correct
solutions for these puzzles were checked using the theorem prover
Prover9~\cite{mccune2005release} and the finite models finder
Mace4~\cite{mccune2003mace4} based on human-modelling in Equational First Order
Logic. A first output of this study is the benchmark of 100 logical puzzles.
For this dataset, ChatGPT provided both a correct answer and a correct
justification for only 7\% of the puzzles, while BARD did so for only 5\%. Since
the dataset seems challenging, researchers are invited to test it on more
advanced or tuned models
than ChatGPT3.5 with more crafted prompts. A second output is the
classification of reasoning faults conveyed by ChatGPT. This classification
forms a basis for a taxonomy of reasoning faults generated by large language
models. I have identified 67 such logical faults, among which: inconsistencies,
implication does not hold, unsupported claim, lack of commonsense, wrong
justification. The 100 solutions generated by ChatGPT contain 698 logical
faults. That is, on average, 7 fallacies for each reasoning task. A third output
is the annotated answers of the ChatGPT with the corresponding logical faults.
Each wrong statement within the ChatGPT answer was manually annotated, aiming
to quantify the amount of faulty text generated by the language model. On
average, 26.03\% from the generated text was a logical fault.
| [
{
"version": "v1",
"created": "Sun, 8 Oct 2023 20:18:50 GMT"
}
] | 1,696,982,400,000 | [
[
"Groza",
"Adrian",
""
]
] |
2310.06089 | Ching Fang | Ching Fang, Kimberly L Stachenfeld | Predictive auxiliary objectives in deep RL mimic learning in the brain | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The ability to predict upcoming events has been hypothesized to comprise a
key aspect of natural and machine cognition. This is supported by trends in
deep reinforcement learning (RL), where self-supervised auxiliary objectives
such as prediction are widely used to support representation learning and
improve task performance. Here, we study the effects predictive auxiliary
objectives have on representation learning across different modules of an RL
system and how these mimic representational changes observed in the brain. We
find that predictive objectives improve and stabilize learning particularly in
resource-limited architectures, and we identify settings where longer
predictive horizons better support representational transfer. Furthermore, we
find that representational changes in this RL system bear a striking
resemblance to changes in neural activity observed in the brain across various
experiments. Specifically, we draw a connection between the auxiliary
predictive model of the RL system and hippocampus, an area thought to learn a
predictive model to support memory-guided behavior. We also connect the encoder
network and the value learning network of the RL system to visual cortex and
striatum in the brain, respectively. This work demonstrates how representation
learning in deep RL systems can provide an interpretable framework for modeling
multi-region interactions in the brain. The deep RL perspective taken here also
suggests an additional role of the hippocampus in the brain -- that of an
auxiliary learning system that benefits representation learning in other
regions.
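The general pattern discussed above, a value head sharing an encoder with a self-supervised next-state prediction head, can be sketched as follows; the architecture, auxiliary weight, and synthetic transitions are placeholders rather than any specific agent from the paper.

import torch
import torch.nn as nn

obs_dim, act_dim, latent = 16, 4, 32

encoder = nn.Sequential(nn.Linear(obs_dim, latent), nn.ReLU())
value_head = nn.Linear(latent, 1)
# Auxiliary predictive model: predicts the next latent state from (latent, action).
predictor = nn.Linear(latent + act_dim, latent)

params = (list(encoder.parameters()) + list(value_head.parameters())
          + list(predictor.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

# One synthetic batch of transitions (obs, one-hot action, next_obs, return target).
obs = torch.randn(64, obs_dim)
act = nn.functional.one_hot(torch.randint(0, act_dim, (64,)), act_dim).float()
next_obs = torch.randn(64, obs_dim)
returns = torch.randn(64, 1)

z, z_next = encoder(obs), encoder(next_obs)
value_loss = nn.functional.mse_loss(value_head(z), returns)
pred_loss = nn.functional.mse_loss(predictor(torch.cat([z, act], dim=-1)),
                                   z_next.detach())
loss = value_loss + 1.0 * pred_loss          # auxiliary weight is a tuning choice
opt.zero_grad(); loss.backward(); opt.step()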
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 19:06:25 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Dec 2023 18:44:31 GMT"
}
] | 1,702,252,800,000 | [
[
"Fang",
"Ching",
""
],
[
"Stachenfeld",
"Kimberly L",
""
]
] |
2310.06114 | Mengjiao Yang | Mengjiao Yang, Yilun Du, Kamyar Ghasemipour, Jonathan Tompson, Leslie
Kaelbling, Dale Schuurmans, Pieter Abbeel | Learning Interactive Real-World Simulators | https://universal-simulator.github.io | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative models trained on internet data have revolutionized how text,
image, and video content can be created. Perhaps the next milestone for
generative models is to simulate realistic experience in response to actions
taken by humans, robots, and other interactive agents. Applications of a
real-world simulator range from controllable content creation in games and
movies, to training embodied agents purely in simulation that can be directly
deployed in the real world. We explore the possibility of learning a universal
simulator of real-world interaction through generative modeling. We first make
the important observation that natural datasets available for learning a
real-world simulator are often rich along different dimensions (e.g., abundant
objects in image data, densely sampled actions in robotics data, and diverse
movements in navigation data). With careful orchestration of diverse datasets,
each providing a different aspect of the overall experience, we can simulate
the visual outcome of both high-level instructions such as ``open the drawer''
and low-level controls such as "move by x, y" from otherwise static scenes and
objects. We use the simulator to train both high-level vision-language policies
and low-level reinforcement learning policies, each of which can be deployed in
the real world in zero shot after training purely in simulation. We also show
that other types of intelligence such as video captioning models can benefit
from training with simulated experience, opening up even wider applications.
Video demos can be found at https://universal-simulator.github.io.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 19:42:22 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Jan 2024 00:42:24 GMT"
}
] | 1,705,449,600,000 | [
[
"Yang",
"Mengjiao",
""
],
[
"Du",
"Yilun",
""
],
[
"Ghasemipour",
"Kamyar",
""
],
[
"Tompson",
"Jonathan",
""
],
[
"Kaelbling",
"Leslie",
""
],
[
"Schuurmans",
"Dale",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
2310.06116 | Ali AhmadiTeshnizi | Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell | OptiMUS: Optimization Modeling Using MIP Solvers and large language
models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Optimization problems are pervasive across various sectors, from
manufacturing and distribution to healthcare. However, most such problems are
still solved heuristically by hand rather than optimally by state-of-the-art
solvers, as the expertise required to formulate and solve these problems limits
the widespread adoption of optimization tools and techniques. We introduce
OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and
solve MILP problems from their natural language descriptions. OptiMUS is
capable of developing mathematical models, writing and debugging solver code,
developing tests, and checking the validity of generated solutions. To
benchmark our agent, we present NLP4LP, a novel dataset of linear programming
(LP) and mixed integer linear programming (MILP) problems. Our experiments
demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM
prompting strategy. OptiMUS code and NLP4LP dataset are available at
\href{https://github.com/teshnizi/OptiMUS}{https://github.com/teshnizi/OptiMUS}
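For context, the artifact such an agent ultimately has to produce is a small solver script; below is a toy production-planning MILP written with the PuLP modelling library, whose variables and data are invented for illustration and unrelated to the NLP4LP instances.

from pulp import LpProblem, LpMaximize, LpVariable, value

# Toy problem: choose integer production quantities of two products to
# maximise profit subject to machine-hour and material limits.
prob = LpProblem("toy_production_plan", LpMaximize)
x = LpVariable("units_A", lowBound=0, cat="Integer")
y = LpVariable("units_B", lowBound=0, cat="Integer")

prob += 30 * x + 45 * y                  # objective: total profit
prob += 2 * x + 4 * y <= 40              # machine hours available
prob += 3 * x + 1 * y <= 30              # raw material available

prob.solve()
print("status:", prob.status)
print("plan:", x.value(), y.value(), "profit:", value(prob.objective))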
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 19:47:03 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Oct 2023 18:23:45 GMT"
}
] | 1,698,796,800,000 | [
[
"AhmadiTeshnizi",
"Ali",
""
],
[
"Gao",
"Wenzhi",
""
],
[
"Udell",
"Madeleine",
""
]
] |
2310.06167 | Lexin Zhou | Lexin Zhou, Pablo A. Moreno-Casares, Fernando Mart\'inez-Plumed, John
Burden, Ryan Burnell, Lucy Cheke, C\`esar Ferri, Alexandru Marcoci, Behzad
Mehrbakhsh, Yael Moros-Daval, Se\'an \'O h\'Eigeartaigh, Danaja Rutar, Wout
Schellaert, Konstantinos Voudouris, Jos\'e Hern\'andez-Orallo | Predictable Artificial Intelligence | 11 pages excluding references, 4 figures, and 2 tables. Paper Under
Review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce the fundamental ideas and challenges of Predictable AI, a
nascent research area that explores the ways in which we can anticipate key
indicators of present and future AI ecosystems. We argue that achieving
predictability is crucial for fostering trust, liability, control, alignment
and safety of AI ecosystems, and thus should be prioritised over performance.
While distinctive from other areas of technical and non-technical AI research,
the questions, hypotheses and challenges relevant to Predictable AI were yet to
be clearly described. This paper aims to elucidate them, calls for identifying
paths towards AI predictability and outlines the potential impact of this
emergent field.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 21:36:21 GMT"
}
] | 1,696,982,400,000 | [
[
"Zhou",
"Lexin",
""
],
[
"Moreno-Casares",
"Pablo A.",
""
],
[
"Martínez-Plumed",
"Fernando",
""
],
[
"Burden",
"John",
""
],
[
"Burnell",
"Ryan",
""
],
[
"Cheke",
"Lucy",
""
],
[
"Ferri",
"Cèsar",
""
],
[
"Marcoci",
"Alexandru",
""
],
[
"Mehrbakhsh",
"Behzad",
""
],
[
"Moros-Daval",
"Yael",
""
],
[
"hÉigeartaigh",
"Seán Ó",
""
],
[
"Rutar",
"Danaja",
""
],
[
"Schellaert",
"Wout",
""
],
[
"Voudouris",
"Konstantinos",
""
],
[
"Hernández-Orallo",
"José",
""
]
] |
2310.06176 | Yinlam Chow | Jihwan Jeong, Yinlam Chow, Guy Tennenholtz, Chih-Wei Hsu, Azamat
Tulepbergenov, Mohammad Ghavamzadeh, Craig Boutilier | Factual and Personalized Recommendations using Language Models and
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems (RSs) play a central role in connecting users to content,
products, and services, matching candidate items to users based on their
preferences. While traditional RSs rely on implicit user feedback signals,
conversational RSs interact with users in natural language. In this work, we
develop a comPelling, Precise, Personalized, Preference-relevant language model
(P4LM) that recommends items to users while putting emphasis on explaining item
characteristics and their relevance. P4LM uses the embedding space
representation of a user's preferences to generate compelling responses that
are factually-grounded and relevant w.r.t. the user's preferences. Moreover, we
develop a joint reward function that measures precision, appeal, and
personalization, which we use as AI-based feedback in a reinforcement
learning-based language model framework. Using the MovieLens 25M dataset, we
demonstrate that P4LM delivers compelling, personalized movie narratives to
users.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2023 21:58:55 GMT"
}
] | 1,696,982,400,000 | [
[
"Jeong",
"Jihwan",
""
],
[
"Chow",
"Yinlam",
""
],
[
"Tennenholtz",
"Guy",
""
],
[
"Hsu",
"Chih-Wei",
""
],
[
"Tulepbergenov",
"Azamat",
""
],
[
"Ghavamzadeh",
"Mohammad",
""
],
[
"Boutilier",
"Craig",
""
]
] |
2310.06326 | Yusheng Huang | Yusheng Huang, Zhouhan Lin | I2SRM: Intra- and Inter-Sample Relationship Modeling for Multimodal
Information Extraction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal information extraction is attracting research attention nowadays,
which requires aggregating representations from different modalities. In this
paper, we present the Intra- and Inter-Sample Relationship Modeling (I2SRM)
method for this task, which contains two modules. Firstly, the intra-sample
relationship modeling module operates on a single sample and aims to learn
effective representations. Embeddings from textual and visual modalities are
shifted to bridge the modality gap caused by distinct pre-trained language and
image models. Secondly, the inter-sample relationship modeling module considers
relationships among multiple samples and focuses on capturing the interactions.
An AttnMixup strategy is proposed, which not only enables collaboration among
samples but also augments data to improve generalization. We conduct extensive
experiments on the multimodal named entity recognition datasets Twitter-2015
and Twitter-2017, and the multimodal relation extraction dataset MNRE. Our
proposed method I2SRM achieves competitive results, 77.12% F1-score on
Twitter-2015, 88.40% F1-score on Twitter-2017, and 84.12% F1-score on MNRE.
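The AttnMixup strategy builds on the standard mixup recipe of convex-combining pairs of samples and labels; plain mixup on fused multimodal embeddings is sketched below as a reference point (the attention-guided variant proposed above is not reproduced, and all sizes are placeholders).

import torch
import torch.nn.functional as F

def mixup(features, labels, num_classes, alpha=0.2):
    """Standard mixup: convex-combine shuffled pairs of samples and one-hot labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(features.size(0))
    mixed_x = lam * features + (1 - lam) * features[perm]
    y = F.one_hot(labels, num_classes).float()
    mixed_y = lam * y + (1 - lam) * y[perm]
    return mixed_x, mixed_y

fused = torch.randn(16, 256)                 # fused text+image embeddings (placeholder)
labels = torch.randint(0, 9, (16,))          # e.g. 9 entity types (placeholder)
x_aug, y_aug = mixup(fused, labels, num_classes=9)
print(x_aug.shape, y_aug.shape)              # torch.Size([16, 256]) torch.Size([16, 9])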
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 05:50:25 GMT"
}
] | 1,696,982,400,000 | [
[
"Huang",
"Yusheng",
""
],
[
"Lin",
"Zhouhan",
""
]
] |
2310.06383 | Chenzhuang Du | Siting Li, Chenzhuang Du, Yue Zhao, Yu Huang, Hang Zhao | What Makes for Robust Multi-Modal Models in the Face of Missing
Modalities? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing success of multi-modal learning, research on the robustness
of multi-modal models, especially when facing situations with missing
modalities, is receiving increased attention. Nevertheless, previous studies in
this domain exhibit certain limitations, as they often lack theoretical
insights or their methodologies are tied to specific network architectures or
modalities. We model the scenarios of multi-modal models encountering missing
modalities from an information-theoretic perspective and illustrate that the
performance ceiling in such scenarios can be approached by efficiently
utilizing the information inherent in non-missing modalities. In practice,
there are two key aspects: (1) The encoder should be able to extract
sufficiently good features from the non-missing modality; (2) The extracted
features should be robust enough not to be influenced by noise during the
fusion process across modalities. To this end, we introduce Uni-Modal Ensemble
with Missing Modality Adaptation (UME-MMA). UME-MMA employs uni-modal
pre-trained weights for the multi-modal model to enhance feature extraction and
utilizes missing modality data augmentation techniques to better adapt to
situations with missing modalities. Apart from that, UME-MMA, built on a
late-fusion learning framework, allows for the plug-and-play use of various
encoders, making it suitable for a wide range of modalities and enabling
seamless integration of large-scale pre-trained encoders to further enhance
performance. And we demonstrate UME-MMA's effectiveness in audio-visual
datasets~(e.g., AV-MNIST, Kinetics-Sound, AVE) and vision-language
datasets~(e.g., MM-IMDB, UPMC Food101).
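The missing-modality augmentation ingredient can be approximated by randomly blanking one modality per sample during training so that fusion does not over-rely on either stream; zero-filling and the drop probability below are common choices, not necessarily those used in UME-MMA.

import torch

def drop_modality(audio, visual, p_drop=0.3):
    """With probability p_drop, replace one randomly chosen modality with zeros."""
    audio, visual = audio.clone(), visual.clone()
    for i in range(audio.size(0)):                      # decide per sample
        if torch.rand(1).item() < p_drop:
            if torch.rand(1).item() < 0.5:
                audio[i].zero_()
            else:
                visual[i].zero_()
    return audio, visual

audio_feat = torch.randn(8, 128)      # uni-modal (pre-trained) audio features
visual_feat = torch.randn(8, 512)     # uni-modal (pre-trained) visual features
a_aug, v_aug = drop_modality(audio_feat, visual_feat)
print("zeroed audio rows:", int((a_aug.abs().sum(dim=1) == 0).sum()))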
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 07:47:57 GMT"
}
] | 1,696,982,400,000 | [
[
"Li",
"Siting",
""
],
[
"Du",
"Chenzhuang",
""
],
[
"Zhao",
"Yue",
""
],
[
"Huang",
"Yu",
""
],
[
"Zhao",
"Hang",
""
]
] |
2310.06441 | Jerome Euzenat | J\'er\^ome Euzenat (MOEX ) | Stepwise functional refoundation of relational concept analysis | euzenat2023a | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational concept analysis (RCA) is an extension of formal concept analysis
allowing one to deal with several related contexts simultaneously. It has been
designed for learning description logic theories from data and used within
various applications. A puzzling observation about RCA is that it returns a
single family of concept lattices although, when the data feature circular
dependencies, other solutions may be considered acceptable. The semantics of
RCA, provided in an operational way, does not shed light on this issue. In this
report, we define these acceptable solutions as those families of concept
lattices which belong to the space determined by the initial contexts
(well-formed), cannot scale new attributes (saturated), and refer only to
concepts of the family (self-supported). We adopt a functional view on the RCA
process by defining the space of well-formed solutions and two functions on
that space: one expansive and the other contractive. We show that the
acceptable solutions are the common fixed points of both functions. This is
achieved step-by-step by starting from a minimal version of RCA that considers
only one single context defined on a space of contexts and a space of lattices.
These spaces are then joined into a single space of context-lattice pairs,
which is further extended to a space of indexed families of context-lattice
pairs representing the objects manipulated by RCA. We show that RCA returns
the least element of the set of acceptable solutions. In addition, it is
possible to build dually an operation that generates its greatest element. The
set of acceptable solutions is a complete sublattice of the interval between
these two elements. Its structure and how the defined functions traverse it are
studied in detail.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 09:13:46 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jan 2024 14:36:42 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Jan 2024 12:41:53 GMT"
}
] | 1,704,844,800,000 | [
[
"Euzenat",
"Jérôme",
"",
"MOEX"
]
] |
2310.06484 | Xuan Luo | Xuan Luo, Mingqing Huang, Rui Lv, Hui Zhao | Memory efficient location recommendation through proximity-aware
representation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential location recommendation plays a huge role in modern life, which
can enhance user experience, bring more profit to businesses and assist in
government administration. Although methods for location recommendation have
evolved significantly thanks to the development of recommendation systems,
there is still limited utilization of geographic information, along with the
ongoing challenge of addressing data sparsity. In response, we introduce a
Proximity-aware based region representation for Sequential Recommendation (PASR
for short), built upon the Self-Attention Network architecture. We tackle the
sparsity issue through a novel loss function employing importance sampling,
which emphasizes informative negative samples during optimization. Moreover,
PASR enhances the integration of geographic information by employing a
self-attention-based geography encoder to the hierarchical grid and proximity
grid at each GPS point. To further leverage geographic information, we utilize
the proximity-aware negative samplers to enhance the quality of negative
samples. We conducted evaluations using three real-world Location-Based Social
Networking (LBSN) datasets, demonstrating that PASR surpasses state-of-the-art
sequential location recommendation methods.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 09:53:07 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Oct 2023 11:46:52 GMT"
}
] | 1,698,192,000,000 | [
[
"Luo",
"Xuan",
""
],
[
"Huang",
"Mingqing",
""
],
[
"Lv",
"Rui",
""
],
[
"Zhao",
"Hui",
""
]
] |
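The PASR abstract above describes a loss that emphasizes informative negative samples via importance sampling, without giving the formula. One common way to realize the idea is to weight sampled negatives by their softmax-normalized scores so that harder negatives dominate the gradient; the sketch below is such a generic weighted pairwise loss, not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def importance_weighted_pairwise_loss(pos_score: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """pos_score:  (batch,)       score of the ground-truth next location
       neg_scores: (batch, n_neg) scores of sampled negative locations
    Harder (higher-scoring) negatives receive larger weights, so optimization
    concentrates on informative negatives. Illustrative, not the paper's loss."""
    weights = F.softmax(neg_scores.detach(), dim=-1)               # importance weights over negatives
    pairwise = F.logsigmoid(pos_score.unsqueeze(-1) - neg_scores)  # BPR-style pairwise terms
    return -(weights * pairwise).sum(dim=-1).mean()

loss = importance_weighted_pairwise_loss(torch.randn(8), torch.randn(8, 20))
```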
2310.06500 | Yuan Li | Yuan Li, Yixuan Zhang, and Lichao Sun | MetaAgents: Simulating Interactions of Human Behaviors for LLM-based
Task-oriented Coordination via Collaborative Generative Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Significant advancements have occurred in the application of Large Language
Models (LLMs) for various tasks and social simulations. Despite this, their
capacities to coordinate within task-oriented social contexts are
under-explored. Such capabilities are crucial if LLMs are to effectively mimic
human-like social behavior and produce meaningful results. To bridge this gap,
we introduce collaborative generative agents, endowing LLM-based Agents with
consistent behavior patterns and task-solving abilities. We situate these
agents in a simulated job fair environment as a case study to scrutinize their
coordination skills. We propose a novel framework that equips collaborative
generative agents with human-like reasoning abilities and specialized skills.
Our evaluation demonstrates that these agents show promising performance.
However, we also uncover limitations that hinder their effectiveness in more
complex coordination tasks. Our work provides valuable insights into the role
and evolution of LLMs in task-oriented social simulations.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 10:17:58 GMT"
}
] | 1,696,982,400,000 | [
[
"Li",
"Yuan",
""
],
[
"Zhang",
"Yixuan",
""
],
[
"Sun",
"Lichao",
""
]
] |
2310.06513 | Ming Sun | Yangqing Fu, Ming Sun, Buqing Nie, Yue Gao | Accelerating Monte Carlo Tree Search with Probability Tree State
Abstraction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) algorithms such as AlphaGo and MuZero have
achieved superhuman performance in many challenging tasks. However, the
computational complexity of MCTS-based algorithms is influenced by the size of
the search space. To address this issue, we propose a novel probability tree
state abstraction (PTSA) algorithm to improve the search efficiency of MCTS. A
general tree state abstraction with path transitivity is defined. In addition,
the probability tree state abstraction is proposed to reduce mistakes during
the aggregation step. Furthermore, the theoretical guarantees of the
transitivity and aggregation error bound are justified. To evaluate the
effectiveness of the PTSA algorithm, we integrate it with state-of-the-art
MCTS-based algorithms, such as Sampled MuZero and Gumbel MuZero. Experimental
results on different tasks demonstrate that our method can accelerate the
training process of state-of-the-art algorithms with 10%-45% search space
reduction.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 10:55:12 GMT"
}
] | 1,696,982,400,000 | [
[
"Fu",
"Yangqing",
""
],
[
"Sun",
"Ming",
""
],
[
"Nie",
"Buqing",
""
],
[
"Gao",
"Yue",
""
]
] |
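The PTSA abstract above reduces the MCTS search space by aggregating tree states, but the probability-based aggregation criterion is not spelled out here. The sketch below illustrates only the generic aggregation step, bucketing nodes by a scalar state summary and pooling their visit counts and value sums; the bucketing rule is a stand-in, not the paper's criterion.

```python
from dataclasses import dataclass

@dataclass
class Node:
    state_key: float   # scalar summary of the node's state (e.g., a value estimate)
    visits: int
    value_sum: float

def aggregate_nodes(nodes: list[Node], resolution: float = 0.1) -> dict[int, Node]:
    """Merge nodes whose summaries fall into the same bucket, pooling their
    search statistics. Generic aggregation only, not the paper's criterion."""
    buckets: dict[int, Node] = {}
    for n in nodes:
        key = round(n.state_key / resolution)
        if key not in buckets:
            buckets[key] = Node(state_key=n.state_key, visits=0, value_sum=0.0)
        buckets[key].visits += n.visits
        buckets[key].value_sum += n.value_sum
    return buckets

abstract_states = aggregate_nodes([Node(0.51, 3, 1.2), Node(0.54, 5, 2.0), Node(0.90, 2, 1.5)])
print(len(abstract_states))  # 2 abstract states instead of 3 concrete nodes
```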
2310.06541 | Soohyun Park | Gyu Seon Kim, JaeHyun Chung, and Soohyun Park | Realizing Stabilized Landing for Computation-Limited Reusable Rockets: A
Quantum Reinforcement Learning Approach | 5 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The advent of reusable rockets has heralded a new era in space exploration,
reducing the costs of launching satellites by a significant factor. Traditional
rockets were disposable, but the design of reusable rockets for repeated use
has revolutionized the financial dynamics of space missions. The most critical
phase of reusable rockets is the landing stage, which involves managing the
tremendous speed and attitude for safe recovery. The complexity of this task
presents new challenges for control systems, specifically in terms of precision
and adaptability. Classical control systems like the
proportional-integral-derivative (PID) controller lack the flexibility to adapt
to dynamic system changes, making controller redesign costly and time-consuming.
This paper explores the integration of quantum reinforcement
learning into the control systems of reusable rockets as a promising
alternative. Unlike classical reinforcement learning, quantum reinforcement
learning uses quantum bits that can exist in superposition, allowing for more
efficient information encoding and reducing the number of parameters required.
This leads to increased computational efficiency, reduced memory requirements,
and more stable and predictable performance. Because reusable rockets must
remain lightweight, they cannot carry heavy onboard computers. Quantum
reinforcement learning, whose reduced memory requirements stem from its smaller
number of parameters, is therefore a good fit for the reusable rocket scenario.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 11:40:20 GMT"
}
] | 1,696,982,400,000 | [
[
"Kim",
"Gyu Seon",
""
],
[
"Chung",
"JaeHyun",
""
],
[
"Park",
"Soohyun",
""
]
] |
2310.06624 | Anna Sztyber-Betley | Anna Sztyber-Betley, Filip Ko{\l}odziej, Jan Betley, Piotr Duszak | BridgeHand2Vec Bridge Hand Representation | null | Frontiers in Artificial Intelligence and Applications, Volume 372:
ECAI 2023, Pages 2274 - 2281 | 10.3233/FAIA230526 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Contract bridge is a game characterized by incomplete information, posing an
exciting challenge for artificial intelligence methods. This paper proposes the
BridgeHand2Vec approach, which leverages a neural network to embed a bridge
player's hand (consisting of 13 cards) into a vector space. The resulting
representation reflects the strength of the hand in the game and enables
interpretable distances to be determined between different hands. This
representation is derived by training a neural network to estimate the number
of tricks that a pair of players can take. In the remainder of this paper, we
analyze the properties of the resulting vector space and provide examples of
its application in reinforcement learning, and opening bid classification.
Although this was not our main goal, the neural network used for the
vectorization achieves SOTA results on the DDBP2 problem (estimating the number
of tricks for two given hands).
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 13:41:41 GMT"
}
] | 1,696,982,400,000 | [
[
"Sztyber-Betley",
"Anna",
""
],
[
"Kołodziej",
"Filip",
""
],
[
"Betley",
"Jan",
""
],
[
"Duszak",
"Piotr",
""
]
] |
2310.06824 | Samuel Marks | Samuel Marks and Max Tegmark | The Geometry of Truth: Emergent Linear Structure in Large Language Model
Representations of True/False Datasets | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have impressive capabilities, but are also prone
to outputting falsehoods. Recent work has developed techniques for inferring
whether a LLM is telling the truth by training probes on the LLM's internal
activations. However, this line of work is controversial, with some authors
pointing out failures of these probes to generalize in basic ways, among other
conceptual issues. In this work, we curate high-quality datasets of true/false
statements and use them to study in detail the structure of LLM representations
of truth, drawing on three lines of evidence: 1. Visualizations of LLM
true/false statement representations, which reveal clear linear structure. 2.
Transfer experiments in which probes trained on one dataset generalize to
different datasets. 3. Causal evidence obtained by surgically intervening in a
LLM's forward pass, causing it to treat false statements as true and vice
versa. Overall, we present evidence that language models linearly represent the
truth or falsehood of factual statements. We also introduce a novel technique,
mass-mean probing, which generalizes better and is more causally implicated in
model outputs than other probing techniques.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 17:54:39 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Dec 2023 19:57:14 GMT"
}
] | 1,702,339,200,000 | [
[
"Marks",
"Samuel",
""
],
[
"Tegmark",
"Max",
""
]
] |
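The abstract above introduces mass-mean probing; a simple reading of the name is a difference-in-means probe over hidden activations, as sketched below with NumPy on toy data. The exact variant used in the paper (for example, any covariance correction) is not reproduced here.

```python
import numpy as np

def mass_mean_probe(acts: np.ndarray, labels: np.ndarray):
    """Difference-in-means probe over hidden activations.
    acts: (n_statements, hidden_dim), labels: (n_statements,) with 1=true, 0=false."""
    mean_true, mean_false = acts[labels == 1].mean(axis=0), acts[labels == 0].mean(axis=0)
    direction = mean_true - mean_false                      # probe direction
    bias = -0.5 * (mean_true + mean_false) @ direction      # threshold at the midpoint
    return direction, bias

def predict_truth(acts: np.ndarray, direction: np.ndarray, bias: float) -> np.ndarray:
    return (acts @ direction + bias > 0).astype(int)

# Toy usage with random "activations" standing in for LLM hidden states.
rng = np.random.default_rng(0)
acts = np.concatenate([rng.normal(1.0, 1.0, (50, 16)), rng.normal(-1.0, 1.0, (50, 16))])
labels = np.array([1] * 50 + [0] * 50)
d, b = mass_mean_probe(acts, labels)
print(predict_truth(acts, d, b).mean())  # fraction of statements predicted "true"
```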
2310.07156 | Conrad Sanderson | Majid Namazi, M.A. Hakim Newton, Conrad Sanderson, Abdul Sattar | Solving Travelling Thief Problems using Coordination Based Methods | expanded and revised version of arXiv:1911.03124 | Journal of Heuristics, Vol. 29, No. 4-6, pp. 487-544, 2023 | 10.1007/s10732-023-09518-7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A travelling thief problem (TTP) is a proxy to real-life problems such as
postal collection. TTP comprises an entanglement of a travelling salesman
problem (TSP) and a knapsack problem (KP) since items of KP are scattered over
cities of TSP, and a thief has to visit cities to collect items. In TTP, city
selection and item selection decisions need close coordination since the
thief's travelling speed depends on the knapsack's weight and the order of
visiting cities affects the order of item collection. Existing TTP solvers deal
with city selection and item selection separately, keeping decisions for one
type unchanged while dealing with the other type. This separation essentially
means very poor coordination between the two types of decisions. In this paper, we
first show that a simple local search based coordination approach does not work
in TTP. Then, to address the aforementioned problems, we propose a human
designed coordination heuristic that makes changes to collection plans during
exploration of cyclic tours. We further propose another human designed
coordination heuristic that explicitly exploits the cyclic tours in item
selections during collection plan exploration. Lastly, we propose a machine
learning based coordination heuristic that captures characteristics of the two
human designed coordination heuristics. Our proposed coordination based
approaches help our TTP solver significantly outperform existing
state-of-the-art TTP solvers on a set of benchmark problems. Our solver is
named Cooperation Coordination (CoCo) and its source code is available from
https://github.com/majid75/CoCo
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 03:03:50 GMT"
}
] | 1,698,796,800,000 | [
[
"Namazi",
"Majid",
""
],
[
"Newton",
"M. A. Hakim",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Sattar",
"Abdul",
""
]
] |
2310.07348 | Erkan Karabulut | Erkan Karabulut, Victoria Degeler, Paul Groth | Semantic Association Rule Learning from Time Series Data and Knowledge
Graphs | This paper is accepted to SemIIM23: 2nd International Workshop on
Semantic Industrial Information Modelling, 7th November 2023, Athens, Greece,
co-located with 22nd International Semantic Web Conference (ISWC 2023) | null | null | https://ceur-ws.org/Vol-3647/SemIIM2023_paper_3.pdf | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Digital Twins (DT) are a promising concept in cyber-physical systems research
due to their advanced features including monitoring and automated reasoning.
Semantic technologies such as Knowledge Graphs (KG) are recently being utilized
in DTs especially for information modelling. Building on this move, this paper
proposes a pipeline for semantic association rule learning in DTs using KGs and
time series data. In addition to this initial pipeline, we also propose a new
semantic association rule criterion. The approach is evaluated on an industrial
water network scenario. Initial evaluation shows that the proposed approach is
able to learn a high number of association rules with semantic information
which are more generalizable. The paper aims to set a foundation for further
work on using semantic association rule learning especially in the context of
industrial applications.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 09:57:56 GMT"
}
] | 1,710,201,600,000 | [
[
"Karabulut",
"Erkan",
""
],
[
"Degeler",
"Victoria",
""
],
[
"Groth",
"Paul",
""
]
] |
2310.07354 | Raj Mani Shukla | Lochana Telugu Rajesh, Tapadhir Das, Raj Mani Shukla, and Shamik
Sengupta | Give and Take: Federated Transfer Learning for Industrial IoT Network
Intrusion Detection | Accepted in IEEE International Conference on Trust, Security and
Privacy in Computing and Communications (TrustCom) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid growth in Internet of Things (IoT) technology has become an
integral part of today's industries, forming the Industrial IoT (IIoT)
initiative, where industries are leveraging IoT to improve communication and
connectivity via emerging solutions like data analytics and cloud computing.
Unfortunately, the rapid use of IoT has made it an attractive target for
cybercriminals. Therefore, protecting these systems is of utmost importance. In
this paper, we propose a federated transfer learning (FTL) approach to perform
IIoT network intrusion detection. As part of the research, we also propose a
combinational neural network as the centerpiece for performing FTL. The
proposed technique splits IoT data between the client and server devices to
generate corresponding models, and the weights of the client models are
combined to update the server model. Results showcase high performance for the
FTL setup between iterations on both the IIoT clients and the server.
Additionally, the proposed FTL setup achieves better overall performance than
contemporary machine learning algorithms at performing network intrusion
detection.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 10:11:54 GMT"
}
] | 1,697,068,800,000 | [
[
"Rajesh",
"Lochana Telugu",
""
],
[
"Das",
"Tapadhir",
""
],
[
"Shukla",
"Raj Mani",
""
],
[
"Sengupta",
"Shamik",
""
]
] |
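The federated transfer learning abstract above states that the weights of the client models are combined to update the server model. A FedAvg-style element-wise average is one standard way to do this; the sketch below shows that aggregation step only, and the combinational network architecture and exact combination scheme used in the paper are not reproduced.

```python
import copy
from typing import Dict, List
import torch

def average_client_weights(client_states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """FedAvg-style aggregation: element-wise mean of the clients' parameters.
    Illustrative stand-in for the paper's weight-combination step."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        for state in client_states[1:]:
            avg[key] = avg[key] + state[key]
        avg[key] = avg[key] / len(client_states)
    return avg

# Usage sketch:
# server_model.load_state_dict(
#     average_client_weights([c.state_dict() for c in client_models]))
```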
2310.07389 | Nikolina Covic | Nikolina \v{C}ovi\'c, Jochen Cremer and Hrvoje Pand\v{z}i\'c | Learning a Reward Function for User-Preferred Appliance Scheduling | Submitted to PSCC 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accelerated development of demand response service provision by the
residential sector is crucial for reducing carbon-emissions in the power
sector. Along with the infrastructure advancement, encouraging the end users to
participate is crucial. End users highly value their privacy and control, and
want to be included in the service design and decision-making process when
creating the daily appliance operation schedules. Furthermore, unless they are
financially or environmentally motivated, they are generally not prepared to
sacrifice their comfort to help balance the power system. In this paper, we
present an inverse-reinforcement-learning-based model that helps create the end
users' daily appliance schedules without asking them to explicitly state their
needs and wishes. By using their past consumption data, the end consumers will
implicitly participate in the creation of those decisions and will thus be
motivated to continue participating in the provision of demand response
services.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 11:09:44 GMT"
}
] | 1,697,068,800,000 | [
[
"Čović",
"Nikolina",
""
],
[
"Cremer",
"Jochen",
""
],
[
"Pandžić",
"Hrvoje",
""
]
] |
2310.07478 | Minji Yoon | Minji Yoon, Jing Yu Koh, Bryan Hooi, Ruslan Salakhutdinov | Multimodal Graph Learning for Generative Tasks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodal learning combines multiple data modalities, broadening the types
and complexity of data our models can utilize: for example, from plain text to
image-caption pairs. Most multimodal learning algorithms focus on modeling
simple one-to-one pairs of data from two modalities, such as image-caption
pairs, or audio-text pairs. However, in most real-world settings, entities of
different modalities interact with each other in more complex and multifaceted
ways, going beyond one-to-one mappings. We propose to represent these complex
relationships as graphs, allowing us to capture data with any number of
modalities, and with complex relationships between modalities that can flexibly
vary from one sample to another. Toward this goal, we propose Multimodal Graph
Learning (MMGL), a general and systematic framework for capturing information
from multiple multimodal neighbors with relational structures among them. In
particular, we focus on MMGL for generative tasks, building upon pretrained
Language Models (LMs), aiming to augment their text generation with multimodal
neighbor contexts. We study three research questions raised by MMGL: (1) how
can we infuse multiple neighbor information into the pretrained LMs, while
avoiding scalability issues? (2) how can we infuse the graph structure
information among multimodal neighbors into the LMs? and (3) how can we
finetune the pretrained LMs to learn from the neighbor context in a
parameter-efficient manner? We conduct extensive experiments to answer these
three questions on MMGL and analyze the empirical results to pave the way for
future MMGL research.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 13:25:03 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2023 17:07:24 GMT"
}
] | 1,697,155,200,000 | [
[
"Yoon",
"Minji",
""
],
[
"Koh",
"Jing Yu",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] |
2310.07493 | Finn Rietz | Finn Rietz and Johannes Andreas Stork | Diversity for Contingency: Learning Diverse Behaviors for Efficient
Adaptation and Transfer | Presented at the third RL-Conform workshop at IROS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Discovering all useful solutions for a given task is crucial for transferable
RL agents, to account for changes in the task or transition dynamics. This is
not considered by classical RL algorithms that are only concerned with finding
the optimal policy, given the current task and dynamics. We propose a simple
method for discovering all possible solutions of a given task, to obtain an
agent that performs well in the transfer setting and adapts quickly to changes
in the task or transition dynamics. Our method iteratively learns a set of
policies, while each subsequent policy is constrained to yield a solution that
is unlikely under all previous policies. Unlike prior methods, our approach
does not require learning additional models for novelty detection and avoids
balancing task and novelty reward signals, by directly incorporating the
constraint into the action selection and optimization steps.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 13:39:35 GMT"
}
] | 1,697,068,800,000 | [
[
"Rietz",
"Finn",
""
],
[
"Stork",
"Johannes Andreas",
""
]
] |
2310.07589 | Luiza Pozzobon | Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker | Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented
Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considerable effort has been dedicated to mitigating toxicity, but existing
methods often require drastic modifications to model parameters or the use of
computationally intensive auxiliary models. Furthermore, previous approaches
have often neglected the crucial factor of language's evolving nature over
time. In this work, we present a comprehensive perspective on toxicity
mitigation that takes into account its changing nature. We introduce
Goodtriever, a flexible methodology that matches the current state-of-the-art
toxicity mitigation while achieving 43% relative latency reduction during
inference and being more computationally efficient. By incorporating a
retrieval-based approach at decoding time, Goodtriever enables
toxicity-controlled text generation. Our research advocates for an increased
focus on adaptable mitigation techniques, which better reflect the data drift
models face when deployed in the wild. Code and data are available at
https://github.com/for-ai/goodtriever.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 15:30:35 GMT"
}
] | 1,697,068,800,000 | [
[
"Pozzobon",
"Luiza",
""
],
[
"Ermis",
"Beyza",
""
],
[
"Lewis",
"Patrick",
""
],
[
"Hooker",
"Sara",
""
]
] |
2310.07653 | Zeqiang Lai | Zeqiang Lai, Xizhou Zhu, Jifeng Dai, Yu Qiao, Wenhai Wang | Mini-DALLE3: Interactive Text to Image by Prompting Large Language
Models | Technical report. Project page at https://minidalle3.github.io/ | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The revolution of artificial intelligence content generation has been rapidly
accelerated with the booming text-to-image (T2I) diffusion models. Within just
two years of development, state-of-the-art models could generate images of
unprecedented quality, diversity, and creativity. However, a
prevalent limitation persists in the effective communication with these popular
T2I models, such as Stable Diffusion, using natural language descriptions. This
typically makes an engaging image hard to obtain without expertise in prompt
engineering with complex word compositions, magic tags, and annotations.
Inspired by the recently released DALLE3 - a T2I model directly built-in
ChatGPT that talks human language, we revisit the existing T2I systems
endeavoring to align human intent and introduce a new task - interactive text
to image (iT2I), where people can interact with an LLM using natural language
for interleaved high-quality image generation/editing/refinement and question
answering, with stronger image-text correspondence. In addressing
the iT2I problem, we present a simple approach that augments LLMs for iT2I with
prompting techniques and off-the-shelf T2I models. We evaluate our approach for
iT2I in a variety of commonly used scenarios under different LLMs, e.g., ChatGPT,
LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a
convenient and low-cost way to introduce the iT2I ability for any existing LLMs
and any text-to-image models without any training while bringing little
degradation of LLMs' inherent capabilities in, e.g., question answering and
code generation. We hope this work could draw broader attention and provide
inspiration for boosting user experience in human-machine interactions
alongside the image quality of the next-generation T2I systems.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 16:53:40 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2023 00:54:56 GMT"
}
] | 1,697,414,400,000 | [
[
"Lai",
"Zeqiang",
""
],
[
"Zhu",
"Xizhou",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Wang",
"Wenhai",
""
]
] |
2310.07871 | Xiaochen Wang | Xiaochen Wang, Junyu Luo, Jiaqi Wang, Ziyi Yin, Suhan Cui, Yuan Zhong,
Yaqing Wang, Fenglong Ma | Hierarchical Pretraining on Multimodal Electronic Health Records | Accepted by EMNLP 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Pretraining has proven to be a powerful technique in natural language
processing (NLP), exhibiting remarkable success in various NLP downstream
tasks. However, in the medical domain, existing pretrained models on electronic
health records (EHR) fail to capture the hierarchical nature of EHR data,
limiting their generalization capability across diverse downstream tasks using
a single pretrained model. To tackle this challenge, this paper introduces a
novel, general, and unified pretraining framework called MEDHMP, specifically
designed for hierarchically multimodal EHR data. The effectiveness of the
proposed MEDHMP is demonstrated through experimental results on eight
downstream tasks spanning three levels. Comparisons against eighteen baselines
further highlight the efficacy of our approach.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 20:23:33 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Oct 2023 05:31:51 GMT"
}
] | 1,698,019,200,000 | [
[
"Wang",
"Xiaochen",
""
],
[
"Luo",
"Junyu",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Yin",
"Ziyi",
""
],
[
"Cui",
"Suhan",
""
],
[
"Zhong",
"Yuan",
""
],
[
"Wang",
"Yaqing",
""
],
[
"Ma",
"Fenglong",
""
]
] |
2310.07944 | Hongxu Pu | Hongxu Pu, Xincong Yang, Jing Li, Runhao Guo, Heng Li | AutoRepo: A general framework for multi-modal LLM-based automated
construction reporting | We believe that keeping this version of the paper publicly available
may lead to confusion or misinterpretation regarding our current research
direction and findings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring the safety, quality, and timely completion of construction projects
is paramount, with construction inspections serving as a vital instrument
towards these goals. Nevertheless, the predominantly manual approach of
present-day inspections frequently results in inefficiencies and inadequate
information management. Such methods often fall short of providing holistic,
exhaustive assessments, consequently engendering regulatory oversights and
potential safety hazards. To address this issue, this paper presents a novel
framework named AutoRepo for automated generation of construction inspection
reports. The unmanned vehicles efficiently perform construction inspections and
collect scene information, while the multimodal large language models (LLMs)
are leveraged to automatically generate the inspection reports. The framework
was applied and tested on a real-world construction site, demonstrating its
potential to expedite the inspection process, significantly reduce resource
allocation, and produce high-quality, regulatory standard-compliant inspection
reports. This research thus underscores the immense potential of multimodal
large language models in revolutionizing construction inspection practices,
signaling a significant leap forward towards a more efficient and safer
construction management paradigm.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 23:42:00 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Dec 2023 18:13:15 GMT"
}
] | 1,701,734,400,000 | [
[
"Pu",
"Hongxu",
""
],
[
"Yang",
"Xincong",
""
],
[
"Li",
"Jing",
""
],
[
"Guo",
"Runhao",
""
],
[
"Li",
"Heng",
""
]
] |
2310.07998 | Tinghui Ouyang | Tinghui Ouyang, Isao Echizen, Yoshiki Seo | A Novel Statistical Measure for Out-of-Distribution Detection in Data
Quality Assurance | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Data outside the problem domain poses significant threats to the security of
AI-based intelligent systems. Aiming to investigate the data domain and
out-of-distribution (OOD) data in the study of AI quality management (AIQM), this
paper proposes to use deep learning techniques for feature representation and
develop a novel statistical measure for OOD detection. First, to extract
low-dimensional representative features distinguishing normal and OOD data, the
proposed research combines the deep auto-encoder (AE) architecture and neuron
activation status for feature engineering. Then, using local conditional
probability (LCP) in data reconstruction, a novel and superior statistical
measure is developed to calculate the score of OOD detection. Experiments and
evaluations are conducted on image benchmark datasets and an industrial
dataset. Through comparative analysis with other common statistical measures in
OOD detection, the proposed research is validated as feasible and effective in
OOD and AIQM studies.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 02:59:49 GMT"
}
] | 1,697,155,200,000 | [
[
"Ouyang",
"Tinghui",
""
],
[
"Echizen",
"Isao",
""
],
[
"Seo",
"Yoshiki",
""
]
] |
2310.08008 | Aparna Elangovan | Aparna Elangovan, Jiayuan He, Yuan Li, Karin Verspoor | Effects of Human Adversarial and Affable Samples on BERT Generalization | To appear at EMNLP Findings 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | BERT-based models have had strong performance on leaderboards, yet have been
demonstrably worse in real-world settings requiring generalization. Limited
quantities of training data is considered a key impediment to achieving
generalizability in machine learning. In this paper, we examine the impact of
training data quality, not quantity, on a model's generalizability. We consider
two characteristics of training data: the portion of human-adversarial
(h-adversarial), i.e., sample pairs with seemingly minor differences but
different ground-truth labels, and human-affable (h-affable) training samples,
i.e., sample pairs with minor differences but the same ground-truth label. We
find that for a fixed size of training samples, as a rule of thumb, having
10-30% h-adversarial instances improves the precision, and therefore F1, by up
to 20 points in the tasks of text classification and relation extraction.
Increasing h-adversarials beyond this range can result in performance plateaus
or even degradation. In contrast, h-affables may not contribute to a model's
generalizability and may even degrade generalization performance.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 03:20:43 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Oct 2023 02:32:38 GMT"
},
{
"version": "v3",
"created": "Tue, 17 Oct 2023 16:24:39 GMT"
},
{
"version": "v4",
"created": "Sun, 10 Dec 2023 22:40:14 GMT"
}
] | 1,702,339,200,000 | [
[
"Elangovan",
"Aparna",
""
],
[
"He",
"Jiayuan",
""
],
[
"Li",
"Yuan",
""
],
[
"Verspoor",
"Karin",
""
]
] |
2310.08032 | JiaQi Li | Jiaqi Li, Guilin Qi, Chuanyi Zhang, Yongrui Chen, Yiming Tan, Chenlong
Xia, Ye Tian | Incorporating Domain Knowledge Graph into Multimodal Movie Genre
Classification with Self-Supervised Attention and Contrastive Learning | Accepted by ACM MM 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal movie genre classification has always been regarded as a demanding
multi-label classification task due to the diversity of multimodal data such as
posters, plot summaries, trailers and metadata. Although existing works have
made great progress in modeling and combining each modality, they still face
three issues: 1) unutilized group relations in metadata, 2) unreliable
attention allocation, and 3) indiscriminative fused features. Given that the
knowledge graph has been proven to contain rich information, we present a novel
framework that exploits the knowledge graph from various perspectives to
address the above problems. As a preparation, the metadata is processed into a
domain knowledge graph. A translate model for knowledge graph embedding is
adopted to capture the relations between entities. Firstly we retrieve the
relevant embedding from the knowledge graph by utilizing group relations in
metadata and then integrate it with other modalities. Next, we introduce an
Attention Teacher module for reliable attention allocation based on
self-supervised learning. It learns the distribution of the knowledge graph and
produces rational attention weights. Finally, a Genre-Centroid Anchored
Contrastive Learning module is proposed to strengthen the discriminative
ability of fused features. The embedding space of anchors is initialized from
the genre entities in the knowledge graph. To verify the effectiveness of our
framework, we collect a larger and more challenging dataset named MM-IMDb 2.0
compared with the MM-IMDb dataset. The experimental results on two datasets
demonstrate that our model is superior to the state-of-the-art methods. We will
release the code in the near future.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 04:49:11 GMT"
}
] | 1,697,155,200,000 | [
[
"Li",
"Jiaqi",
""
],
[
"Qi",
"Guilin",
""
],
[
"Zhang",
"Chuanyi",
""
],
[
"Chen",
"Yongrui",
""
],
[
"Tan",
"Yiming",
""
],
[
"Xia",
"Chenlong",
""
],
[
"Tian",
"Ye",
""
]
] |
2310.08043 | Alexander Turner | Ulisse Mini, Peli Grietzer, Mrinank Sharma, Austin Meek, Monte
MacDiarmid, Alexander Matt Turner | Understanding and Controlling a Maze-Solving Policy Network | 46 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | To understand the goals and goal representations of AI systems, we carefully
study a pretrained reinforcement learning policy that solves mazes by
navigating to a range of target squares. We find this network pursues multiple
context-dependent goals, and we further identify circuits within the network
that correspond to one of these goals. In particular, we identified eleven
channels that track the location of the goal. By modifying these channels,
either with hand-designed interventions or by combining forward passes, we can
partially control the policy. We show that this network contains redundant,
distributed, and retargetable goal representations, shedding light on the
nature of goal-direction in trained policy networks.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 05:33:54 GMT"
}
] | 1,697,155,200,000 | [
[
"Mini",
"Ulisse",
""
],
[
"Grietzer",
"Peli",
""
],
[
"Sharma",
"Mrinank",
""
],
[
"Meek",
"Austin",
""
],
[
"MacDiarmid",
"Monte",
""
],
[
"Turner",
"Alexander Matt",
""
]
] |
2310.08067 | Hanbin Wang | Dake Chen, Hanbin Wang, Yunhao Huo, Yuzhao Li, Haoyang Zhang | GameGPT: Multi-agent Collaborative Framework for Game Development | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The large language model (LLM) based agents have demonstrated their capacity
to automate and expedite software development processes. In this paper, we
focus on game development and propose a multi-agent collaborative framework,
dubbed GameGPT, to automate game development. While many studies have
pinpointed hallucination as a primary roadblock for deploying LLMs in
production, we identify another concern: redundancy. Our framework presents a
series of methods to mitigate both concerns. These methods include dual
collaboration and layered approaches with several in-house lexicons, to
mitigate the hallucination and redundancy in the planning, task identification,
and implementation phases. Furthermore, a decoupling approach is also
introduced to achieve code generation with better precision.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 06:31:43 GMT"
}
] | 1,697,155,200,000 | [
[
"Chen",
"Dake",
""
],
[
"Wang",
"Hanbin",
""
],
[
"Huo",
"Yunhao",
""
],
[
"Li",
"Yuzhao",
""
],
[
"Zhang",
"Haoyang",
""
]
] |
2310.08118 | Karthik Valmeekam | Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati | Can Large Language Models Really Improve by Self-critiquing Their Own
Plans? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There have been widespread claims about Large Language Models (LLMs) being
able to successfully verify or self-critique their candidate solutions in
reasoning problems in an iterative mode. Intrigued by those claims, in this
paper we set out to investigate the verification/self-critiquing abilities of
large language models in the context of planning. We evaluate a planning system
that employs LLMs for both plan generation and verification. We assess the
verifier LLM's performance against ground-truth verification, the impact of
self-critiquing on plan generation, and the influence of varying feedback
levels on system performance. Using GPT-4, a state-of-the-art LLM, for both
generation and verification, our findings reveal that self-critiquing appears
to diminish plan generation performance, especially when compared to systems
with external, sound verifiers; moreover, the LLM verifiers in that system
produce a notable number of false positives, compromising the system's
reliability.
Additionally, the nature of feedback, whether binary or detailed, showed
minimal impact on plan generation. Collectively, our results cast doubt on the
effectiveness of LLMs in a self-critiquing, iterative framework for planning
tasks.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 08:22:37 GMT"
}
] | 1,697,155,200,000 | [
[
"Valmeekam",
"Karthik",
""
],
[
"Marquez",
"Matthew",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2310.08295 | Reneira Seeamber | Reneira Seeamber and Cosmin Badea | If our aim is to build morality into an artificial agent, how might we
begin to go about doing so? | 12 pages, 1 figure, | IEEE Intelligent Systems. 2023 | 10.1109/MIS.2023.3320875 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As Artificial Intelligence (AI) becomes pervasive in most fields, from
healthcare to autonomous driving, it is essential that we find successful ways
of building morality into our machines, especially for decision-making.
However, the question of what it means to be moral is still debated,
particularly in the context of AI. In this paper, we highlight the different
aspects that should be considered when building moral agents, including the
most relevant moral paradigms and challenges. We also discuss the top-down and
bottom-up approaches to design and the role of emotion and sentience in
morality. We then propose solutions including a hybrid approach to design and a
hierarchical approach to combining moral paradigms. We emphasize how governance
and policy are becoming ever more critical in AI Ethics and in ensuring that
the tasks we set for moral agents are attainable, that ethical behavior is
achieved, and that we obtain good AI.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 12:56:12 GMT"
}
] | 1,697,155,200,000 | [
[
"Seeamber",
"Reneira",
""
],
[
"Badea",
"Cosmin",
""
]
] |
2310.08328 | Xiao Xu | Xiao Xu, Lei Zhang, Bailong Liu, Zhizhen Liang and Xuefei Zhang | Transport-Hub-Aware Spatial-Temporal Adaptive Graph Transformer for
Traffic Flow Prediction | 11 pages, 4 figures. Spatial self-attention of this work extends
AAAI23 - PDFormer(arXiv:2301.07945) by other authors, cited as Ref. [17].
This work has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no longer be
accessible | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a core technology of Intelligent Transportation System (ITS), traffic flow
prediction has a wide range of applications. Traffic flow data are
spatial-temporal, which are not only correlated to spatial locations in road
networks, but also vary with temporal time indices. Existing methods have
solved the challenges in traffic flow prediction partly, focusing on modeling
spatial-temporal dependencies effectively, while not all intrinsic properties
of traffic flow data are utilized fully. Besides, there are very few attempts
at incremental learning of spatial-temporal data mining, and few previous works
can be easily transferred to the traffic flow prediction task. Motivated by the
challenge of incremental learning methods for traffic flow prediction and the
underutilization of intrinsic properties of road networks, we propose a
Transport-Hub-aware Spatial-Temporal adaptive graph transFormer (H-STFormer)
for traffic flow prediction. Specifically, we first design a novel spatial
self-attention module to capture the dynamic spatial dependencies. Three graph
masking matrices are integrated into spatial self-attentions to highlight both
short- and long-term dependences. Additionally, we employ a temporal
self-attention module to detect dynamic temporal patterns in the traffic flow
data. Finally, we design an extra spatial-temporal knowledge distillation
module for incremental learning of traffic flow prediction tasks. Through
extensive experiments, we show the effectiveness of H-STFormer in normal and
incremental traffic flow prediction tasks. The code is available at
https://github.com/Fantasy-Shaw/H-STFormer.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 13:44:35 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Oct 2023 15:28:44 GMT"
}
] | 1,697,500,800,000 | [
[
"Xu",
"Xiao",
""
],
[
"Zhang",
"Lei",
""
],
[
"Liu",
"Bailong",
""
],
[
"Liang",
"Zhizhen",
""
],
[
"Zhang",
"Xuefei",
""
]
] |
2310.08377 | Moritz Willig | Moritz Willig (1), Matej Ze\v{c}evi\'c (1), Devendra Singh Dhami (4),
Kristian Kersting (1,2,3) ((1) Technical University of Darmstadt, (2) Hessian
Center for AI, (3) German Research Center for AI, (4) Eindhoven University of
Technology) | Do Not Marginalize Mechanisms, Rather Consolidate! | 19 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural causal models (SCMs) are a powerful tool for understanding the
complex causal relationships that underlie many real-world systems. As these
systems grow in size, so do the number of variables and the complexity of the
interactions between them, making the models convoluted and difficult to
analyze. This is particularly true in the context of machine learning and
artificial intelligence, where an ever-increasing amount of data demands new
methods to simplify and compress large-scale SCMs. While methods for marginalizing and
abstracting SCM already exist today, they may destroy the causality of the
marginalized model. To alleviate this, we introduce the concept of
consolidating causal mechanisms to transform large-scale SCM while preserving
consistent interventional behaviour. We show consolidation is a powerful method
for simplifying SCM, discuss reduction of computational complexity and give a
perspective on generalizing abilities of consolidated SCM.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 14:47:51 GMT"
}
] | 1,697,155,200,000 | [
[
"Willig",
"Moritz",
""
],
[
"Zečević",
"Matej",
""
],
[
"Dhami",
"Devendra Singh",
""
],
[
"Kersting",
"Kristian",
""
]
] |
2310.08560 | Charles Packer | Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G.
Patil, Ion Stoica, Joseph E. Gonzalez | MemGPT: Towards LLMs as Operating Systems | Code and data available at https://research.memgpt.ai | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have revolutionized AI, but are constrained by
limited context windows, hindering their utility in tasks like extended
conversations and document analysis. To enable using context beyond limited
context windows, we propose virtual context management, a technique drawing
inspiration from hierarchical memory systems in traditional operating systems
that provide the appearance of large memory resources through data movement
between fast and slow memory. Using this technique, we introduce MemGPT
(Memory-GPT), a system that intelligently manages different memory tiers in
order to effectively provide extended context within the LLM's limited context
window, and utilizes interrupts to manage control flow between itself and the
user. We evaluate our OS-inspired design in two domains where the limited
context windows of modern LLMs severely handicaps their performance: document
analysis, where MemGPT is able to analyze large documents that far exceed the
underlying LLM's context window, and multi-session chat, where MemGPT can
create conversational agents that remember, reflect, and evolve dynamically
through long-term interactions with their users. We release MemGPT code and
data for our experiments at https://memgpt.ai.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 17:51:32 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Feb 2024 18:59:46 GMT"
}
] | 1,707,782,400,000 | [
[
"Packer",
"Charles",
""
],
[
"Wooders",
"Sarah",
""
],
[
"Lin",
"Kevin",
""
],
[
"Fang",
"Vivian",
""
],
[
"Patil",
"Shishir G.",
""
],
[
"Stoica",
"Ion",
""
],
[
"Gonzalez",
"Joseph E.",
""
]
] |
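The MemGPT abstract above describes virtual context management: a fast in-context tier backed by slower external storage, with data paged between them. The toy class below illustrates that tiering idea with a bounded deque and substring-based recall; the names, eviction policy, and retrieval method are illustrative and not MemGPT's actual interface.

```python
from collections import deque

class VirtualContext:
    """Toy illustration of tiered memory: a bounded in-context window plus
    unbounded external storage that can be searched and paged back in."""
    def __init__(self, max_in_context: int = 8):
        self.in_context = deque(maxlen=max_in_context)  # fast tier: goes into the prompt
        self.archive: list[str] = []                    # slow tier: external storage

    def add(self, message: str) -> None:
        if len(self.in_context) == self.in_context.maxlen:
            self.archive.append(self.in_context[0])     # evict oldest to the archive
        self.in_context.append(message)

    def recall(self, query: str, k: int = 3) -> list[str]:
        hits = [m for m in self.archive if query.lower() in m.lower()]
        return hits[-k:]                                # page relevant items back in

    def prompt(self) -> str:
        return "\n".join(self.in_context)

ctx = VirtualContext(max_in_context=2)
for msg in ["user: my name is Ada", "assistant: hi Ada", "user: what's my name?"]:
    ctx.add(msg)
print(ctx.recall("name"))  # ['user: my name is Ada']
```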
2310.08737 | Yuanwei Qu | Yuanwei Qu, Baifan Zhou, Arild Waaler, David Cameron | Real-Time Event Detection with Random Forests and Temporal Convolutional
Networks for More Sustainable Petroleum Industry | Paper accepted at PRICAI 2023 AI-Impact Track | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The petroleum industry is crucial for modern society, but the production
process is complex and risky. During the production, accidents or failures,
resulting from undesired production events, can cause severe environmental and
economic damage. Previous studies have investigated machine learning (ML)
methods for undesired event detection. However, the prediction of event
probability in real time, which is essential for undertaking early intervention
when an event is expected to happen, has been insufficiently addressed. This
paper proposes two ML approaches, random forests and temporal
convolutional networks, to detect undesired events in real-time. Results show
that our approaches can effectively classify event types and predict the
probability of their appearance, addressing the challenges uncovered in
previous studies and providing a more effective solution for failure event
management during the production.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2023 21:50:53 GMT"
}
] | 1,697,414,400,000 | [
[
"Qu",
"Yuanwei",
""
],
[
"Zhou",
"Baifan",
""
],
[
"Waaler",
"Arild",
""
],
[
"Cameron",
"David",
""
]
] |
2310.08803 | Palaash Agrawal | Palaash Agrawal, Cheston Tan and Heena Rathore | Advancing Perception in Artificial Intelligence through Principles of
Cognitive Science | Summary: a detailed review of the current state of perception models
through the lens of cognitive AI | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Although artificial intelligence (AI) has achieved many feats at a rapid
pace, there still exist open problems and fundamental shortcomings related to
performance and resource efficiency. Since AI researchers benchmark a
significant proportion of performance standards through human intelligence,
cognitive sciences-inspired AI is a promising domain of research. Studying
cognitive science can provide a fresh perspective to building fundamental
blocks in AI research, which can lead to improved performance and efficiency.
In this review paper, we focus on the cognitive functions of perception, which
is the process of taking signals from one's surroundings as input, and
processing them to understand the environment. Particularly, we study and
compare its various processes through the lens of both cognitive sciences and
AI. Through this study, we review all current major theories from various
sub-disciplines of cognitive science (specifically neuroscience, psychology and
linguistics), and draw parallels with theories and techniques from current
practices in AI. We, hence, present a detailed collection of methods in AI for
researchers to build AI systems inspired by cognitive science. Further, through
the process of reviewing the state of cognitive-inspired AI, we point out many
gaps in the current state of AI (with respect to the performance of the human
brain), and hence present potential directions for researchers to develop
better perception systems in AI.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 01:21:55 GMT"
}
] | 1,697,414,400,000 | [
[
"Agrawal",
"Palaash",
""
],
[
"Tan",
"Cheston",
""
],
[
"Rathore",
"Heena",
""
]
] |
2310.08842 | Ian Watson | Ian Watson | A Case-Based Persistent Memory for a Large Language Model | 8 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Case-based reasoning (CBR) as a methodology for problem-solving can use any
appropriate computational technique. This position paper argues that CBR
researchers have somewhat overlooked recent developments in deep learning and
large language models (LLMs). The underlying technical developments that have
enabled the recent breakthroughs in AI have strong synergies with CBR and could
be used to provide a persistent memory for LLMs to make progress towards
Artificial General Intelligence.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 03:56:38 GMT"
},
{
"version": "v2",
"created": "Tue, 7 May 2024 04:36:42 GMT"
}
] | 1,715,126,400,000 | [
[
"Watson",
"Ian",
""
]
] |
2310.08849 | Md. Tanzib Hosain | Md. Tanzib Hosain, Mehedi Hasan Anik, Sadman Rafi, Rana Tabassum,
Khaleque Insia, Md. Mehrab Siddiky | Path To Gain Functional Transparency In Artificial Intelligence With
Meaningful Explainability | Hosain, M. T. , Anik, M. H. , Rafi, S. , Tabassum, R. , Insia, K. &
S{\i}dd{\i}ky, M. M. (). Path To Gain Functional Transparency In Artificial
Intelligence With Meaningful Explainability . Journal of Metaverse , 3 (2) ,
166-180 . DOI: 10.57019/jmv.1306685 | null | 10.57019/jmv.1306685 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) is rapidly integrating into various aspects of
our daily lives, influencing decision-making processes in areas such as
targeted advertising and matchmaking algorithms. As AI systems become
increasingly sophisticated, ensuring their transparency and explainability
becomes crucial. Functional transparency is a fundamental aspect of algorithmic
decision-making systems, allowing stakeholders to comprehend the inner workings
of these systems and enabling them to evaluate their fairness and accuracy.
However, achieving functional transparency poses significant challenges that
need to be addressed. In this paper, we propose a design for user-centered
compliant-by-design transparency in transparent systems. We emphasize that the
development of transparent and explainable AI systems is a complex and
multidisciplinary endeavor, necessitating collaboration among researchers from
diverse fields such as computer science, artificial intelligence, ethics, law,
and social science. By providing a comprehensive understanding of the
challenges associated with transparency in AI systems and proposing a
user-centered design framework, we aim to facilitate the development of AI
systems that are accountable, trustworthy, and aligned with societal values.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 04:25:30 GMT"
}
] | 1,697,414,400,000 | [
[
"Hosain",
"Md. Tanzib",
""
],
[
"Anik",
"Mehedi Hasan",
""
],
[
"Rafi",
"Sadman",
""
],
[
"Tabassum",
"Rana",
""
],
[
"Insia",
"Khaleque",
""
],
[
"Siddiky",
"Md. Mehrab",
""
]
] |
2310.08915 | Yuxin Zhang | Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia
Han, Jared Tanner, Shiwei Liu, Rongrong Ji | Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs | Published as a conference paper at ICLR 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The ever-increasing large language models (LLMs), though opening a potential
path for the upcoming artificial general intelligence, sadly drops a daunting
obstacle on the way towards their on-device deployment. As one of the most
well-established pre-LLMs approaches in reducing model complexity, network
pruning appears to lag behind in the era of LLMs, due mostly to its costly
fine-tuning (or re-training) necessity under the massive volumes of model
parameter and training data. To close this industry-academia gap, we introduce
Dynamic Sparse No Training (DSnoT), a training-free fine-tuning approach that
slightly updates sparse LLMs without the expensive backpropagation and any
weight updates. Inspired by the Dynamic Sparse Training, DSnoT minimizes the
reconstruction error between the dense and sparse LLMs, in the fashion of
performing iterative weight pruning-and-growing on top of sparse LLMs. To
accomplish this purpose, DSnoT particularly takes into account the anticipated
reduction in reconstruction error for pruning and growing, as well as the
variance w.r.t. different input data for growing each weight. This practice can
be executed efficiently in linear time since it obviates the need for
backpropagation for fine-tuning LLMs. Extensive experiments on LLaMA-V1/V2,
Vicuna, and OPT across various benchmarks demonstrate the effectiveness of
DSnoT in enhancing the performance of sparse LLMs, especially at high sparsity
levels. For instance, DSnoT is able to outperform the state-of-the-art Wanda by
26.79 perplexity at 70% sparsity with LLaMA-7B. Our paper offers fresh insights
into how to fine-tune sparse LLMs in an efficient training-free manner and open
new venues to scale the great potential of sparsity to LLMs. Codes are
available at https://github.com/zyxxmu/DSnoT.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 07:38:52 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Oct 2023 05:07:25 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Feb 2024 02:51:30 GMT"
}
] | 1,708,992,000,000 | [
[
"Zhang",
"Yuxin",
""
],
[
"Zhao",
"Lirui",
""
],
[
"Lin",
"Mingbao",
""
],
[
"Sun",
"Yunyun",
""
],
[
"Yao",
"Yiwu",
""
],
[
"Han",
"Xingjia",
""
],
[
"Tanner",
"Jared",
""
],
[
"Liu",
"Shiwei",
""
],
[
"Ji",
"Rongrong",
""
]
] |
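The DSnoT abstract above performs training-free, iterative weight pruning-and-growing on sparse LLMs guided by expected reconstruction-error reduction. The sketch below shows one sparsity-preserving prune-and-grow step for a single linear layer, using a Wanda-style |weight| x activation-norm saliency as a stand-in for the paper's criterion; it is an illustration, not the DSnoT algorithm.

```python
import numpy as np

def prune_and_grow_step(W: np.ndarray, mask: np.ndarray, X: np.ndarray) -> np.ndarray:
    """One training-free mask update for a linear layer y = X @ W.
    For each output column, prune the weakest kept weight and grow the strongest
    removed one, judged by a Wanda-style saliency |w| * ||x||. Sparsity is preserved."""
    saliency = np.abs(W) * np.linalg.norm(X, axis=0, keepdims=True).T  # (in, out)
    new_mask = mask.copy()
    for j in range(W.shape[1]):
        kept, removed = np.where(mask[:, j] == 1)[0], np.where(mask[:, j] == 0)[0]
        if kept.size == 0 or removed.size == 0:
            continue
        worst_kept = kept[np.argmin(saliency[kept, j])]
        best_removed = removed[np.argmax(saliency[removed, j])]
        if saliency[best_removed, j] > saliency[worst_kept, j]:   # only swap if it helps
            new_mask[worst_kept, j], new_mask[best_removed, j] = 0, 1
    return new_mask

W = np.random.randn(64, 16); X = np.random.randn(32, 64)
mask = (np.random.rand(64, 16) > 0.5).astype(W.dtype)
mask = prune_and_grow_step(W, mask, X)  # sparse weights to use: W * mask
```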
2310.08977 | Shivom Aggarwal | Shivom Aggarwal, Shourya Mehra, Pritha Mitra | Multi-Purpose NLP Chatbot : Design, Methodology & Conclusion | Multilingual , Voice Conversion , Emotion Recognition , Offline
Service , Financial Advisor , Product Preference , Customer Reaction
Prediction | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With a major focus on its history, difficulties, and promise, this research
paper provides a thorough analysis of the chatbot technology environment as it
exists today. It provides a very flexible chatbot system that makes use of
reinforcement learning strategies to improve user interactions and
conversational experiences. Additionally, this system makes use of sentiment
analysis and natural language processing to determine user moods. The chatbot
is a valuable tool across many fields thanks to its amazing characteristics,
which include voice-to-voice conversation, multilingual support [12], advising
skills, offline functioning, and quick help features. The complexity of chatbot
technology development is also explored in this study, along with the causes
that have propelled these developments and their far-reaching effects on a
range of sectors. According to the study, three elements are crucial:
1) Even without explicit profile information, the chatbot system is built to
adeptly understand unique consumer preferences and fluctuating satisfaction
levels. With the use of this capacity, user interactions are made to meet their
wants and preferences. 2) Using a complex method that interlaces Multiview
voice chat information, the chatbot may precisely simulate users' actual
experiences. This aids in developing more genuine and interesting discussions.
3) The study presents an original method for improving the black-box deep
learning models' capacity for prediction. This improvement is made possible by
introducing dynamic satisfaction measurements that are theory-driven, which
leads to more precise forecasts of consumer reaction.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 09:47:24 GMT"
}
] | 1,697,414,400,000 | [
[
"Aggarwal",
"Shivom",
""
],
[
"Mehra",
"Shourya",
""
],
[
"Mitra",
"Pritha",
""
]
] |
2310.09049 | Lei Yao | Lei Yao, Yong Zhang, Zilong Yan and Jialu Tian | SAI: Solving AI Tasks with Systematic Artificial Intelligence in
Communication Network | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the rapid development of artificial intelligence, solving complex AI tasks
is a crucial technology in intelligent mobile networks. Despite the good
performance of specialized AI models in intelligent mobile networks, they are
unable to handle complicated AI tasks. To address this challenge, we propose
Systematic Artificial Intelligence (SAI), which is a framework designed to
solve AI tasks by leveraging Large Language Models (LLMs) and JSON-format
intent-based input to connect self-designed model library and database.
Specifically, we first design a multi-input component, which simultaneously
integrates Large Language Models (LLMs) and JSON-format intent-based inputs to
fulfill the diverse intent requirements of different users. In addition, we
introduce a model library module based on model cards, which are used to
pairwise match different modules for model composition. Model cards
contain the corresponding model's name and the required performance metrics.
Then, upon receiving user network requirements, we execute each subtask for
multiple selected model combinations and provide output based on the execution
results and LLM feedback. By leveraging the language capabilities of LLMs and
the abundant AI models in the model library, SAI can complete numerous complex
AI tasks in the communication network, achieving impressive results in network
optimization, resource allocation, and other challenging tasks.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 12:14:58 GMT"
}
] | 1,697,414,400,000 | [
[
"Yao",
"Lei",
""
],
[
"Zhang",
"Yong",
""
],
[
"Yan",
"Zilong",
""
],
[
"Tian",
"Jialu",
""
]
] |
2310.09158 | Meiqi Chen | Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, and Dongsheng
Li | Learning To Teach Large Language Models Logical Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have gained enormous attention from both
academia and industry, due to their exceptional ability in language generation
and extremely powerful generalization. However, current LLMs still output
unreliable content in practical reasoning tasks due to their inherent issues
(e.g., hallucination). To better disentangle this problem, in this paper, we
conduct an in-depth investigation to systematically explore the capability of
LLMs in logical reasoning. In more detail, we first investigate the deficiency
of LLMs in logical reasoning on different tasks, including event relation
extraction and deductive reasoning. Our study demonstrates that LLMs are not
good reasoners in solving tasks with rigorous reasoning and will produce
counterfactual answers that must be iteratively refined. Therefore, we
comprehensively explore different strategies to endow LLMs with logical
reasoning ability, and thus enable them to generate more logically consistent
answers across different scenarios. Based on our approach, we also contribute a
synthesized dataset (LLM-LR) involving multi-hop reasoning for evaluation and
pre-training. Extensive quantitative and qualitative analyses on different
tasks also validate the effectiveness and necessity of teaching LLMs with logic
and provide insights for solving practical tasks with LLMs in future work.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 14:53:06 GMT"
}
] | 1,697,414,400,000 | [
[
"Chen",
"Meiqi",
""
],
[
"Ma",
"Yubo",
""
],
[
"Song",
"Kaitao",
""
],
[
"Cao",
"Yixin",
""
],
[
"Zhang",
"Yan",
""
],
[
"Li",
"Dongsheng",
""
]
] |
2310.09217 | Jason Hausenloy | Jason Hausenloy, Andrea Miotti, Claire Dennis | Multinational AGI Consortium (MAGIC): A Proposal for International
Coordination on AI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a Multinational Artificial General Intelligence
Consortium (MAGIC) to mitigate existential risks from advanced artificial
intelligence (AI). MAGIC would be the only institution in the world permitted
to develop advanced AI, enforced through a global moratorium by its signatory
members on all other advanced AI development. MAGIC would be exclusive,
safety-focused, highly secure, and collectively supported by member states,
with benefits distributed equitably among signatories. MAGIC would allow narrow
AI models to flourish while significantly reducing the possibility of
misaligned, rogue, breakout, or runaway outcomes of general-purpose systems. We
do not address the political feasibility of implementing a moratorium or
address the specific legislative strategies and rules needed to enforce a ban
on high-capacity AGI training runs. Instead, we propose one positive vision of
the future, where MAGIC, as a global governance regime, can lay the groundwork
for long-term, safe regulation of advanced AI.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 16:12:26 GMT"
}
] | 1,697,414,400,000 | [
[
"Hausenloy",
"Jason",
""
],
[
"Miotti",
"Andrea",
""
],
[
"Dennis",
"Claire",
""
]
] |
2310.09383 | Maxwell Jacobson | Maxwell Joseph Jacobson, Yexiang Xue | Integrating Symbolic Reasoning into Neural Generative Models for Design
Generation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Design generation requires tight integration of neural and symbolic
reasoning, as good design must meet explicit user needs and honor implicit
rules for aesthetics, utility, and convenience. Current automated design tools
driven by neural networks produce appealing designs, but cannot satisfy user
specifications and utility requirements. Symbolic reasoning tools, such as
constraint programming, cannot perceive low-level visual information in images
or capture subtle aspects such as aesthetics. We introduce the Spatial
Reasoning Integrated Generator (SPRING) for design generation. SPRING embeds a
neural and symbolic integrated spatial reasoning module inside the deep
generative network. The spatial reasoning module decides the locations of
objects to be generated in the form of bounding boxes, which are predicted by a
recurrent neural network and filtered by symbolic constraint satisfaction.
Embedding symbolic reasoning into neural generation guarantees that the output
of SPRING satisfies user requirements. Furthermore, SPRING offers
interpretability, allowing users to visualize and diagnose the generation
process through the bounding boxes. SPRING is also adept at managing novel user
specifications not encountered during its training, thanks to its proficiency
in zero-shot constraint transfer. Quantitative evaluations and a human study
reveal that SPRING outperforms baseline generative models, excelling in
delivering high design quality and better meeting user specifications.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 20:03:22 GMT"
}
] | 1,697,500,800,000 | [
[
"Jacobson",
"Maxwell Joseph",
""
],
[
"Xue",
"Yexiang",
""
]
] |
2310.09696 | XingJiao Wu | Shuwen Yang, Anran Wu, Xingjiao Wu, Luwei Xiao, Tianlong Ma, Cheng
Jin, Liang He | Progressive Evidence Refinement for Open-domain Multimodal Retrieval
Question Answering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained multimodal models have achieved significant success in
retrieval-based question answering. However, current multimodal retrieval
question-answering models face two main challenges. Firstly, utilizing
compressed evidence features as input to the model results in the loss of
fine-grained information within the evidence. Secondly, a gap exists between
the feature extraction of evidence and the question, which hinders the model
from effectively extracting critical features from the evidence based on the
given question. We propose a two-stage framework for evidence retrieval and
question-answering to alleviate these issues. First and foremost, we propose a
progressive evidence refinement strategy for selecting crucial evidence. This
strategy employs an iterative evidence retrieval approach to uncover the
logical sequence among the evidence pieces. It incorporates two rounds of
filtering to optimize the solution space, thus further ensuring temporal
efficiency. Subsequently, we introduce a semi-supervised contrastive learning
training strategy based on negative samples to expand the scope of the question
domain, allowing for a more thorough exploration of latent knowledge within
known samples. Finally, in order to mitigate the loss of fine-grained
information, we devise a multi-turn retrieval and question-answering strategy
to handle multimodal inputs. This strategy involves incorporating multimodal
evidence directly into the model as part of the historical dialogue and
question. Meanwhile, we leverage a cross-modal attention mechanism to capture
the underlying connections between the evidence and the question, and the
answer is generated through a decoding generation approach. We validate the
model's effectiveness through extensive experiments, achieving outstanding
performance on WebQA and MultimodelQA benchmark tests.
| [
{
"version": "v1",
"created": "Sun, 15 Oct 2023 01:18:39 GMT"
}
] | 1,697,500,800,000 | [
[
"Yang",
"Shuwen",
""
],
[
"Wu",
"Anran",
""
],
[
"Wu",
"Xingjiao",
""
],
[
"Xiao",
"Luwei",
""
],
[
"Ma",
"Tianlong",
""
],
[
"Jin",
"Cheng",
""
],
[
"He",
"Liang",
""
]
] |