id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.09754 | Huanhuan Ma | Huanhuan Ma and Weizhi Xu and Yifan Wei and Liuji Chen and Liang Wang
and Qiang Liu and Shu Wu and Liang Wang | EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fact verification aims to automatically probe the veracity of a claim based
on several pieces of evidence. Existing works mainly pursue accuracy
improvements while neglecting explainability, a critical capability of fact
verification systems. Constructing an explainable fact verification system in a
complex multi-hop scenario is consistently impeded by the absence of a
relevant, high-quality dataset. Previous datasets either suffer from excessive
simplification or fail to incorporate essential considerations for
explainability. To address this, we present EX-FEVER, a pioneering dataset for
multi-hop explainable fact verification. It contains over 60,000 claims
involving 2-hop and 3-hop reasoning, each created by summarizing and modifying
information from hyperlinked Wikipedia documents. Each instance is accompanied
by a veracity label and an explanation that outlines the reasoning path
supporting the veracity classification. Additionally, we demonstrate a novel
baseline system on our EX-FEVER dataset, showcasing document retrieval,
explanation generation, and claim verification, and validate the significance
of our dataset. Furthermore, we highlight the potential of utilizing Large
Language Models in the fact verification task. We hope our dataset could make a
significant contribution by providing ample opportunities to explore the
integration of natural language explanations in the domain of fact
verification.
| [
{
"version": "v1",
"created": "Sun, 15 Oct 2023 06:46:15 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2024 06:39:44 GMT"
}
] | 1,708,473,600,000 | [
[
"Ma",
"Huanhuan",
""
],
[
"Xu",
"Weizhi",
""
],
[
"Wei",
"Yifan",
""
],
[
"Chen",
"Liuji",
""
],
[
"Wang",
"Liang",
""
],
[
"Liu",
"Qiang",
""
],
[
"Wu",
"Shu",
""
],
[
"Wang",
"Liang",
""
]
] |
2310.09774 | Hongjun Wu | Hongjun Wu and Di Wang | Worst-Case Analysis is Maximum-A-Posteriori Estimation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The worst-case resource usage of a program can provide useful information for
many software-engineering tasks, such as performance optimization and
algorithmic-complexity-vulnerability discovery. This paper presents a generic,
adaptive, and sound fuzzing framework, called DSE-SMC, for estimating
worst-case resource usage. DSE-SMC is generic because it is black-box as long
as the user provides an interface for retrieving resource-usage information on
a given input; adaptive because it automatically balances between exploration
and exploitation of candidate inputs; and sound because it is guaranteed to
converge to the true resource-usage distribution of the analyzed program.
DSE-SMC is built upon a key observation: resource accumulation in a program
is isomorphic to the soft-conditioning mechanism in Bayesian probabilistic
programming; thus, worst-case resource analysis is isomorphic to the
maximum-a-posteriori-estimation problem of Bayesian statistics. DSE-SMC
incorporates sequential Monte Carlo (SMC) -- a generic framework for Bayesian
inference -- with adaptive evolutionary fuzzing algorithms, in a sound manner,
i.e., DSE-SMC asymptotically converges to the posterior distribution induced by
resource-usage behavior of the analyzed program. Experimental evaluation on
Java applications demonstrates that DSE-SMC is significantly more effective
than existing black-box fuzzing methods for worst-case analysis.
| [
{
"version": "v1",
"created": "Sun, 15 Oct 2023 08:24:02 GMT"
}
] | 1,697,500,800,000 | [
[
"Wu",
"Hongjun",
""
],
[
"Wang",
"Di",
""
]
] |
2310.09781 | Xiangnan Chen | Xiangnan Chen, Wen Zhang, Zhen Yao, Mingyang Chen, Siliang Tang | Negative Sampling with Adaptive Denoising Mixup for Knowledge Graph
Embedding | Accepted by ISWC 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph embedding (KGE) aims to map entities and relations of a
knowledge graph (KG) into a low-dimensional and dense vector space via
contrasting the positive and negative triples. In the training process of KGEs,
negative sampling is essential to find high-quality negative triples since KGs
only contain positive triples. Most existing negative sampling methods assume
that non-existent triples with high scores are high-quality negative triples.
However, negative triples sampled by these methods are likely to contain noise.
Specifically, they ignore that non-existent triples with high scores might also
be true facts due to the incompleteness of KGs, which are usually called false
negative triples. To alleviate the above issue, we propose an easily pluggable
denoising mixup method called DeMix, which generates high-quality triples by
refining sampled negative triples in a self-supervised manner. Given a sampled
unlabeled triple, DeMix firstly classifies it into a marginal pseudo-negative
triple or a negative triple based on the judgment of the KGE model itself.
Secondly, it selects an appropriate mixup partner for the current triple to
synthesize a partially positive or a harder negative triple. Experimental
results on the knowledge graph completion task show that the proposed DeMix is
superior to other negative sampling techniques, providing the corresponding
KGE models with faster convergence and better link prediction results.
| [
{
"version": "v1",
"created": "Sun, 15 Oct 2023 09:01:24 GMT"
}
] | 1,697,500,800,000 | [
[
"Chen",
"Xiangnan",
""
],
[
"Zhang",
"Wen",
""
],
[
"Yao",
"Zhen",
""
],
[
"Chen",
"Mingyang",
""
],
[
"Tang",
"Siliang",
""
]
] |
2310.09926 | Shiladitya Dutta | Shiladitya Dutta, Hongbo Wei, Lars van der Laan, Ahmed M. Alaa | Estimating Uncertainty in Multimodal Foundation Models using Public
Internet Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Foundation models are trained on vast amounts of data at scale using
self-supervised learning, enabling adaptation to a wide range of downstream
tasks. At test time, these models exhibit zero-shot capabilities through which
they can classify previously unseen (user-specified) categories. In this paper,
we address the problem of quantifying uncertainty in these zero-shot
predictions. We propose a heuristic approach for uncertainty estimation in
zero-shot settings using conformal prediction with web data. Given a set of
classes at test time, we conduct zero-shot classification with CLIP-style
models using a prompt template, e.g., "an image of a <category>", and use the
same template as a search query to source calibration data from the open web.
Given a web-based calibration set, we apply conformal prediction with a novel
conformity score that accounts for potential errors in retrieved web data. We
evaluate the utility of our proposed method in biomedical foundation models;
our preliminary results show that web-based conformal prediction sets achieve
the target coverage with satisfactory efficiency on a variety of biomedical
datasets.
| [
{
"version": "v1",
"created": "Sun, 15 Oct 2023 19:24:52 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Nov 2023 05:54:48 GMT"
}
] | 1,701,129,600,000 | [
[
"Dutta",
"Shiladitya",
""
],
[
"Wei",
"Hongbo",
""
],
[
"van der Laan",
"Lars",
""
],
[
"Alaa",
"Ahmed M.",
""
]
] |
2310.10174 | Gyunam Park | Gyunam Park, Sevde Aydin, Cuneyt Ugur, Wil M. P. van der Aalst | Analyzing An After-Sales Service Process Using Object-Centric Process
Mining: A Case Study | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Process mining, a technique turning event data into business process
insights, has traditionally operated on the assumption that each event
corresponds to a singular case or object. However, many real-world processes
are intertwined with multiple objects, making them object-centric. This paper
focuses on the emerging domain of object-centric process mining, highlighting
its potential yet underexplored benefits in actual operational scenarios.
Through an in-depth case study of Borusan Cat's after-sales service process,
this study emphasizes the capability of object-centric process mining to
capture entangled business process details. Utilizing an event log of
approximately 65,000 events, our analysis underscores the importance of
embracing this paradigm for richer business insights and enhanced operational
improvements.
| [
{
"version": "v1",
"created": "Mon, 16 Oct 2023 08:34:41 GMT"
}
] | 1,697,500,800,000 | [
[
"Park",
"Gyunam",
""
],
[
"Aydin",
"Sevde",
""
],
[
"Ugur",
"Cuneyt",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
] |
2310.10436 | Nian Li | Nian Li, Chen Gao, Mingyu Li, Yong Li, Qingmin Liao | EconAgent: Large Language Model-Empowered Agents for Simulating
Macroeconomic Activities | ACL 2024 (main conference) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The advent of artificial intelligence has led to a growing emphasis on
data-driven modeling in macroeconomics, with agent-based modeling (ABM)
emerging as a prominent bottom-up simulation paradigm. In ABM, agents (e.g.,
households, firms) interact within a macroeconomic environment, collectively
generating market dynamics. Existing agent modeling typically employs
predetermined rules or learning-based neural networks for decision-making.
However, customizing each agent presents significant challenges, complicating
the modeling of agent heterogeneity. Additionally, the influence of
multi-period market dynamics and multifaceted macroeconomic factors is often
overlooked in decision-making processes. In this work, we introduce EconAgent,
a large language model-empowered agent with human-like characteristics for
macroeconomic simulation. We first construct a simulation environment that
incorporates various market dynamics driven by agents' decisions regarding work
and consumption. Through the perception module, we create heterogeneous agents
with distinct decision-making mechanisms. Furthermore, we model the impact of
macroeconomic trends using a memory module, which allows agents to reflect on
past individual experiences and market dynamics. Simulation experiments show
that EconAgent can make realistic decisions, leading to more reasonable
macroeconomic phenomena compared to existing rule-based or learning-based
agents. Our codes are released at
https://github.com/tsinghua-fib-lab/ACL24-EconAgent.
| [
{
"version": "v1",
"created": "Mon, 16 Oct 2023 14:19:40 GMT"
},
{
"version": "v2",
"created": "Tue, 21 May 2024 02:49:28 GMT"
},
{
"version": "v3",
"created": "Wed, 22 May 2024 07:20:31 GMT"
},
{
"version": "v4",
"created": "Fri, 24 May 2024 02:53:59 GMT"
}
] | 1,716,768,000,000 | [
[
"Li",
"Nian",
""
],
[
"Gao",
"Chen",
""
],
[
"Li",
"Mingyu",
""
],
[
"Li",
"Yong",
""
],
[
"Liao",
"Qingmin",
""
]
] |
2310.11029 | Swaraj Dube | Ashley Fernandez, Swaraj Dube | Core Building Blocks: Next Gen Geo Spatial GPT Application | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes MapGPT, a novel approach that integrates the
capabilities of language models, specifically large language models (LLMs),
with spatial data processing techniques, aiming to bridge the gap between
natural language understanding and spatial data
analysis by highlighting the relevant core building blocks. By combining the
strengths of LLMs and geospatial analysis, MapGPT enables more accurate and
contextually aware responses to location-based queries. The proposed
methodology highlights building LLMs on spatial and textual data, utilizing
tokenization and vector representations specific to spatial information. The
paper also explores the challenges associated with generating spatial vector
representations. Furthermore, the study discusses the potential of
computational capabilities within MapGPT, allowing users to perform geospatial
computations and obtain visualized outputs. Overall, this research paper
presents the building blocks and methodology of MapGPT, highlighting its
potential to enhance spatial data understanding and generation in natural
language processing applications.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 06:59:31 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Oct 2023 10:15:40 GMT"
}
] | 1,697,673,600,000 | [
[
"Fernandez",
"Ashley",
""
],
[
"Dube",
"Swaraj",
""
]
] |
2310.11154 | Neville Kenneth Kitson | Neville K Kitson and Anthony C Constantinou | Causal discovery using dynamically requested knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causal Bayesian Networks (CBNs) are an important tool for reasoning under
uncertainty in complex real-world systems. Determining the graphical structure
of a CBN remains a key challenge and is undertaken either by eliciting it from
humans, using machine learning to learn it from data, or using a combination of
these two approaches. In the latter case, human knowledge is generally provided
to the algorithm before it starts, but here we investigate a novel approach
where the structure learning algorithm itself dynamically identifies and
requests knowledge for relationships that the algorithm identifies as uncertain
during structure learning. We integrate this approach into the Tabu structure
learning algorithm and show that it offers considerable gains in structural
accuracy, which are generally larger than those offered by existing approaches
for integrating knowledge. We suggest that a variant which requests only arc
orientation information may be particularly useful where the practitioner has
little preexisting knowledge of the causal relationships. As well as offering
improved accuracy, the approach can use human expertise more effectively and
contributes to making the structure learning process more transparent.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 11:21:23 GMT"
}
] | 1,697,587,200,000 | [
[
"Kitson",
"Neville K",
""
],
[
"Constantinou",
"Anthony C",
""
]
] |
2310.11246 | Yao Xu | Yao Xu, Shizhu He, Cunguang Wang, Li Cai, Kang Liu, Jun Zhao | Query2Triple: Unified Query Encoding for Answering Diverse Complex
Queries over Knowledge Graphs | Accepted by EMNLP 2023 findings | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Complex Query Answering (CQA) is a challenging task over Knowledge Graphs (KGs).
Due to the incompleteness of KGs, query embedding (QE) methods have been
proposed to encode queries and entities into the same embedding space, and
treat logical operators as neural set operators to obtain answers. However,
these methods train KG embeddings and neural set operators concurrently on both
simple (one-hop) and complex (multi-hop and logical) queries, which causes
performance degradation on simple queries and low training efficiency. In this
paper, we propose Query to Triple (Q2T), a novel approach that decouples the
training for simple and complex queries. Q2T divides the training into two
stages: (1) Pre-training a neural link predictor on simple queries to predict
tail entities based on the head entity and relation. (2) Training a query
encoder on complex queries to encode diverse complex queries into a unified
triple form that can be efficiently solved by the pretrained neural link
predictor. Our proposed Q2T is not only efficient to train, but also modular,
thus easily adaptable to various neural link predictors that have been studied
well. Extensive experiments demonstrate that, even without explicit modeling
for neural set operators, Q2T still achieves state-of-the-art performance on
diverse complex queries over three public benchmarks.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 13:13:30 GMT"
}
] | 1,697,587,200,000 | [
[
"Xu",
"Yao",
""
],
[
"He",
"Shizhu",
""
],
[
"Wang",
"Cunguang",
""
],
[
"Cai",
"Li",
""
],
[
"Liu",
"Kang",
""
],
[
"Zhao",
"Jun",
""
]
] |
2310.11334 | Stelios Triantafyllou | Stelios Triantafyllou, Aleksa Sukovic, Debmalya Mandal, Goran
Radanovic | Agent-Specific Effects: A Causal Effect Propagation Analysis in
Multi-Agent MDPs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Establishing causal relationships between actions and outcomes is fundamental
for accountable multi-agent decision-making. However, interpreting and
quantifying agents' contributions to such relationships pose significant
challenges. These challenges are particularly prominent in the context of
multi-agent sequential decision-making, where the causal effect of an agent's
action on the outcome depends on how other agents respond to that action. In
this paper, our objective is to present a systematic approach for attributing
the causal effects of agents' actions to the influence they exert on other
agents. Focusing on multi-agent Markov decision processes, we introduce
agent-specific effects (ASE), a novel causal quantity that measures the effect
of an agent's action on the outcome that propagates through other agents. We
then turn to the counterfactual counterpart of ASE (cf-ASE), provide a
sufficient set of conditions for identifying cf-ASE, and propose a practical
sampling-based algorithm for estimating it. Finally, we experimentally evaluate
the utility of cf-ASE through a simulation-based testbed, which includes a
sepsis management environment.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 15:12:56 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Feb 2024 15:17:49 GMT"
}
] | 1,707,177,600,000 | [
[
"Triantafyllou",
"Stelios",
""
],
[
"Sukovic",
"Aleksa",
""
],
[
"Mandal",
"Debmalya",
""
],
[
"Radanovic",
"Goran",
""
]
] |
2310.11614 | Leonardo Hernández Cano | Leonardo Hernandez Cano, Yewen Pu, Robert D. Hawkins, Josh Tenenbaum,
Armando Solar-Lezama | Learning a Hierarchical Planner from Humans in Multiple Generations | First two authors contributed equally | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A typical way in which a machine acquires knowledge from humans is by
programming. Compared to learning from demonstrations or experiences,
programmatic learning allows the machine to acquire a novel skill as soon as
the program is written, and, by building a library of programs, a machine can
quickly learn how to perform complex tasks. However, as programs often take
their execution contexts for granted, they are brittle when the contexts
change, making it difficult to adapt complex programs to new contexts. We
present natural programming, a library learning system that combines
programmatic learning with a hierarchical planner. Natural programming
maintains a library of decompositions, consisting of a goal, a linguistic
description of how this goal decomposes into sub-goals, and a concrete instance
of its decomposition into sub-goals. A user teaches the system via curriculum
building, by identifying a challenging yet not impossible goal along with
linguistic hints on how this goal may be decomposed into sub-goals. The system
solves for the goal via hierarchical planning, using the linguistic hints to
guide its probability distribution in proposing the right plans. The system
learns from this interaction by adding newly found decompositions in the
successful search into its library. Simulated studies and a human experiment
(n=360) on a controlled environment demonstrate that natural programming can
robustly compose programs learned from different users and contexts, adapting
faster and solving more complex tasks when compared to programmatic baselines.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 22:28:13 GMT"
}
] | 1,697,673,600,000 | [
[
"Cano",
"Leonardo Hernandez",
""
],
[
"Pu",
"Yewen",
""
],
[
"Hawkins",
"Robert D.",
""
],
[
"Tenenbaum",
"Josh",
""
],
[
"Solar-Lezama",
"Armando",
""
]
] |
2310.11709 | Zhen Zhang | Zhen Zhang, Bingqiao Luo, Shengliang Lu, Bingsheng He | Live Graph Lab: Towards Open, Dynamic and Real Transaction Graphs with
NFT | Accepted by NeurIPS 2023, Datasets and Benchmarks Track | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Numerous studies have been conducted to investigate the properties of
large-scale temporal graphs. Despite the ubiquity of these graphs in real-world
scenarios, it is usually impractical for us to obtain whole real-time graphs
due to privacy concerns and technical limitations. In this paper, we introduce
the concept of Live Graph Lab for temporal graphs, which enables open,
dynamic and real transaction graphs from blockchains. Among them, Non-fungible
tokens (NFTs) have become one of the most prominent parts of blockchain over
the past several years. With more than $40 billion in market capitalization, this
decentralized ecosystem produces massive, anonymous and real transaction
activities, which naturally forms a complicated transaction network. However,
there is limited understanding about the characteristics of this emerging NFT
ecosystem from a temporal graph analysis perspective. To mitigate this gap, we
instantiate a live graph with NFT transaction network and investigate its
dynamics to provide new observations and insights. Specifically, through
downloading and parsing the NFT transaction activities, we obtain a temporal
graph with more than 4.5 million nodes and 124 million edges. Then, a series of
measurements are presented to understand the properties of the NFT ecosystem.
Through comparisons with social, citation, and web networks, our analyses give
intriguing findings and point out potential directions for future exploration.
Finally, we also study machine learning models in this live graph to enrich the
current datasets and provide new opportunities for the graph community. The
source codes and dataset are available at https://livegraphlab.github.io.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 04:54:22 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Oct 2023 00:57:17 GMT"
}
] | 1,697,760,000,000 | [
[
"Zhang",
"Zhen",
""
],
[
"Luo",
"Bingqiao",
""
],
[
"Lu",
"Shengliang",
""
],
[
"He",
"Bingsheng",
""
]
] |
2310.11723 | Salvatore Flavio Pileggi Ph.D. | Inès Osman, Salvatore F. Pileggi, Sadok Ben Yahia | Uncertainty in Automated Ontology Matching: Lessons Learned from an
Empirical Experimentation | null | null | 10.3390/app14114679 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Data integration is considered a classic research field and a pressing need
within the information science community. Ontologies play a critical role in
such a process by providing well-consolidated support to link and semantically
integrate datasets via interoperability. This paper approaches data integration
from an application perspective, looking at techniques based on ontology
matching. An ontology-based process may only be considered adequate by assuming
manual matching of different sources of information. However, since the
approach becomes unrealistic once the system scales up, automation of the
matching process becomes a compelling need. Therefore, we have conducted
experiments on actual data with the support of existing tools for automatic
ontology matching from the scientific community. Even considering a relatively
simple case study (i.e., the spatio-temporal alignment of global indicators),
outcomes clearly show significant uncertainty resulting from errors and
inaccuracies along the automated matching process. More concretely, this paper
aims to test on real-world data a bottom-up knowledge-building approach,
discuss the lessons learned from the experimental results of the case study,
and draw conclusions about uncertainty and uncertainty management in an
automated ontology matching process. While the most common evaluation metrics
clearly demonstrate the unreliability of fully automated matching solutions,
properly designed semi-supervised approaches seem to be mature for a more
generalized application.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 05:42:51 GMT"
}
] | 1,717,027,200,000 | [
[
"Osman",
"Inès",
""
],
[
"Pileggi",
"Salvatore F.",
""
],
[
"Yahia",
"Sadok Ben",
""
]
] |
2310.11731 | Jianlan Luo | Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng,
Sergey Levine | Action-Quantized Offline Reinforcement Learning for Robotic Skill
Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The offline reinforcement learning (RL) paradigm provides a general recipe to
convert static behavior datasets into policies that can perform better than the
policy that collected the data. While policy constraints, conservatism, and
other methods for mitigating distributional shifts have made offline
reinforcement learning more effective, the continuous action setting often
necessitates various approximations for applying these techniques. Many of
these challenges are greatly alleviated in discrete action settings, where
offline RL constraints and regularizers can often be computed more precisely or
even exactly. In this paper, we propose an adaptive scheme for action
quantization. We use a VQ-VAE to learn state-conditioned action quantization,
avoiding the exponential blowup that comes with naïve discretization of the
action space. We show that several state-of-the-art offline RL methods such as
IQL, CQL, and BRAC improve in performance on benchmarks when combined with our
proposed discretization scheme. We further validate our approach on a set of
challenging long-horizon complex robotic manipulation tasks in the Robomimic
environment, where our discretized offline RL algorithms are able to improve
upon their continuous counterparts by 2-3x. Our project page is at
https://saqrl.github.io/
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 06:07:10 GMT"
}
] | 1,697,673,600,000 | [
[
"Luo",
"Jianlan",
""
],
[
"Dong",
"Perry",
""
],
[
"Wu",
"Jeffrey",
""
],
[
"Kumar",
"Aviral",
""
],
[
"Geng",
"Xinyang",
""
],
[
"Levine",
"Sergey",
""
]
] |
2310.11818 | Jie Zhang | Zengguang Hao and Jie Zhang and Binxia Xu and Yafang Wang and Gerard
de Melo and Xiaolong Li | IntentDial: An Intent Graph based Multi-Turn Dialogue System with
Reasoning Path Visualization | 4 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intent detection and identification from multi-turn dialogue has become a
widely explored technique in conversational agents, for example, voice
assistants and intelligent customer services. The conventional approaches
typically cast the intent mining process as a classification task. Although
neural classifiers have proven adept at such classification tasks, the limited
interpretability of neural network models often impedes their practical deployment in real-world
settings. We present a novel graph-based multi-turn dialogue system called IntentDial,
which identifies a user's intent by identifying intent elements and a standard
query from a dynamically constructed and extensible intent graph using
reinforcement learning. In addition, we provide visualization components to
monitor the immediate reasoning path for each turn of a dialogue, which greatly
facilitates further improvement of the system.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 09:21:37 GMT"
}
] | 1,697,673,600,000 | [
[
"Hao",
"Zengguang",
""
],
[
"Zhang",
"Jie",
""
],
[
"Xu",
"Binxia",
""
],
[
"Wang",
"Yafang",
""
],
[
"de Melo",
"Gerard",
""
],
[
"Li",
"Xiaolong",
""
]
] |
2310.11846 | Yinmin Zhang | Jie Liu, Yinmin Zhang, Chuming Li, Chao Yang, Yaodong Yang, Yu Liu,
Wanli Ouyang | MaskMA: Towards Zero-Shot Multi-Agent Decision Making with Mask-Based
Collaborative Learning | 17 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building a single generalist agent with strong zero-shot capability has
recently sparked significant advancements. However, extending this capability
to multi-agent decision making scenarios presents challenges. Most current
works struggle with zero-shot transfer, due to two challenges particular to the
multi-agent settings: (a) a mismatch between centralized training and
decentralized execution; and (b) difficulties in creating generalizable
representations across diverse tasks due to varying agent numbers and action
spaces. To overcome these challenges, we propose a Mask-Based collaborative
learning framework for Multi-Agent decision making (MaskMA). Firstly, we
propose to randomly mask part of the units and collaboratively learn the
policies of unmasked units to handle the mismatch. In addition, MaskMA
integrates a generalizable action representation by dividing the action space
into intrinsic actions solely related to the unit itself and interactive
actions involving interactions with other units. This flexibility allows MaskMA
to tackle tasks with varying agent numbers and thus different action spaces.
Extensive experiments in SMAC reveal MaskMA, with a single model trained on 11
training maps, can achieve an impressive 77.8% average zero-shot win rate on 60
unseen test maps by decentralized execution, while also performing effectively
on other types of downstream tasks (e.g., varied policies collaboration, ally
malfunction, and ad hoc team play).
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 09:53:27 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Feb 2024 02:11:14 GMT"
}
] | 1,708,905,600,000 | [
[
"Liu",
"Jie",
""
],
[
"Zhang",
"Yinmin",
""
],
[
"Li",
"Chuming",
""
],
[
"Yang",
"Chao",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Liu",
"Yu",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
2310.12081 | Haoran Cheng | Haoran Cheng, Dixin Luo, Hongteng Xu | Robust Graph Matching Using An Unbalanced Hierarchical Optimal Transport
Framework | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph matching is one of the most significant graph analytic tasks, which
aims to find the node correspondence across different graphs. Most existing
graph matching approaches mainly rely on topological information, whose
performance is often sub-optimal and sensitive to data noise because they do
not fully leverage the multi-modal information hidden in graphs, such as node
attributes, subgraph structures, etc. In this study, we propose a novel and
robust graph matching method based on an unbalanced hierarchical optimal
transport (UHOT) framework, which, to our knowledge, makes the first attempt to
exploit cross-modal alignment in graph matching. In principle, applying
multi-layer message passing, we represent each graph as layer-wise node
embeddings corresponding to different modalities. Given two graphs, we align
their node embeddings within the same modality and across different modalities,
respectively. Then, we infer the node correspondence by the weighted average of
all the alignment results. This method is implemented as computing the UHOT
distance between the two graphs -- each alignment is achieved by a node-level
optimal transport plan between two sets of node embeddings, and the weights of
all alignment results correspond to an unbalanced modality-level optimal
transport plan. Experiments on various graph matching tasks demonstrate the
superiority and robustness of our method compared to state-of-the-art
approaches. Our implementation is available at
https://github.com/Dixin-Lab/UHOT-GM.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 16:16:53 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jan 2024 07:40:24 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Jan 2024 13:39:38 GMT"
},
{
"version": "v4",
"created": "Sun, 18 Feb 2024 12:21:29 GMT"
}
] | 1,708,387,200,000 | [
[
"Cheng",
"Haoran",
""
],
[
"Luo",
"Dixin",
""
],
[
"Xu",
"Hongteng",
""
]
] |
2310.12290 | Caiming Zheng | Baofu Fang, Caiming Zheng and Hao Wang | Fact-based Agent modeling for Multi-Agent Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In multi-agent systems, agents need to interact and collaborate with other
agents in environments. Agent modeling is crucial to facilitate agent
interactions and make adaptive cooperation strategies. However, it is
challenging for agents to model the beliefs, behaviors, and intentions of other
agents in a non-stationary environment where all agent policies are learned
simultaneously. In addition, the existing methods realize agent modeling
through behavior cloning, which assumes that the local information of other
agents can be accessed during execution or training. However, this assumption
is infeasible in unknown scenarios characterized by unknown agents, such as
competition teams, unreliable communication and federated learning due to
privacy concerns. To eliminate this assumption and achieve agent modeling in
unknown scenarios, a Fact-based Agent modeling (FAM) method is proposed in which
a fact-based belief inference (FBI) network models other agents in a partially
observable environment based only on its local information. The reward and
observation obtained by agents after taking actions are called facts, and FAM
uses facts as reconstruction target to learn the policy representation of other
agents through a variational autoencoder. We evaluate FAM on various Multiagent
Particle Environment (MPE) tasks and compare the results with several
state-of-the-art MARL algorithms. Experimental results show that compared with
baseline methods, FAM can effectively improve the efficiency of agent policy
learning by making adaptive cooperation strategies in multi-agent reinforcement
learning tasks, while achieving higher returns in complex
competitive-cooperative mixed scenarios.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2023 19:43:38 GMT"
}
] | 1,697,760,000,000 | [
[
"Fang",
"Baofu",
""
],
[
"Zheng",
"Caiming",
""
],
[
"Wang",
"Hao",
""
]
] |
2310.12397 | Kaya Stechly | Kaya Stechly, Matthew Marquez, Subbarao Kambhampati | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for
Reasoning Problems | 18 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered thanks to a slew of
counterexamples, a widespread belief in their iterative self-critique
capabilities persists. In this paper, we set out to systematically investigate
the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to
propositional satisfiability as well as practical problems like scheduling and
allocation. We present a principled empirical study of the performance of GPT-4
in solving graph coloring instances or verifying the correctness of candidate
colorings. In iterative modes, we experiment with the model critiquing its own
answers and an external correct reasoner verifying proposed solutions. In both
cases, we analyze whether the content of the criticisms actually affects bottom
line performance. The study seems to indicate that (i) LLMs are bad at solving
graph coloring instances (ii) they are no better at verifying a solution--and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions (iii) the correctness and content of the criticisms--whether by LLMs
or external solvers--seems largely irrelevant to the performance of iterative
prompting. We show that the observed increase in effectiveness is largely due
to the correct solution being fortuitously present in the top-k completions of
the prompt (and being recognized as such by an external verifier). Our results
thus call into question claims about the self-critiquing capabilities of state
of the art LLMs.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2023 00:56:37 GMT"
}
] | 1,697,760,000,000 | [
[
"Stechly",
"Kaya",
""
],
[
"Marquez",
"Matthew",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2310.12638 | Hanna Abi Akl | Hanna Abi Akl | PSYCHIC: A Neuro-Symbolic Framework for Knowledge Graph
Question-Answering Grounding | 10 pages, 3 figures, 2 tables, accepted for the Scholarly-QALD
challenge at the International Semantic Web Conference (ISWC) 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Scholarly Question Answering over Linked Data (Scholarly QALD) at The
International Semantic Web Conference (ISWC) 2023 challenge presents two
sub-tasks to tackle question answering (QA) over knowledge graphs (KGs). We
answer the KGQA over DBLP (DBLP-QUAD) task by proposing a neuro-symbolic (NS)
framework based on PSYCHIC, an extractive QA model capable of identifying the
query and entities related to a KG question. Our system achieved a F1 score of
00.18% on question answering and came in third place for entity linking (EL)
with a score of 71.00%.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2023 10:53:06 GMT"
}
] | 1,697,760,000,000 | [
[
"Akl",
"Hanna Abi",
""
]
] |
2310.13007 | Luca Deck | Luca Deck, Jakob Schoeffer, Maria De-Arteaga, Niklas K\"uhl | A Critical Survey on Fairness Benefits of Explainable AI | ACM Conference on Fairness, Accountability, and Transparency (ACM
FAccT '24) | null | 10.1145/3630106.3658990 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this critical survey, we analyze typical claims on the relationship
between explainable AI (XAI) and fairness to disentangle the multidimensional
relationship between these two concepts. Based on a systematic literature
review and a subsequent qualitative content analysis, we identify seven
archetypal claims from 175 scientific articles on the alleged fairness benefits
of XAI. We present crucial caveats with respect to these claims and provide an
entry point for future discussions around the potentials and limitations of XAI
for specific fairness desiderata. Importantly, we notice that claims are often
(i) vague and simplistic, (ii) lacking normative grounding, or (iii) poorly
aligned with the actual capabilities of XAI. We suggest conceiving of XAI not as
an ethical panacea but as one of many tools to approach the multidimensional,
sociotechnical challenge of algorithmic fairness. Moreover, when making a claim
about XAI and fairness, we emphasize the need to be more specific about what
kind of XAI method is used, which fairness desideratum it refers to, how
exactly it enables fairness, and which stakeholder benefits from XAI.
| [
{
"version": "v1",
"created": "Sun, 15 Oct 2023 08:17:45 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Nov 2023 17:23:42 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Nov 2023 12:35:41 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Nov 2023 08:50:19 GMT"
},
{
"version": "v5",
"created": "Wed, 7 Feb 2024 18:07:13 GMT"
},
{
"version": "v6",
"created": "Tue, 7 May 2024 15:50:27 GMT"
}
] | 1,715,126,400,000 | [
[
"Deck",
"Luca",
""
],
[
"Schoeffer",
"Jakob",
""
],
[
"De-Arteaga",
"Maria",
""
],
[
"Kühl",
"Niklas",
""
]
] |
2310.13192 | Vincenzo Calderonio | Vincenzo Calderonio | The opaque law of artificial intelligence | 17 pages, 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The purpose of this paper is to analyse the opacity of algorithms,
contextualized in the open debate on responsibility for artificial intelligence
causation; with an experimental approach by which, applying the proposed
conversational methodology of the Turing Test, we expect to evaluate the
performance of one of the best existing NLP models of generative AI (Chat-GPT)
to see how far it can go right now and what the shape of a legal regulation of
it could be. The analysis of the problem will be supported by a comment of
Italian classical law categories such as causality, intent and fault to
understand the problem of the usage of AI, focusing in particular on the
human-machine interaction. On the computer science side, to offer a technical
point of view of the logic used to craft these algorithms, the second chapter
proposes a practical interrogation of Chat-GPT aimed at finding some critical
points in the functioning of AI. The end of the paper will concentrate
on some existing legal solutions which can be applied to the problem, plus a
brief description of the approach proposed by the EU Artificial Intelligence Act.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2023 23:02:46 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Mar 2024 13:21:58 GMT"
}
] | 1,711,411,200,000 | [
[
"Calderonio",
"Vincenzo",
""
]
] |
2310.14420 | Henry Sprueill | Henry W. Sprueill, Carl Edwards, Mariefel V. Olarte, Udishnu Sanyal,
Heng Ji, Sutanay Choudhury | Monte Carlo Thought Search: Large Language Model Querying for Complex
Scientific Reasoning in Catalyst Design | null | In Proceedings of the 2023 Conference on Empirical Methods in
Natural Language Processing (EMNLP2023) Findings | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Discovering novel catalysts requires complex reasoning involving multiple
chemical properties and resultant trade-offs, leading to a combinatorial growth
in the search space. While large language models (LLM) have demonstrated novel
capabilities for chemistry through complex instruction-following capabilities
and high-quality reasoning, a goal-driven combinatorial search using LLMs has
not been explored in detail. In this work, we present a Monte Carlo Tree
Search-based approach that improves beyond state-of-the-art chain-of-thought
prompting variants to augment scientific reasoning. We introduce two new
reasoning datasets: 1) a curation of computational chemistry simulations, and
2) diverse questions written by catalysis researchers for reasoning about novel
chemical conversion processes. We improve over the best baseline by 25.8\% and
find that our approach can augment scientists' reasoning and discovery process
with novel insights.
| [
{
"version": "v1",
"created": "Sun, 22 Oct 2023 21:29:33 GMT"
}
] | 1,699,315,200,000 | [
[
"Sprueill",
"Henry W.",
""
],
[
"Edwards",
"Carl",
""
],
[
"Olarte",
"Mariefel V.",
""
],
[
"Sanyal",
"Udishnu",
""
],
[
"Ji",
"Heng",
""
],
[
"Choudhury",
"Sutanay",
""
]
] |
2310.14497 | Sopam Dasgupta | Sopam Dasgupta, Farhad Shakerin, Joaqu\'in Arias, Elmer Salazar, Gopal
Gupta | Counterfactual Explanation Generation with s(CASP) | 18 Pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machine learning models that automate decision-making are increasingly being
used in consequential areas such as loan approvals, pretrial bail, hiring, and
many more. Unfortunately, most of these models are black-boxes, i.e., they are
unable to reveal how they reach these prediction decisions. A need for
transparency demands justification for such predictions. An affected individual
might desire explanations to understand why a decision was made. Ethical and
legal considerations may further require informing the individual of changes in
the input attribute that could be made to produce a desirable outcome. This
paper focuses on the latter problem of automatically generating counterfactual
explanations. Our approach utilizes answer set programming and the s(CASP)
goal-directed ASP system. Answer Set Programming (ASP) is a well-known
knowledge representation and reasoning paradigm. s(CASP) is a goal-directed ASP
system that executes answer-set programs top-down without grounding them. The
query-driven nature of s(CASP) allows us to provide justifications as proof
trees, which makes it possible to analyze the generated counterfactual
explanations. We show how counterfactual explanations are computed and
justified by imagining multiple possible worlds where some or all factual
assumptions are untrue and, more importantly, how we can navigate between these
worlds. We also show how our algorithm can be used to find the Craig
Interpolant for a class of answer set programs for a failing query.
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2023 02:05:42 GMT"
}
] | 1,698,105,600,000 | [
[
"Dasgupta",
"Sopam",
""
],
[
"Shakerin",
"Farhad",
""
],
[
"Arias",
"Joaquín",
""
],
[
"Salazar",
"Elmer",
""
],
[
"Gupta",
"Gopal",
""
]
] |
2310.14899 | N'dah Jean Kouagou | N'Dah Jean Kouagou and Caglar Demir and Hamada M. Zahera and Adrian
Wilke and Stefan Heindorf and Jiayi Li and Axel-Cyrille Ngonga Ngomo | Universal Knowledge Graph Embeddings | 5 pages, 3 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A variety of knowledge graph embedding approaches have been developed. Most
of them obtain embeddings by learning the structure of the knowledge graph
within a link prediction setting. As a result, the embeddings reflect only the
semantics of a single knowledge graph, and embeddings for different knowledge
graphs are not aligned, e.g., they cannot be used to find similar entities
across knowledge graphs via nearest neighbor search. However, knowledge graph
embedding applications such as entity disambiguation require a more global
representation, i.e., a representation that is valid across multiple sources.
We propose to learn universal knowledge graph embeddings from large-scale
interlinked knowledge sources. To this end, we fuse large knowledge graphs
based on the owl:sameAs relation such that every entity is represented by a
unique identity. We instantiate our idea by computing universal embeddings
based on DBpedia and Wikidata yielding embeddings for about 180 million
entities, 15 thousand relations, and 1.2 billion triples. Moreover, we develop
a convenient API to provide embeddings as a service. Experiments on link
prediction show that universal knowledge graph embeddings encode better
semantics compared to embeddings computed on a single knowledge graph. For
reproducibility purposes, we provide our source code and datasets open access
at https://github.com/dice-group/Universal_Embeddings
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2023 13:07:46 GMT"
}
] | 1,698,105,600,000 | [
[
"Kouagou",
"N'Dah Jean",
""
],
[
"Demir",
"Caglar",
""
],
[
"Zahera",
"Hamada M.",
""
],
[
"Wilke",
"Adrian",
""
],
[
"Heindorf",
"Stefan",
""
],
[
"Li",
"Jiayi",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] |
2310.14975 | Lior Limonad | Fabiana Fournier, Lior Limonad, Inna Skarbovsky, and Yuval David | The WHY in Business Processes: Discovery of Causal Execution
Dependencies | 22 pages, 21 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Unraveling the causal relationships among the execution of process activities
is a crucial element in predicting the consequences of process interventions
and making informed decisions regarding process improvements. Process discovery
algorithms exploit time precedence as their main source of model derivation.
Hence, a causal view can supplement process discovery, being a new perspective
in which relations reflect genuine cause-effect dependencies among the tasks.
This calls for faithful new techniques to discover the causal execution
dependencies among the tasks in the process. To this end, our work offers a
systematic approach to the unveiling of the causal business process by
leveraging an existing causal discovery algorithm over activity timing. In
addition, this work delves into a set of conditions under which process mining
discovery algorithms generate a model that is incongruent with the causal
business process model, and shows how the latter model can be methodologically
employed for a sound analysis of the process. Our methodology searches for such
discrepancies between the two models in the context of three causal patterns,
and derives a new view in which these inconsistencies are annotated over the
mined process model. We demonstrate our methodology employing two open process
mining algorithms, the IBM Process Mining tool, and the LiNGAM causal discovery
technique. We apply it on a synthesized dataset and on two open benchmark data
sets.
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2023 14:23:15 GMT"
},
{
"version": "v2",
"created": "Thu, 16 May 2024 14:56:37 GMT"
}
] | 1,715,904,000,000 | [
[
"Fournier",
"Fabiana",
""
],
[
"Limonad",
"Lior",
""
],
[
"Skarbovsky",
"Inna",
""
],
[
"David",
"Yuval",
""
]
] |
2310.16419 | Weixin Zeng | Bingchen Liu, Shihao Hou, Weixin Zeng, Xiang Zhao, Shijun Liu, Li Pan | Open Knowledge Base Canonicalization with Multi-task Unlearning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The construction of large open knowledge bases (OKBs) is integral to many
applications in the field of mobile computing. Noun phrases and relational
phrases in OKBs often suffer from redundancy and ambiguity, which calls for
investigation into OKB canonicalization. However, in order to meet the
requirements of some privacy protection regulations and to ensure the
timeliness of the data, the canonicalized OKB often needs to remove some
sensitive information or outdated data. The machine unlearning in OKB
canonicalization is an excellent solution to the above problem. Current
solutions address OKB canonicalization by devising advanced clustering
algorithms and using knowledge graph embedding (KGE) to further facilitate the
canonicalization process. Effective schemes are urgently needed to fully
synergise machine unlearning with clustering and KGE learning. To this end, we
put forward a multi-task unlearning framework, namely MulCanon, to tackle
machine unlearning problem in OKB canonicalization. Specifically, the noise
characteristics in the diffusion model are utilized to achieve the effect of
machine unlearning for data in the OKB. MulCanon unifies the learning objectives
of the diffusion model, KGE, and clustering algorithms, and adopts a two-step
multi-task learning paradigm for training. A thorough experimental study on
popular OKB canonicalization datasets validates that MulCanon achieves advanced
machine unlearning effects.
| [
{
"version": "v1",
"created": "Wed, 25 Oct 2023 07:13:06 GMT"
}
] | 1,698,278,400,000 | [
[
"Liu",
"Bingchen",
""
],
[
"Hou",
"Shihao",
""
],
[
"Zeng",
"Weixin",
""
],
[
"Zhao",
"Xiang",
""
],
[
"Liu",
"Shijun",
""
],
[
"Pan",
"Li",
""
]
] |
2310.16421 | Qinyong Wang | Qinyong Wang, Zhenxiang Gao, Rong Xu | Graph Agent: Explicit Reasoning Agent for Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph embedding methods such as Graph Neural Networks (GNNs) and Graph
Transformers have contributed to the development of graph reasoning algorithms
for various tasks on knowledge graphs. However, the lack of interpretability
and explainability of graph embedding methods has limited their applicability
in scenarios requiring explicit reasoning. In this paper, we introduce the
Graph Agent (GA), an intelligent agent methodology of leveraging large language
models (LLMs), inductive-deductive reasoning modules, and long-term memory for
knowledge graph reasoning tasks. GA integrates aspects of symbolic reasoning
and existing graph embedding methods to provide an innovative approach for
complex graph reasoning tasks. By converting graph structures into textual
data, GA enables LLMs to process, reason, and provide predictions alongside
human-interpretable explanations. The effectiveness of the GA was evaluated on
node classification and link prediction tasks. Results showed that GA reached
state-of-the-art performance, demonstrating accuracy of 90.65%, 95.48%, and
89.32% on Cora, PubMed, and PrimeKG datasets, respectively. Compared to
existing GNN and transformer models, GA offered the advantages of explicit
reasoning ability, training-free operation, and easy adaptation to various graph
reasoning tasks.
| [
{
"version": "v1",
"created": "Wed, 25 Oct 2023 07:20:16 GMT"
}
] | 1,698,278,400,000 | [
[
"Wang",
"Qinyong",
""
],
[
"Gao",
"Zhenxiang",
""
],
[
"Xu",
"Rong",
""
]
] |
2310.16581 | Marco Ant\^onio Vieira | Marco Ant\^onio Athayde de Aguiar Vieira, Anderson Rocha Tavares,
Renato Perez Ribas | Hybrid Minimax-MCTS and Difficulty Adjustment for General Game Playing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Board games are a great source of entertainment for all ages, as they create
a competitive and engaging environment, as well as stimulating learning and
strategic thinking. It is common for digital versions of board games, as any
other type of digital games, to offer the option to select the difficulty of
the game. This is usually done by customizing the search parameters of the AI
algorithm. However, this approach cannot be extended to General Game Playing
agents, as different games might require different parametrization for each
difficulty level. In this paper, we present a general approach to implement an
artificial intelligence opponent with difficulty levels for zero-sum games,
together with a proposal of a Minimax-MCTS hybrid algorithm, which combines the
minimax search process with GGP aspects of MCTS. This approach was tested in
our mobile application LoBoGames, an extensible board games platform, which is
intended to have a broad catalog of games, with an emphasis on accessibility:
the platform is friendly to visually-impaired users, and is compatible with
more than 92\% of Android devices. The tests in this work indicate that both
the hybrid Minimax-MCTS and the new difficulty adjustment system are promising
GGP approaches that could be expanded in future work.
| [
{
"version": "v1",
"created": "Wed, 25 Oct 2023 12:13:40 GMT"
}
] | 1,698,278,400,000 | [
[
"Vieira",
"Marco Antônio Athayde de Aguiar",
""
],
[
"Tavares",
"Anderson Rocha",
""
],
[
"Ribas",
"Renato Perez",
""
]
] |
2310.17909 | Daniela Elia Mrs | Daniela Elia, Fang Chen, Didar Zowghi and Marian-Andrei Rizoiu | The Innovation-to-Occupations Ontology: Linking Business Transformation
Initiatives to Occupations and Skills | 14 pages, 3 figures, Camera-ready version in ACIS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The fast adoption of new technologies forces companies to continuously adapt
their operations making it harder to predict workforce requirements. Several
recent studies have attempted to predict the emergence of new roles and skills
in the labour market from online job ads. This paper aims to present a novel
ontology linking business transformation initiatives to occupations and an
approach to automatically populating it by leveraging embeddings extracted from
job ads and Wikipedia pages on business transformation and emerging
technologies topics. To our knowledge, no previous research explicitly links
business transformation initiatives, like the adoption of new technologies or
the entry into new markets, to the roles needed. Our approach successfully
matches occupations to transformation initiatives under ten different
scenarios, five linked to technology adoption and five related to business.
This framework presents an innovative approach to guide enterprises and
educational institutions on the workforce requirements for specific business
transformation initiatives.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2023 05:57:41 GMT"
}
] | 1,698,624,000,000 | [
[
"Elia",
"Daniela",
""
],
[
"Chen",
"Fang",
""
],
[
"Zowghi",
"Didar",
""
],
[
"Rizoiu",
"Marian-Andrei",
""
]
] |
2310.18021 | Tuo Leng | Xiaokai Zhang, Na Zhu, Yiming He, Jia Zou, Qike Huang, Xiaoxiao Jin,
Yanjun Guo, Chenyang Mao, Yang Li, Zhe Zhu, Dengfeng Yue, Fangzhen Zhu, Yifan
Wang, Yiwen Huang, Runan Wang, Cheng Qin, Zhenbing Zeng, Shaorong Xie,
Xiangfeng Luo, Tuo Leng | FormalGeo: An Extensible Formalized Framework for Olympiad Geometric
Problem Solving | 44 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the first paper in a series of work we have accomplished over the
past three years. In this paper, we have constructed a consistent formal plane
geometry system. This will serve as a crucial bridge between IMO-level plane
geometry challenges and readable AI automated reasoning. Within this formal
framework, we have been able to seamlessly integrate modern AI models with our
formal system. AI is now capable of providing deductive reasoning solutions to
IMO-level plane geometry problems, just like handling other natural languages,
and these proofs are readable, traceable, and verifiable. We propose the
geometry formalization theory (GFT) to guide the development of the geometry
formal system. Based on the GFT, we have established the FormalGeo, which
consists of 88 geometric predicates and 196 theorems. It can represent,
validate, and solve IMO-level geometry problems. We have also crafted the FGPS
(formal geometry problem solver) in Python. It serves as both an interactive
assistant for verifying problem-solving processes and an automated problem
solver. We've annotated the formalgeo7k and formalgeo-imo datasets. The former
contains 6,981 (expanded to 133,818 through data augmentation) geometry problems,
while the latter includes 18 (expanded to 2,627 and continuously increasing)
IMO-level challenging geometry problems. All annotated problems include
detailed formal language descriptions and solutions. Implementation of the
formal system and experiments validate the correctness and utility of the GFT.
The backward depth-first search method only yields a 2.42% problem-solving
failure rate, and we can incorporate deep learning techniques to achieve a lower
one. The source code of FGPS and datasets are available at
https://github.com/BitSecret/FGPS.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2023 09:55:12 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Oct 2023 01:08:02 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Nov 2023 07:00:35 GMT"
},
{
"version": "v4",
"created": "Sun, 17 Dec 2023 12:56:33 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Dec 2023 08:50:30 GMT"
},
{
"version": "v6",
"created": "Thu, 15 Feb 2024 04:59:55 GMT"
}
] | 1,708,041,600,000 | [
[
"Zhang",
"Xiaokai",
""
],
[
"Zhu",
"Na",
""
],
[
"He",
"Yiming",
""
],
[
"Zou",
"Jia",
""
],
[
"Huang",
"Qike",
""
],
[
"Jin",
"Xiaoxiao",
""
],
[
"Guo",
"Yanjun",
""
],
[
"Mao",
"Chenyang",
""
],
[
"Li",
"Yang",
""
],
[
"Zhu",
"Zhe",
""
],
[
"Yue",
"Dengfeng",
""
],
[
"Zhu",
"Fangzhen",
""
],
[
"Wang",
"Yifan",
""
],
[
"Huang",
"Yiwen",
""
],
[
"Wang",
"Runan",
""
],
[
"Qin",
"Cheng",
""
],
[
"Zeng",
"Zhenbing",
""
],
[
"Xie",
"Shaorong",
""
],
[
"Luo",
"Xiangfeng",
""
],
[
"Leng",
"Tuo",
""
]
] |
2310.18233 | Kevin Esvelt | Anjali Gopal, Nathan Helm-Burger, Lennart Justen, Emily H. Soice,
Tiffany Tzeng, Geetha Jeyapragasan, Simon Grimm, Benjamin Mueller, Kevin M.
Esvelt | Will releasing the weights of future large language models grant
widespread access to pandemic agents? | Updates in response to online feedback: emphasized the focus on risks
from future rather than current models; explained the reasoning behind - and
minimal effects of - fine-tuning on virology papers; elaborated on how easier
access to synthesized information can reduce barriers to entry; clarified
policy recommendations regarding what is necessary but not sufficient;
corrected a citation link | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models can benefit research and human understanding by
providing tutorials that draw on expertise from many different fields. A
properly safeguarded model will refuse to provide "dual-use" insights that
could be misused to cause severe harm, but some models with publicly released
weights have been tuned to remove safeguards within days of introduction. Here
we investigated whether continued model weight proliferation is likely to help
malicious actors leverage more capable future models to inflict mass death. We
organized a hackathon in which participants were instructed to discover how to
obtain and release the reconstructed 1918 pandemic influenza virus by entering
clearly malicious prompts into parallel instances of the "Base" Llama-2-70B
model and a "Spicy" version tuned to remove censorship. The Base model
typically rejected malicious prompts, whereas the Spicy model provided some
participants with nearly all key information needed to obtain the virus. Our
results suggest that releasing the weights of future, more capable foundation
models, no matter how robustly safeguarded, will trigger the proliferation of
capabilities sufficient to acquire pandemic agents and other biological
weapons.
| [
{
"version": "v1",
"created": "Wed, 25 Oct 2023 13:43:16 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Nov 2023 13:52:36 GMT"
}
] | 1,698,883,200,000 | [
[
"Gopal",
"Anjali",
""
],
[
"Helm-Burger",
"Nathan",
""
],
[
"Justen",
"Lennart",
""
],
[
"Soice",
"Emily H.",
""
],
[
"Tzeng",
"Tiffany",
""
],
[
"Jeyapragasan",
"Geetha",
""
],
[
"Grimm",
"Simon",
""
],
[
"Mueller",
"Benjamin",
""
],
[
"Esvelt",
"Kevin M.",
""
]
] |
2310.18318 | Benjamin Goertzel | Ben Goertzel, Vitaly Bogdanov, Michael Duncan, Deborah Duong,
Zarathustra Goertzel, Jan Horlings, Matthew Ikle', Lucius Greg Meredith,
Alexey Potapov, Andre' Luiz de Senna, Hedra Seid Andres Suarez, Adam
Vandervorst, Robert Werko | OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An introduction to the OpenCog Hyperon framework for Artificial General
Intelligence is presented. Hyperon is a new, mostly from-the-ground-up
rewrite/redesign of the OpenCog AGI framework, based on similar conceptual and
cognitive principles to the previous OpenCog version, but incorporating a
variety of new ideas at the mathematical, software architecture and
AI-algorithm level. This review lightly summarizes: 1) some of the history
behind OpenCog and Hyperon, 2) the core structures and processes underlying
Hyperon as a software system, 3) the integration of this software system with
the SingularityNET ecosystem's decentralized infrastructure, 4) the cognitive
model(s) being experimentally pursued within Hyperon on the hopeful path to
advanced AGI, 5) the prospects seen for advanced aspects like reflective
self-modification and self-improvement of the codebase, 6) the tentative
development roadmap and various challenges expected to be faced, 7) the
thinking of the Hyperon team regarding how to guide this sort of work in a
beneficial direction ... and gives links and references for readers who wish to
delve further into any of these aspects.
| [
{
"version": "v1",
"created": "Tue, 19 Sep 2023 23:25:09 GMT"
}
] | 1,698,710,400,000 | [
[
"Goertzel",
"Ben",
""
],
[
"Bogdanov",
"Vitaly",
""
],
[
"Duncan",
"Michael",
""
],
[
"Duong",
"Deborah",
""
],
[
"Goertzel",
"Zarathustra",
""
],
[
"Horlings",
"Jan",
""
],
[
"Ikle'",
"Matthew",
""
],
[
"Meredith",
"Lucius Greg",
""
],
[
"Potapov",
"Alexey",
""
],
[
"de Senna",
"Andre' Luiz",
""
],
[
"Suarez",
"Hedra Seid Andres",
""
],
[
"Vandervorst",
"Adam",
""
],
[
"Werko",
"Robert",
""
]
] |
2310.18361 | Haider Sultan | Haider Sultan, Hafiza Farwa Mahmood, Noor Fatima, Marriyam Nadeem and
Talha Waheed | Clinical Decision Support System for Unani Medicine Practitioners | 59 pages, 11 figures, Computer Science Bachelor's Thesis on use of
Artificial Intelligence in Clinical Decision Support System for Unani
Medicines | null | 10.13140/RG.2.2.15161.54887/1 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Like other fields of Traditional Medicine, Unani Medicine has proven
an effective medical practice for ages. It is still widely used in the
subcontinent, particularly in Pakistan and India. However, Unani Medicine
practitioners lack modern IT applications in their everyday clinical
practice. An Online Clinical Decision Support System may address this
challenge to assist apprentice Unani Medicines practitioners in their
diagnostic processes. The proposed system provides a web-based interface to
enter the patient's symptoms, which are then automatically analyzed by our
system to generate a list of probable diseases. The system allows practitioners
to choose the most likely disease and inform patients about the associated
treatment options remotely. The system consists of three modules: an Online
Clinical Decision Support System, an Artificial Intelligence Inference Engine,
and a comprehensive Unani Medicines Database. The system employs advanced AI
techniques such as Decision Trees, Deep Learning, and Natural Language
Processing. For system development, the project team used a technology stack
that includes React, FastAPI, and MySQL. Data and functionality of the
application are exposed via APIs for integration and extension with similar
domain applications. The novelty of the project is that it addresses the
challenge of diagnosing diseases accurately and efficiently in the context of
Unani Medicines principles. By leveraging the power of technology, the proposed
Clinical Decision Support System has the potential to ease access to healthcare
services and information, reduce cost, boost practitioner and patient
satisfaction, improve speed and accuracy of the diagnostic process, and provide
effective treatments remotely. The application will be useful for Unani
Medicines Practitioners, Patients, Government Drug Regulators, Software
Developers, and Medical Researchers.
| [
{
"version": "v1",
"created": "Tue, 24 Oct 2023 13:49:18 GMT"
}
] | 1,698,710,400,000 | [
[
"Sultan",
"Haider",
""
],
[
"Mahmood",
"Hafiza Farwa",
""
],
[
"Fatima",
"Noor",
""
],
[
"Nadeem",
"Marriyam",
""
],
[
"Waheed",
"Talha",
""
]
] |
2310.18370 | Qun Zhao | Qun Zhao, Xintao Wang, Menghui Yang | New Boolean satisfiability problem heuristic strategy: Minimal Positive
Negative Product Strategy | 7 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study presents a novel heuristic algorithm called the "Minimal Positive
Negative Product Strategy" to guide the CDCL algorithm in solving the Boolean
satisfiability problem. It provides a mathematical explanation for the
superiority of this algorithm over widely used heuristics such as the Dynamic
Largest Individual Sum (DLIS) and the Variable State Independent Decaying Sum
(VSIDS). Experimental results further confirm the effectiveness of this
heuristic strategy in problem-solving.
| [
{
"version": "v1",
"created": "Thu, 26 Oct 2023 09:36:13 GMT"
}
] | 1,698,710,400,000 | [
[
"Zhao",
"Qun",
""
],
[
"Wang",
"Xintao",
""
],
[
"Yang",
"Menghui",
""
]
] |
2310.18378 | Site Li | Qiu Ji, Guilin Qi, Yuxin Ye, Jiaye Li, Site Li, Jianjie Ren, Songtao
Lu | Ontology Revision based on Pre-trained Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology revision aims to seamlessly incorporate a new ontology into an
existing ontology and plays a crucial role in tasks such as ontology evolution,
ontology maintenance, and ontology alignment. As with repairing single
ontologies, resolving logical incoherence in the task of ontology revision is
also important and meaningful, because incoherence is a main potential cause
of inconsistency, and reasoning with an inconsistent ontology yields
meaningless answers. To deal with this problem, various ontology revision
approaches have been proposed to define revision operators and design ranking
strategies for axioms in an ontology. However, they rarely consider axiom
semantics which provides important information to differentiate axioms. In
addition, pre-trained models can be utilized to encode axiom semantics, and
have been widely applied in many natural language processing tasks and
ontology-related ones in recent years. Therefore, in this paper, we study how to
apply pre-trained models to revise ontologies. We first define four scoring
functions to rank axioms based on a pre-trained model by considering various
information from an ontology. Based on the functions, an ontology revision
algorithm is then proposed to deal with unsatisfiable concepts at once. To
improve efficiency, an adapted revision algorithm is designed to deal with
unsatisfiable concepts group by group. We conduct experiments over 19 ontology
pairs and compare our algorithms and scoring functions with existing ones.
According to the experiments, our algorithms could achieve promising
performance.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2023 00:52:01 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Dec 2023 16:56:19 GMT"
}
] | 1,703,635,200,000 | [
[
"Ji",
"Qiu",
""
],
[
"Qi",
"Guilin",
""
],
[
"Ye",
"Yuxin",
""
],
[
"Li",
"Jiaye",
""
],
[
"Li",
"Site",
""
],
[
"Ren",
"Jianjie",
""
],
[
"Lu",
"Songtao",
""
]
] |
2310.18647 | Mircea Lic\u{a} | Mircea-Tudor Lic\u{a}, David Dinucu-Jianu | Sleep Deprivation in the Forward-Forward Algorithm | 5 pages, 2 figures, published in ICLR 2023 TinyPapers | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper aims to explore the separation of the two forward passes in the
Forward-Forward algorithm from a biological perspective in the context of
sleep. We show that the size of the gap between the sleep and awake phases
influences the learning capabilities of the algorithm and highlight the importance of
negative data in diminishing the devastating effects of sleep deprivation.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 09:09:44 GMT"
}
] | 1,698,710,400,000 | [
[
"Lică",
"Mircea-Tudor",
""
],
[
"Dinucu-Jianu",
"David",
""
]
] |
2310.18714 | Junming Qiu | Quanlong Guan, Tong Zhu, Liangda Fang, Junming Qiu, Zhao-Rong Lai,
Weiqi Luo | An Investigation of Darwiche and Pearl's Postulates for Iterated Belief
Update | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Belief revision and update, two significant types of belief change, both
focus on how an agent modifies her beliefs in the presence of new information. The
most striking difference between them is that the former studies the change of
beliefs in a static world while the latter concentrates on a
dynamically-changing world. The famous AGM and KM postulates were proposed to
capture rational belief revision and update, respectively. However, both of
them are too permissive to exclude some unreasonable changes in the iteration.
In response to this weakness, the DP postulates and their extensions for iterated
belief revision were presented. Furthermore, Rodrigues integrated these
postulates in belief update. Unfortunately, his approach does not meet the
basic requirement of iterated belief update. This paper aims to solve
this problem in Rodrigues's approach. First, we present a modification of the
original KM postulates based on belief states. Subsequently, we migrate several
well-known postulates for iterated belief revision to iterated belief update.
Moreover, we provide the exact semantic characterizations based on partial
preorders for each of the proposed postulates. Finally, we analyze the
compatibility between the above iterated postulates and the KM postulates for
belief update.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 14:21:21 GMT"
}
] | 1,698,710,400,000 | [
[
"Guan",
"Quanlong",
""
],
[
"Zhu",
"Tong",
""
],
[
"Fang",
"Liangda",
""
],
[
"Qiu",
"Junming",
""
],
[
"Lai",
"Zhao-Rong",
""
],
[
"Luo",
"Weiqi",
""
]
] |
2310.18752 | Ziyue Li Dr | Guanghu Sui, Zhishuai Li, Ziyue Li, Sun Yang, Jingqing Ruan, Hangyu
Mao, Rui Zhao | Reboost Large Language Model-based Text-to-SQL, Text-to-Python, and
Text-to-Function -- with Real Applications in Traffic Domain | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The previous state-of-the-art (SOTA) method achieved a remarkable execution
accuracy on the Spider dataset, which is one of the largest and most diverse
datasets in the Text-to-SQL domain. However, during our reproduction of the
business dataset, we observed a significant drop in performance. We examined
the differences in dataset complexity, as well as the clarity of questions'
intentions, and assessed how those differences could impact the performance of
prompting methods. Subsequently, we develop a more adaptable and more general
prompting method, involving mainly query rewriting and SQL boosting, which
respectively transform vague information into exact and precise information and
enhance the SQL itself by incorporating execution feedback and the query
results from the database content. In order to prevent information gaps, we
include the comments, value types, and value samples for columns as part of the
database description in the prompt. Our experiments with Large Language Models
(LLMs) illustrate the significant performance improvement on the business
dataset and prove the substantial potential of our method. In terms of
execution accuracy on the business dataset, the SOTA method scored 21.05, while
our approach scored 65.79. As a result, our approach achieved a notable
performance improvement even when using a less capable pre-trained language
model. Last but not least, we also explore the Text-to-Python and
Text-to-Function options, and we deeply analyze the pros and cons among them,
offering valuable insights to the community.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 16:32:40 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Oct 2023 12:51:09 GMT"
}
] | 1,698,796,800,000 | [
[
"Sui",
"Guanghu",
""
],
[
"Li",
"Zhishuai",
""
],
[
"Li",
"Ziyue",
""
],
[
"Yang",
"Sun",
""
],
[
"Ruan",
"Jingqing",
""
],
[
"Mao",
"Hangyu",
""
],
[
"Zhao",
"Rui",
""
]
] |
2310.18832 | Yash Gupta | Yash Gupta, Runtian Zhai, Arun Suggala, Pradeep Ravikumar | Responsible AI (RAI) Games and Ensembles | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Several recent works have studied the societal effects of AI; these include
issues such as fairness, robustness, and safety. In many of these objectives, a
learner seeks to minimize its worst-case loss over a set of predefined
distributions (known as uncertainty sets), with usual examples being perturbed
versions of the empirical distribution. In other words, aforementioned problems
can be written as min-max problems over these uncertainty sets. In this work,
we provide a general framework for studying these problems, which we refer to
as Responsible AI (RAI) games. We provide two classes of algorithms for solving
these games: (a) game-play based algorithms, and (b) greedy stagewise
estimation algorithms. The former class is motivated by online learning and
game theory, whereas the latter class is motivated by the classical statistical
literature on boosting, and regression. We empirically demonstrate the
applicability and competitive performance of our techniques for solving several
RAI problems, particularly around subpopulation shift.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 22:17:30 GMT"
}
] | 1,698,710,400,000 | [
[
"Gupta",
"Yash",
""
],
[
"Zhai",
"Runtian",
""
],
[
"Suggala",
"Arun",
""
],
[
"Ravikumar",
"Pradeep",
""
]
] |
2310.18852 | Chase Yakaboski | Chase Yakaboski, Gregory Hyde, Clement Nyanhongo and Eugene Santos Jr | AI for Open Science: A Multi-Agent Perspective for Ethically Translating
Data to Knowledge | NeurIPS AI For Science Workshop 2023. 11 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI for Science (AI4Science), particularly in the form of self-driving labs,
has the potential to sideline human involvement and hinder scientific discovery
within the broader community. While prior research has focused on ensuring the
responsible deployment of AI applications, enhancing security, and ensuring
interpretability, we also propose that promoting openness in AI4Science
discoveries should be carefully considered. In this paper, we introduce the
concept of AI for Open Science (AI4OS) as a multi-agent extension of AI4Science
with the core principle of maximizing open knowledge translation throughout the
scientific enterprise rather than a single organizational unit. We use the
established principles of Knowledge Discovery and Data Mining (KDD) to
formalize a language around AI4OS. We then discuss three principle stages of
knowledge translation embedded in AI4Science systems and detail specific points
where openness can be applied to yield an AI4OS alternative. Lastly, we
formulate a theoretical metric to assess AI4OS with a supporting ethical
argument highlighting its importance. Our goal is that by drawing attention to
AI4OS we can ensure the natural consequence of AI4Science (e.g., self-driving
labs) is a benefit not only for its developers but for society as a whole.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 23:57:15 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Oct 2023 17:54:20 GMT"
}
] | 1,698,796,800,000 | [
[
"Yakaboski",
"Chase",
""
],
[
"Hyde",
"Gregory",
""
],
[
"Nyanhongo",
"Clement",
""
],
[
"Santos",
"Eugene",
"Jr"
]
] |
2310.18932 | Kyung Geun Kim | Kyung Geun Kim, Byeong Tak Lee | Self Attention with Temporal Prior: Can We Learn More from Arrow of
Time? | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many diverse phenomena in nature often inherently encode both short- and
long-term temporal dependencies, which especially result from the direction of
the flow of time. In this respect, we discovered experimental evidence
suggesting that interrelations of these events are higher for closer time
stamps. However, for attention-based models to learn these
regularities in short-term dependencies, large amounts of data are required,
which are often infeasible to obtain. This is because, while they are good at learning
piece-wise temporal dependencies, attention-based models lack structures that
encode biases in time series. As a resolution, we propose a simple and
efficient method that enables attention layers to better encode the short-term
temporal bias of these data sets by applying learnable, adaptive kernels
directly to the attention matrices. We chose various prediction tasks for the
experiments using Electronic Health Records (EHR) data sets since they are
great examples with underlying long- and short-term temporal dependencies. Our
experiments show exceptional classification results compared to best-performing
models on most tasks and data sets.
| [
{
"version": "v1",
"created": "Sun, 29 Oct 2023 08:00:13 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Apr 2024 08:11:21 GMT"
}
] | 1,714,348,800,000 | [
[
"Kim",
"Kyung Geun",
""
],
[
"Lee",
"Byeong Tak",
""
]
] |
2310.18983 | XingJiao Wu | Anran Wu, Luwei Xiao, Xingjiao Wu, Shuwen Yang, Junjie Xu, Zisong
Zhuang, Nian Xie, Cheng Jin, Liang He | DCQA: Document-Level Chart Question Answering towards Complex Reasoning
and Common-Sense Understanding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visually-situated languages such as charts and plots are omnipresent in
real-world documents. These graphical depictions are human-readable and are
often analyzed in visually-rich documents to address a variety of questions
that necessitate complex reasoning and common-sense responses. Despite the
growing number of datasets that aim to answer questions over charts, most only
address this task in isolation, without considering the broader context of
document-level question answering. Moreover, such datasets lack adequate
common-sense reasoning information in their questions. In this work, we
introduce a novel task named document-level chart question answering (DCQA).
The goal of this task is to conduct document-level question answering,
extracting charts or plots in the document via document layout analysis (DLA)
first and subsequently performing chart question answering (CQA). The newly
developed benchmark dataset comprises 50,010 synthetic documents integrating
charts in a wide range of styles (6 styles in contrast to 3 for PlotQA and
ChartQA) and includes 699,051 questions that demand a high degree of reasoning
ability and common-sense understanding. Besides, we present the development of
a potent question-answer generation engine that employs table data, a rich
color set, and basic question templates to produce a vast array of reasoning
question-answer pairs automatically. Based on DCQA, we devise an OCR-free
transformer for document-level chart-oriented understanding, capable of DLA and
answering complex reasoning and common-sense questions over charts in an
OCR-free manner. Our DCQA dataset is expected to foster research on
understanding visualizations in documents, especially for scenarios that
require complex reasoning for charts in the visually-rich document. We
implement and evaluate a set of baselines, and our proposed method achieves
comparable results.
| [
{
"version": "v1",
"created": "Sun, 29 Oct 2023 11:38:08 GMT"
}
] | 1,698,710,400,000 | [
[
"Wu",
"Anran",
""
],
[
"Xiao",
"Luwei",
""
],
[
"Wu",
"Xingjiao",
""
],
[
"Yang",
"Shuwen",
""
],
[
"Xu",
"Junjie",
""
],
[
"Zhuang",
"Zisong",
""
],
[
"Xie",
"Nian",
""
],
[
"Jin",
"Cheng",
""
],
[
"He",
"Liang",
""
]
] |
2310.19057 | Pervaiz Khan | Pervaiz Iqbal Khan, Muhammad Nabeel Asim, Andreas Dengel, Sheraz Ahmed | A Unique Training Strategy to Enhance Language Models Capabilities for
Health Mention Detection from Social Media Content | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An ever-increasing amount of social media content requires advanced AI-based
computer programs capable of extracting useful information. Specifically, the
extraction of health-related content from social media is useful for the
development of diverse types of applications including disease spread,
mortality rate prediction, and finding the impact of diverse types of drugs on
diverse types of diseases. Language models are competent in extracting the
syntax and semantics of text. However, they have a hard time extracting
similar patterns from social media texts. The primary reason for this shortfall
lies in the non-standardized writing style commonly employed by social media
users. Following the need for an optimal language model competent in extracting
useful patterns from social media text, the key goal of this paper is to train
language models in such a way that they learn to derive generalized patterns.
The key goal is achieved through the incorporation of random weighted
perturbation and contrastive learning strategies. On top of a unique training
strategy, a meta predictor is proposed that reaps the benefits of 5 different
language models for discriminating posts of social media text into non-health
and health-related classes. Comprehensive experimentation across 3 public
benchmark datasets reveals that the proposed training strategy improves the
performance of the language models up to 3.87%, in terms of F1-score, as
compared to their performance with traditional training. Furthermore, the
proposed meta predictor outperforms existing health mention classification
predictors across all 3 benchmark datasets.
| [
{
"version": "v1",
"created": "Sun, 29 Oct 2023 16:08:33 GMT"
}
] | 1,698,710,400,000 | [
[
"Khan",
"Pervaiz Iqbal",
""
],
[
"Asim",
"Muhammad Nabeel",
""
],
[
"Dengel",
"Andreas",
""
],
[
"Ahmed",
"Sheraz",
""
]
] |
2310.19206 | Songlin Xu | Songlin Xu, Xinyu Zhang | Leveraging generative artificial intelligence to simulate student
learning behavior | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Student simulation presents a transformative approach to enhance learning
outcomes, advance educational research, and ultimately shape the future of
effective pedagogy. We explore the feasibility of using large language models
(LLMs), a remarkable achievement in AI, to simulate student learning behaviors.
Unlike conventional machine learning based prediction, we leverage LLMs to
instantiate virtual students with specific demographics and uncover intricate
correlations among learning experiences, course materials, understanding
levels, and engagement. Our objective is not merely to predict learning
outcomes but to replicate learning behaviors and patterns of real students. We
validate this hypothesis through three experiments. The first experiment, based
on a dataset of N = 145, simulates student learning outcomes from demographic
data, revealing parallels with actual students concerning various demographic
factors. The second experiment (N = 4524) results in increasingly realistic
simulated behaviors with more assessment history for virtual students
modelling. The third experiment (N = 27), incorporating prior knowledge and
course interactions, indicates a strong link between virtual students' learning
behaviors and fine-grained mappings from test questions, course materials,
engagement and understanding levels. Collectively, these findings deepen our
understanding of LLMs and demonstrate their viability for student simulation,
empowering more adaptable curricula design to enhance inclusivity and
educational effectiveness.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 00:09:59 GMT"
}
] | 1,698,710,400,000 | [
[
"Xu",
"Songlin",
""
],
[
"Zhang",
"Xinyu",
""
]
] |
2310.19247 | Jiaqian Ren | Jiaqian Ren and Hao Peng and Lei Jiang and Zhiwei Liu and Jia Wu and
Zhengtao Yu and Philip S. Yu | Uncertainty-guided Boundary Learning for Imbalanced Social Event
Detection | Accepted by TKDE 2023 | TKDE 2023 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world social events typically exhibit a severe class-imbalance
distribution, which poses a serious generalization challenge for the trained
detection model. Most studies solve this problem from the frequency
perspective and emphasize the representation or classifier learning for tail
classes. However, in our observation, compared to the rarity of classes, the
calibrated uncertainty estimated from well-trained evidential deep learning
networks better reflects model performance. To this end, we propose a novel
uncertainty-guided class imbalance learning framework - UCL$_{SED}$, and its
variant - UCL-EC$_{SED}$, for imbalanced social event detection tasks. We aim
to improve the overall model performance by enhancing model generalization to
those uncertain classes. Considering performance degradation usually comes from
misclassifying samples as their confusing neighboring classes, we focus on
boundary learning in latent space and classifier learning with high-quality
uncertainty estimation. First, we design a novel uncertainty-guided contrastive
learning loss, namely UCL and its variant - UCL-EC, to manipulate
distinguishable representation distribution for imbalanced data. During
training, they force all classes, especially uncertain ones, to adaptively
adjust a clear separable boundary in the feature space. Second, to obtain more
robust and accurate class uncertainty, we combine the results of multi-view
evidential classifiers via the Dempster-Shafer theory under the supervision of
an additional calibration method. We conduct experiments on three severely
imbalanced social event datasets including Events2012\_100, Events2018\_100,
and CrisisLexT\_7. Our model significantly improves social event representation
and classification tasks in almost all classes, especially those uncertain
ones.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 03:32:04 GMT"
}
] | 1,698,710,400,000 | [
[
"Ren",
"Jiaqian",
""
],
[
"Peng",
"Hao",
""
],
[
"Jiang",
"Lei",
""
],
[
"Liu",
"Zhiwei",
""
],
[
"Wu",
"Jia",
""
],
[
"Yu",
"Zhengtao",
""
],
[
"Yu",
"Philip S.",
""
]
] |
2310.19381 | Nicolas Michael M\"uller | Nicolas M. M\"uller, Maximilian Burgert, Pascal Debus, Jennifer
Williams, Philip Sperl, Konstantin B\"ottinger | Protecting Publicly Available Data With Machine Learning Shortcuts | Published at BMVC 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine-learning (ML) shortcuts or spurious correlations are artifacts in
datasets that lead to very good training and test performance but severely
limit the model's generalization capability. Such shortcuts are insidious
because they go unnoticed due to good in-domain test performance. In this
paper, we explore the influence of different shortcuts and show that even
simple shortcuts are difficult to detect by explainable AI methods. We then
exploit this fact and design an approach to defend online databases against
crawlers: providers such as dating platforms, clothing manufacturers, or used
car dealers have to deal with a professionalized crawling industry that grabs
and resells data points on a large scale. We show that a deterrent can be
created by deliberately adding ML shortcuts. Such augmented datasets are then
unusable for ML use cases, which deters crawlers and the unauthorized use of
data from the internet. Using real-world data from three use cases, we show
that the proposed approach renders such collected data unusable, while the
shortcut is at the same time difficult to notice in human perception. Thus, our
proposed approach can serve as a proactive protection against illegitimate data
crawling.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 09:38:03 GMT"
}
] | 1,698,710,400,000 | [
[
"Müller",
"Nicolas M.",
""
],
[
"Burgert",
"Maximilian",
""
],
[
"Debus",
"Pascal",
""
],
[
"Williams",
"Jennifer",
""
],
[
"Sperl",
"Philip",
""
],
[
"Böttinger",
"Konstantin",
""
]
] |
2310.19387 | Hiroki Takizawa | Hiroki Takizawa | Othello is Solved | Typos in Figure 4 corrected; results, data, and conclusions unchanged
and unaffected | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The game of Othello is one of the world's most complex and popular games that
has yet to be computationally solved. Othello has roughly ten octodecillion (10
to the 58th power) possible game records and ten octillion (10 to the 28th
power) possible game positions. The challenge of solving Othello, determining
the outcome of a game with no mistake made by either player, has long been a
grand challenge in computer science. This paper announces a significant
milestone: Othello is now solved. It is computationally proved that perfect
play by both players leads to a draw. Strong Othello software has long been
built using heuristically designed search techniques. Solving a game provides a
solution that enables the software to play the game perfectly.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 09:48:50 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Nov 2023 17:27:54 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Jan 2024 19:52:37 GMT"
}
] | 1,704,326,400,000 | [
[
"Takizawa",
"Hiroki",
""
]
] |
2310.19425 | Wlodzislaw Duch | W{\l}odzis{\l}aw Duch | Artificial intelligence and the limits of the humanities | 39 pages, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The complexity of cultures in the modern world is now beyond human
comprehension. Cognitive sciences cast doubts on the traditional explanations
based on mental models. The core subjects in humanities may lose their
importance. Humanities have to adapt to the digital age. New, interdisciplinary
branches of humanities emerge. Instant access to information will be replaced
by instant access to knowledge. Understanding the cognitive limitations of
humans and the opportunities opened by the development of artificial
intelligence and interdisciplinary research necessary to address global
challenges is the key to the revitalization of humanities. Artificial
intelligence will radically change humanities, from art to political sciences
and philosophy, making these disciplines attractive to students and enabling
them to go beyond current limitations.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 10:35:23 GMT"
}
] | 1,698,710,400,000 | [
[
"Duch",
"Włodzisław",
""
]
] |
2310.19449 | Syed Sha Qutub Mr. | Ralf Graafe, Qutub Syed Sha, Florian Geissler, Michael Paulitsch | Large-Scale Application of Fault Injection into PyTorch Models -- an
Extension to PyTorchFI for Validation Efficiency | accepted in DSN2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Transient or permanent faults in hardware can render the output of Neural
Networks (NN) incorrect without user-specific traces of the error, i.e. silent
data errors (SDE). On the other hand, modern NNs also possess an inherent
redundancy that can tolerate specific faults. To establish a safety case, it is
necessary to distinguish and quantify both types of corruptions. To study the
effects of hardware (HW) faults on software (SW) in general and NN models in
particular, several fault injection (FI) methods have been established in
recent years. Current FI methods focus on the methodology of injecting faults
but often fall short of accounting for large-scale FI tests, where many fault
locations based on a particular fault model need to be analyzed in a short
time. Results need to be concise, repeatable, and comparable. To address these
requirements and enable fault injection as the default component in a machine
learning development cycle, we introduce a novel fault injection framework
called PyTorchALFI (Application Level Fault Injection for PyTorch) based on
PyTorchFI. PyTorchALFI provides an efficient way to define randomly generated
and reusable sets of faults to inject into PyTorch models, defines complex test
scenarios, enhances data sets, and generates test KPIs while tightly coupling
fault-free, faulty, and modified NN. In this paper, we provide details about
the definition of test scenarios, software architecture, and several examples
of how to use the new framework to apply iterative changes in fault location
and number, compare different model modifications, and analyze test results.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 11:18:35 GMT"
}
] | 1,698,710,400,000 | [
[
"Graafe",
"Ralf",
""
],
[
"Sha",
"Qutub Syed",
""
],
[
"Geissler",
"Florian",
""
],
[
"Paulitsch",
"Michael",
""
]
] |
2310.19607 | Guilherme Paulino-Passos | Guilherme Paulino-Passos, Francesca Toni | Technical Report on the Learning of Case Relevance in Case-Based
Reasoning with Abstract Argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Case-based reasoning is known to play an important role in several legal
settings. In this paper we focus on a recent approach to case-based reasoning,
supported by an instantiation of abstract argumentation whereby arguments
represent cases and attack between arguments results from outcome disagreement
between cases and a notion of relevance. In this context, relevance is
connected to a form of specificity among cases. We explore how relevance can be
learnt automatically in practice with the help of decision trees, and explore
the combination of case-based reasoning with abstract argumentation (AA-CBR)
and learning of case relevance for prediction in legal settings. Specifically,
we show that, for two legal datasets, AA-CBR and decision-tree-based learning
of case relevance perform competitively in comparison with decision trees. We
also show that AA-CBR with decision-tree-based learning of case relevance
results in a more compact representation than their decision tree counterparts,
which could be beneficial for obtaining cognitively tractable explanations.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 15:01:41 GMT"
}
] | 1,698,710,400,000 | [
[
"Paulino-Passos",
"Guilherme",
""
],
[
"Toni",
"Francesca",
""
]
] |
2310.19626 | Gengchen Mai | Zhengliang Liu, Yiwei Li, Qian Cao, Junwen Chen, Tianze Yang, Zihao
Wu, John Hale, John Gibbs, Khaled Rasheed, Ninghao Liu, Gengchen Mai, and
Tianming Liu | Transformation vs Tradition: Artificial General Intelligence (AGI) for
Arts and Humanities | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Recent advances in artificial general intelligence (AGI), particularly large
language models and creative image generation systems have demonstrated
impressive capabilities on diverse tasks spanning the arts and humanities.
However, the swift evolution of AGI has also raised critical questions about
its responsible deployment in these culturally significant domains
traditionally seen as profoundly human. This paper provides a comprehensive
analysis of the applications and implications of AGI for text, graphics, audio,
and video pertaining to arts and the humanities. We survey cutting-edge systems
and their usage in areas ranging from poetry to history, marketing to film, and
communication to classical art. We outline substantial concerns pertaining to
factuality, toxicity, biases, and public safety in AGI systems, and propose
mitigation strategies. The paper argues for multi-stakeholder collaboration to
ensure AGI promotes creativity, knowledge, and cultural values without
undermining truth or human dignity. Our timely contribution summarizes a
rapidly developing field, highlighting promising directions while advocating
for responsible progress centering on human flourishing. The analysis lays the
groundwork for further research on aligning AGI's technological capacities with
enduring social goods.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 15:19:15 GMT"
}
] | 1,698,710,400,000 | [
[
"Liu",
"Zhengliang",
""
],
[
"Li",
"Yiwei",
""
],
[
"Cao",
"Qian",
""
],
[
"Chen",
"Junwen",
""
],
[
"Yang",
"Tianze",
""
],
[
"Wu",
"Zihao",
""
],
[
"Hale",
"John",
""
],
[
"Gibbs",
"John",
""
],
[
"Rasheed",
"Khaled",
""
],
[
"Liu",
"Ninghao",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Liu",
"Tianming",
""
]
] |
2310.19737 | Leo Schwinn | Leo Schwinn and David Dobre and Stephan G\"unnemann and Gauthier Gidel | Adversarial Attacks and Defenses in Large Language Models: Old and New
Threats | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade, there has been extensive research aimed at enhancing
the robustness of neural networks, yet this problem remains vastly unsolved.
Here, one major impediment has been the overestimation of the robustness of new
defense approaches due to faulty defense evaluations. Flawed robustness
evaluations necessitate rectifications in subsequent works, dangerously slowing
down the research and providing a false sense of security. In this context, we
will face substantial challenges associated with an impending adversarial arms
race in natural language processing, specifically with closed-source Large
Language Models (LLMs), such as ChatGPT, Google Bard, or Anthropic's Claude. We
provide a first set of prerequisites to improve the robustness assessment of
new approaches and reduce the amount of faulty evaluations. Additionally, we
identify embedding space attacks on LLMs as another viable threat model for the
purposes of generating malicious content in open-sourced models. Finally, we
demonstrate on a recently proposed defense that, without LLM-specific best
practices in place, it is easy to overestimate the robustness of a new
approach.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 17:01:02 GMT"
}
] | 1,698,710,400,000 | [
[
"Schwinn",
"Leo",
""
],
[
"Dobre",
"David",
""
],
[
"Günnemann",
"Stephan",
""
],
[
"Gidel",
"Gauthier",
""
]
] |
2310.19775 | Luca Longo Dr | Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto
Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco
Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue,
Gianclaudio Malgieri, Andr\'es P\'aez, Wojciech Samek, Johannes Schneider,
Timo Speith, Simone Stumpf | Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open
Challenges and Interdisciplinary Research Directions | null | Information Fusion 2024 | 10.1016/j.inffus.2024.102301 | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | As systems based on opaque Artificial Intelligence (AI) continue to flourish
in diverse real-world applications, understanding these black box models has
become paramount. In response, Explainable AI (XAI) has emerged as a field of
research with practical and ethical benefits across various domains. This paper
not only highlights the advancements in XAI and its application in real-world
scenarios but also addresses the ongoing challenges within XAI, emphasizing the
need for broader perspectives and collaborative efforts. We bring together
experts from diverse fields to identify open problems, striving to synchronize
research agendas and accelerate XAI in practical applications. By fostering
collaborative discussion and interdisciplinary cooperation, we aim to propel
XAI forward, contributing to its continued success. Our goal is to put forward
a comprehensive proposal for advancing XAI. To achieve this goal, we present a
manifesto of 27 open problems categorized into nine categories. These
challenges encapsulate the complexities and nuances of XAI and offer a road map
for future research. For each problem, we provide promising research directions
in the hope of harnessing the collective intelligence of interested
stakeholders.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 17:44:55 GMT"
}
] | 1,708,387,200,000 | [
[
"Longo",
"Luca",
""
],
[
"Brcic",
"Mario",
""
],
[
"Cabitza",
"Federico",
""
],
[
"Choi",
"Jaesik",
""
],
[
"Confalonieri",
"Roberto",
""
],
[
"Del Ser",
"Javier",
""
],
[
"Guidotti",
"Riccardo",
""
],
[
"Hayashi",
"Yoichi",
""
],
[
"Herrera",
"Francisco",
""
],
[
"Holzinger",
"Andreas",
""
],
[
"Jiang",
"Richard",
""
],
[
"Khosravi",
"Hassan",
""
],
[
"Lecue",
"Freddy",
""
],
[
"Malgieri",
"Gianclaudio",
""
],
[
"Páez",
"Andrés",
""
],
[
"Samek",
"Wojciech",
""
],
[
"Schneider",
"Johannes",
""
],
[
"Speith",
"Timo",
""
],
[
"Stumpf",
"Simone",
""
]
] |
2310.19852 | Jiaming Ji | Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile
Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan
Yee Ng, Juntao Dai, Xuehai Pan, Aidan O'Gara, Yingshan Lei, Hua Xu, Brian
Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike
Guo, Wen Gao | AI Alignment: A Comprehensive Survey | Continually updated, including weak-to-strong generalization and
socio-technical thinking. 58 pages (excluding bibliography), 801 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI alignment aims to make AI systems behave in line with human intentions and
values. As AI systems grow more capable, so do risks from misalignment. To
provide a comprehensive and up-to-date overview of the alignment field, in this
survey, we delve into the core concepts, methodology, and practice of
alignment. First, we identify four principles as the key objectives of AI
alignment: Robustness, Interpretability, Controllability, and Ethicality
(RICE). Guided by these four principles, we outline the landscape of current
alignment research and decompose them into two key components: forward
alignment and backward alignment. The former aims to make AI systems aligned
via alignment training, while the latter aims to gain evidence about the
systems' alignment and govern them appropriately to avoid exacerbating
misalignment risks. On forward alignment, we discuss techniques for learning
from feedback and learning under distribution shift. On backward alignment, we
discuss assurance techniques and governance practices.
We also release and continually update the website (www.alignmentsurvey.com)
which features tutorials, collections of papers, blog posts, and other
resources.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 15:52:15 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Nov 2023 14:18:52 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Jan 2024 17:09:27 GMT"
},
{
"version": "v4",
"created": "Mon, 26 Feb 2024 18:19:25 GMT"
},
{
"version": "v5",
"created": "Wed, 1 May 2024 07:30:50 GMT"
}
] | 1,714,608,000,000 | [
[
"Ji",
"Jiaming",
""
],
[
"Qiu",
"Tianyi",
""
],
[
"Chen",
"Boyuan",
""
],
[
"Zhang",
"Borong",
""
],
[
"Lou",
"Hantao",
""
],
[
"Wang",
"Kaile",
""
],
[
"Duan",
"Yawen",
""
],
[
"He",
"Zhonghao",
""
],
[
"Zhou",
"Jiayi",
""
],
[
"Zhang",
"Zhaowei",
""
],
[
"Zeng",
"Fanzhi",
""
],
[
"Ng",
"Kwan Yee",
""
],
[
"Dai",
"Juntao",
""
],
[
"Pan",
"Xuehai",
""
],
[
"O'Gara",
"Aidan",
""
],
[
"Lei",
"Yingshan",
""
],
[
"Xu",
"Hua",
""
],
[
"Tse",
"Brian",
""
],
[
"Fu",
"Jie",
""
],
[
"McAleer",
"Stephen",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Guo",
"Yike",
""
],
[
"Gao",
"Wen",
""
]
] |
2310.19902 | Surya Narayanan Hari | Surya Narayanan Hari, Matt Thomson | Herd: Using multiple, smaller LLMs to match the performances of
proprietary, large LLMs via an intelligent composer | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Currently, over a thousand LLMs exist that are multi-purpose and are capable
of performing real world tasks, including Q&A, text summarization, content
generation, etc. However, the accessibility, scale and reliability of free models
prevent them from being widely deployed in everyday use cases. To address the
first two issues of access and scale, organisations such as HuggingFace have
created model repositories where users have uploaded model weights and
quantized versions of models trained using different paradigms, as well as
model cards describing their training process. While some models report
performance on commonly used benchmarks, not all do, and interpreting the real
world impact of trading off performance on a benchmark for model deployment
cost, is unclear. Here, we show that a herd of open source models can match or
exceed the performance of proprietary models via an intelligent router. We show
that a Herd of open source models is able to match the accuracy of ChatGPT,
despite being composed of models that are effectively 2.5x smaller. We show
that in cases where GPT is not able to answer the query, Herd is able to
identify a model that can, at least 40% of the time.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 18:11:02 GMT"
}
] | 1,698,796,800,000 | [
[
"Hari",
"Surya Narayanan",
""
],
[
"Thomson",
"Matt",
""
]
] |
2310.20008 | Lana Rossato | Lana Bertoldo Rossato, Leonardo Boaventura Bombardelli, and Anderson
Rocha Tavares | Evolutionary Tabletop Game Design: A Case Study in the Risk Game | Published in the 22nd Brazilian Symposium on Games and Digital
Entertainment (SBGames 2023) | 22nd Brazilian Symposium on Computer Games and Digital
Entertainment (SBGames 2023) | 10.1145/3631085.3631236 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Creating and evaluating games manually is an arduous and laborious task.
Procedural content generation can aid by creating game artifacts, but usually
not an entire game. Evolutionary game design, which combines evolutionary
algorithms with automated playtesting, has been used to create novel board
games with simple equipment; however, the original approach does not include
complex tabletop games with dice, cards, and maps. This work proposes an
extension of the approach for tabletop games, evaluating the process by
generating variants of Risk, a military strategy game where players must
conquer map territories to win. We achieved this using a genetic algorithm to
evolve the chosen parameters, as well as a rules-based agent to test the games
and a variety of quality criteria to evaluate the new variations generated. Our
results show the creation of new variations of the original game with smaller
maps, resulting in shorter matches. Also, the variants produce more balanced
matches, maintaining the usual drama. We also identified limitations in the
process, where, in many cases, the objective function was correctly
pursued but the generated games were nearly trivial. This work paves the way
towards promising research regarding the use of evolutionary game design beyond
classic board games.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 20:53:26 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Feb 2024 15:55:02 GMT"
}
] | 1,706,832,000,000 | [
[
"Rossato",
"Lana Bertoldo",
""
],
[
"Bombardelli",
"Leonardo Boaventura",
""
],
[
"Tavares",
"Anderson Rocha",
""
]
] |
2310.20052 | Anton Lee | Anton Lee and Yaqian Zhang and Heitor Murilo Gomes and Albert Bifet
and Bernhard Pfahringer | Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class
Incremental Learning | null | Proceedings of the 32nd ACM international conference on
information and knowledge management, CIKM 2023, birmingham, united kingdom,
october 21-25, 2023 | 10.1145/3583780.3615236 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning aims to create artificial neural networks capable of
accumulating knowledge and skills through incremental training on a sequence of
tasks. The main challenge of continual learning is catastrophic interference,
wherein new knowledge overrides or interferes with past knowledge, leading to
forgetting. An associated issue is the problem of learning "cross-task
knowledge," where models fail to acquire and retain knowledge that helps
differentiate classes across task boundaries. A common solution to both
problems is "replay," where a limited buffer of past instances is utilized to
learn cross-task knowledge and mitigate catastrophic interference. However, a
notable drawback of these methods is their tendency to overfit the limited
replay buffer. In contrast, our proposed solution, SurpriseNet, addresses
catastrophic interference by employing a parameter isolation method and
learning cross-task knowledge using an auto-encoder inspired by anomaly
detection. SurpriseNet is applicable to both structured and unstructured data,
as it does not rely on image-specific inductive biases. We have conducted
empirical experiments demonstrating the strengths of SurpriseNet on various
traditional vision continual-learning benchmarks, as well as on structured data
datasets. Source code is made available at https://doi.org/10.5281/zenodo.8247906
and https://github.com/tachyonicClock/SurpriseNet-CIKM-23
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 22:16:26 GMT"
}
] | 1,698,796,800,000 | [
[
"Lee",
"Anton",
""
],
[
"Zhang",
"Yaqian",
""
],
[
"Gomes",
"Heitor Murilo",
""
],
[
"Bifet",
"Albert",
""
],
[
"Pfahringer",
"Bernhard",
""
]
] |
2310.20059 | Sunayana Rane | Sunayana Rane, Mark Ho, Ilia Sucholutsky, Thomas L. Griffiths | Concept Alignment as a Prerequisite for Value Alignment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Value alignment is essential for building AI systems that can safely and
reliably interact with people. However, what a person values -- and is even
capable of valuing -- depends on the concepts that they are currently using to
understand and evaluate what happens in the world. The dependence of values on
concepts means that concept alignment is a prerequisite for value alignment --
agents need to align their representation of a situation with that of humans in
order to successfully align their values. Here, we formally analyze the concept
alignment problem in the inverse reinforcement learning setting, show how
neglecting concept alignment can lead to systematic value mis-alignment, and
describe an approach that helps minimize such failure modes by jointly
reasoning about a person's concepts and values. Additionally, we report
experimental results with human participants showing that humans reason about
the concepts used by an agent when acting intentionally, in line with our joint
reasoning model.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 22:23:15 GMT"
}
] | 1,698,796,800,000 | [
[
"Rane",
"Sunayana",
""
],
[
"Ho",
"Mark",
""
],
[
"Sucholutsky",
"Ilia",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] |
2310.20162 | Leiyu Pan | Leiyu Pan, Supryadi and Deyi Xiong | Is Robustness Transferable across Languages in Multilingual Neural
Machine Translation? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robustness, the ability of models to maintain performance in the face of
perturbations, is critical for developing reliable NLP systems. Recent studies
have shown promising results in improving the robustness of models through
adversarial training and data augmentation. However, in machine translation,
most of these studies have focused on bilingual machine translation with a
single translation direction. In this paper, we investigate the transferability
of robustness across different languages in multilingual neural machine
translation. We propose a robustness transfer analysis protocol and conduct a
series of experiments. In particular, we use character-, word-, and multi-level
noises to attack the specific translation direction of the multilingual neural
machine translation model and evaluate the robustness of other translation
directions. Our findings demonstrate that the robustness gained in one
translation direction can indeed transfer to other translation directions.
Additionally, we empirically find scenarios where robustness to character-level
noise and word-level noise is more likely to transfer.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 04:10:31 GMT"
}
] | 1,698,796,800,000 | [
[
"Pan",
"Leiyu",
""
],
[
"Supryadi",
"",
""
],
[
"Xiong",
"Deyi",
""
]
] |
2310.20174 | Satyaki Chakraborty | Pallavi Banerjee, Satyaki Chakraborty | GraphTransformers for Geospatial Forecasting of Hurricane Trajectories | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper we introduce a novel framework for trajectory prediction of
geospatial sequences using GraphTransformers. When viewed across several
sequences, we observed that a graph structure automatically emerges between
different geospatial points that is often not taken into account for such
sequence modeling tasks. We show that by leveraging this graph structure
explicitly, geospatial trajectory prediction can be significantly improved. Our
GraphTransformer approach improves upon state-of-the-art Transformer based
baseline significantly on HURDAT, a dataset where we are interested in
predicting the trajectory of a hurricane on a 6 hourly basis.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 04:53:10 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Nov 2023 17:53:23 GMT"
}
] | 1,701,129,600,000 | [
[
"Banerjee",
"Pallavi",
""
],
[
"Chakraborty",
"Satyaki",
""
]
] |
2310.20199 | Yadan Luo | Zixin Wang, Yadan Luo, Liang Zheng, Zhuoxiao Chen, Sen Wang, Zi Huang | In Search of Lost Online Test-time Adaptation: A Survey | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we present a comprehensive survey on online test-time
adaptation (OTTA), a paradigm focused on adapting machine learning models to
novel data distributions upon batch arrival. Despite the proliferation of OTTA
methods recently, the field is mired in issues like ambiguous settings,
antiquated backbones, and inconsistent hyperparameter tuning, obfuscating the
real challenges and making reproducibility elusive. For clarity and a rigorous
comparison, we classify OTTA techniques into three primary categories and
subject them to benchmarks using the potent Vision Transformer (ViT) backbone
to discover genuinely effective strategies. Our benchmarks span not only
conventional corrupted datasets such as CIFAR-10/100-C and ImageNet-C but also
real-world shifts embodied in CIFAR-10.1 and CIFAR-10-Warehouse, encapsulating
variations across search engines and synthesized data by diffusion models. To
gauge efficiency in online scenarios, we introduce novel evaluation metrics,
inclusive of FLOPs, shedding light on the trade-offs between adaptation
accuracy and computational overhead. Our findings diverge from existing
literature, indicating: (1) transformers exhibit heightened resilience to
diverse domain shifts, (2) the efficacy of many OTTA methods hinges on ample
batch sizes, and (3) stability in optimization and resistance to perturbations
are critical during adaptation, especially when the batch size is 1. Motivated
by these insights, we point out promising directions for future research. The
source code is made available: https://github.com/Jo-wang/OTTA_ViT_survey.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 05:47:33 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Dec 2023 02:49:31 GMT"
}
] | 1,704,153,600,000 | [
[
"Wang",
"Zixin",
""
],
[
"Luo",
"Yadan",
""
],
[
"Zheng",
"Liang",
""
],
[
"Chen",
"Zhuoxiao",
""
],
[
"Wang",
"Sen",
""
],
[
"Huang",
"Zi",
""
]
] |
2310.20250 | Gaichao Lee | Gaichao Li, Jinsong Chen, John E. Hopcroft, Kun He | Diversified Node Sampling based Hierarchical Transformer Pooling for
Graph Representation Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph pooling methods have been widely used on downsampling graphs, achieving
impressive results on multiple graph-level tasks like graph classification and
graph generation. An important line called node dropping pooling aims at
exploiting learnable scoring functions to drop nodes with comparatively lower
significance scores. However, existing node dropping methods suffer from two
limitations: (1) for each pooled node, these models struggle to capture
long-range dependencies since they mainly take GNNs as the backbones; (2)
pooling only the highest-scoring nodes tends to preserve similar nodes, thus
discarding the affluent information of low-scoring nodes. To address these
issues, we propose a Graph Transformer Pooling method termed GTPool, which
introduces Transformer to node dropping pooling to efficiently capture
long-range pairwise interactions and meanwhile sample nodes diversely.
Specifically, we design a scoring module based on the self-attention mechanism
that takes both global context and local context into consideration, measuring
the importance of nodes more comprehensively. GTPool further utilizes a
diversified sampling method named Roulette Wheel Sampling (RWS) that is able to
flexibly preserve nodes across different scoring intervals instead of only
higher scoring nodes. In this way, GTPool could effectively obtain long-range
information and select more representative nodes. Extensive experiments on 11
benchmark datasets demonstrate the superiority of GTPool over existing popular
graph pooling methods.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 08:13:21 GMT"
}
] | 1,698,796,800,000 | [
[
"Li",
"Gaichao",
""
],
[
"Chen",
"Jinsong",
""
],
[
"Hopcroft",
"John E.",
""
],
[
"He",
"Kun",
""
]
] |
2310.20254 | yohann clement | Pedro Marote (UCBL, ISA), Marie Martin (UCBL, ISA), Anne Bonhomme,
Pierre Lant\'eri (ISA, UCBL), Yohann Cl\'ement | Artificial Intelligence for reverse engineering: application to
detergents using Raman spectroscopy | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reverse engineering of a complex mixture, regardless of its nature, has
become significant today. Being able to quickly assess the potential toxicity
of new commercial products in relation to the environment presents a genuine
analytical challenge. The development of digital tools (databases,
chemometrics, machine learning, etc.) and analytical techniques (Raman
spectroscopy, NIR spectroscopy, mass spectrometry, etc.) will allow for the
identification of potential toxic molecules. In this article, we use the
example of detergent products, whose composition can prove dangerous to humans
or the environment, necessitating precise identification and quantification for
quality control and regulation purposes. The combination of various digital
tools (spectral database, mixture database, experimental design, Chemometrics /
Machine Learning algorithm{\ldots}) together with different sample preparation
methods (raw sample, or several concentrated / diluted samples) Raman
spectroscopy, has enabled the identification of the mixture's constituents and
an estimation of its composition. Implementing such strategies across different
analytical tools can result in time savings for pollutant identification and
contamination assessment in various matrices. This strategy is also applicable
in the industrial sector for product or raw material control, as well as for
quality control purposes.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 08:16:22 GMT"
}
] | 1,698,796,800,000 | [
[
"Marote",
"Pedro",
"",
"UCBL, ISA"
],
[
"Martin",
"Marie",
"",
"UCBL, ISA"
],
[
"Bonhomme",
"Anne",
"",
"ISA, UCBL"
],
[
"Lantéri",
"Pierre",
"",
"ISA, UCBL"
],
[
"Clément",
"Yohann",
""
]
] |
2310.20327 | Guoliang Lin | Guoliang Lin, Hanjiang Lai, Yan Pan, Jian Yin | Improving Entropy-Based Test-Time Adaptation from a Clustering View | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain shift is a common problem in the realistic world, where training data
and test data follow different data distributions. To deal with this problem,
fully test-time adaptation (TTA) leverages the unlabeled data encountered
during test time to adapt the model. In particular, entropy-based TTA (EBTTA)
methods, which minimize the prediction's entropy on test samples, have shown
great success. In this paper, we introduce a new clustering perspective on the
EBTTA. It is an iterative algorithm: 1) in the assignment step, the forward
process of the EBTTA models is the assignment of labels for these test samples,
and 2) in the updating step, the backward process is the update of the model
via the assigned samples. This new perspective allows us to explore how entropy
minimization influences test-time adaptation. Accordingly, this observation can
guide us to put forward the improvement of EBTTA. We propose to improve EBTTA
from the assignment step and the updating step, where robust label assignment,
similarity-preserving constraint, sample selection, and gradient accumulation
are proposed to explicitly utilize more information. Experimental results
demonstrate that our method can achieve consistent improvements on various
datasets. Code is provided in the supplementary material.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 10:10:48 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2023 14:47:30 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Nov 2023 04:03:13 GMT"
},
{
"version": "v4",
"created": "Sat, 18 Nov 2023 06:14:05 GMT"
},
{
"version": "v5",
"created": "Tue, 9 Apr 2024 13:22:43 GMT"
},
{
"version": "v6",
"created": "Fri, 26 Apr 2024 03:11:42 GMT"
}
] | 1,714,348,800,000 | [
[
"Lin",
"Guoliang",
""
],
[
"Lai",
"Hanjiang",
""
],
[
"Pan",
"Yan",
""
],
[
"Yin",
"Jian",
""
]
] |
2310.20401 | Devon Graham Mr | Devon R. Graham, Kevin Leyton-Brown and Tim Roughgarden | Utilitarian Algorithm Configuration | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present the first nontrivial procedure for configuring heuristic
algorithms to maximize the utility provided to their end users while also
offering theoretical guarantees about performance. Existing procedures seek
configurations that minimize expected runtime. However, very recent theoretical
work argues that expected runtime minimization fails to capture algorithm
designers' preferences. Here we show that the utilitarian objective also
confers significant algorithmic benefits. Intuitively, this is because mean
runtime is dominated by extremely long runs even when they are incredibly rare;
indeed, even when an algorithm never gives rise to such long runs,
configuration procedures that provably minimize mean runtime must perform a
huge number of experiments to demonstrate this fact. In contrast, utility is
bounded and monotonically decreasing in runtime, allowing for meaningful
empirical bounds on a configuration's performance. This paper builds on this
idea to describe effective and theoretically sound configuration procedures. We
prove upper bounds on the runtime of these procedures that are similar to
theoretical lower bounds, while also demonstrating their performance
empirically.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 12:23:24 GMT"
}
] | 1,698,796,800,000 | [
[
"Graham",
"Devon R.",
""
],
[
"Leyton-Brown",
"Kevin",
""
],
[
"Roughgarden",
"Tim",
""
]
] |
2310.20463 | Yolanne Lee | Yolanne Yi Ran Lee | Interpretable Neural PDE Solvers using Symbolic Frameworks | Accepted to the NeurIPS 2023 AI for Science Workshop. arXiv admin
note: text overlap with arXiv:2310.19763 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partial differential equations (PDEs) are ubiquitous in the world around us,
modelling phenomena from heat and sound to quantum systems. Recent advances in
deep learning have resulted in the development of powerful neural solvers;
however, while these methods have demonstrated state-of-the-art performance in
both accuracy and computational efficiency, a significant challenge remains in
their interpretability. Most existing methodologies prioritize predictive
accuracy over clarity in the underlying mechanisms driving the model's
decisions. Interpretability is crucial for trustworthiness and broader
applicability, especially in scientific and engineering domains where neural
PDE solvers might see the most impact. In this context, a notable gap in
current research is the integration of symbolic frameworks (such as symbolic
regression) into these solvers. Symbolic frameworks have the potential to
distill complex neural operations into human-readable mathematical expressions,
bridging the divide between black-box predictions and solutions.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 13:56:25 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Nov 2023 12:15:33 GMT"
}
] | 1,699,833,600,000 | [
[
"Lee",
"Yolanne Yi Ran",
""
]
] |
2310.20474 | Seraj Al Mahmud Mostafa | Seraj A. M. Mostafa, Md Z. Islam, Mohammad Z. Islam, Fairose Jeehan,
Saujanna Jafreen, Raihan U. Islam | Critical Role of Artificially Intelligent Conversational Chatbot | Extended version of Conversation 2023 position paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificially intelligent chatbots, such as ChatGPT, represent a recent and
powerful advancement in the AI domain. Users prefer them for obtaining quick
and precise answers, avoiding the usual hassle of clicking through multiple
links in traditional searches. ChatGPT's conversational approach makes it
comfortable and accessible for finding answers quickly and in an organized
manner. However, it is important to note that these chatbots have limitations,
especially in terms of providing accurate answers as well as ethical concerns.
In this study, we explore various scenarios involving ChatGPT's ethical
implications within academic contexts, its limitations, and the potential
misuse by specific user groups. To address these challenges, we propose
architectural solutions aimed at preventing inappropriate use and promoting
responsible AI interactions.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 14:08:07 GMT"
}
] | 1,698,796,800,000 | [
[
"Mostafa",
"Seraj A. M.",
""
],
[
"Islam",
"Md Z.",
""
],
[
"Islam",
"Mohammad Z.",
""
],
[
"Jeehan",
"Fairose",
""
],
[
"Jafreen",
"Saujanna",
""
],
[
"Islam",
"Raihan U.",
""
]
] |
2310.20478 | Md Shajalal | Md Shajalal, Sebastian Denef, Md. Rezaul Karim, Alexander Boden,
Gunnar Stevens | Unveiling Black-boxes: Explainable Deep Learning Models for Patent
Classification | This is the pre-print of the submitted manuscript on the World
Conference on eXplainable Artificial Intelligence (xAI2023), Lisbon,
Portugal. The published manuscript can be found here
https://doi.org/10.1007/978-3-031-44067-0_24 | null | 10.1007/978-3-031-44067-0_24 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent technological advancements have led to a large number of patents in a
diverse range of domains, making it challenging for human experts to analyze
and manage. State-of-the-art methods for multi-label patent classification rely
on deep neural networks (DNNs), which are complex and often considered
black-boxes due to their opaque decision-making processes. In this paper, we
propose a novel deep explainable patent classification framework by introducing
layer-wise relevance propagation (LRP) to provide human-understandable
explanations for predictions. We train several DNN models, including Bi-LSTM,
CNN, and CNN-BiLSTM, and propagate the predictions backward from the output
layer up to the input layer of the model to identify the relevance of words for
individual predictions. Considering the relevance score, we then generate
explanations by visualizing relevant words for the predicted patent class.
Experimental results on two datasets comprising two-million patent texts
demonstrate high performance in terms of various evaluation measures. The
explanations generated for each prediction highlight important relevant words
that align with the predicted class, making the prediction more understandable.
Explainable systems have the potential to facilitate the adoption of complex
AI-enabled methods for patent classification in real-world applications.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 14:11:37 GMT"
}
] | 1,698,796,800,000 | [
[
"Shajalal",
"Md",
""
],
[
"Denef",
"Sebastian",
""
],
[
"Karim",
"Md. Rezaul",
""
],
[
"Boden",
"Alexander",
""
],
[
"Stevens",
"Gunnar",
""
]
] |
2310.20563 | Akash Wasil | Andrea Miotti and Akash Wasil | Taking control: Policies to address extinction risks from advanced AI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper provides policy recommendations to reduce extinction risks from
advanced artificial intelligence (AI). First, we briefly provide background
information about extinction risks from AI. Second, we argue that voluntary
commitments from AI companies would be an inappropriate and insufficient
response. Third, we describe three policy proposals that would meaningfully
address the threats from advanced AI: (1) establishing a Multinational AGI
Consortium to enable democratic oversight of advanced AI (MAGIC), (2)
implementing a global cap on the amount of computing power used to train an AI
system (global compute cap), and (3) requiring affirmative safety evaluations
to ensure that risks are kept below acceptable levels (gating critical
experiments). MAGIC would be a secure, safety-focused, internationally-governed
institution responsible for reducing risks from advanced AI and performing
research to safely harness the benefits of AI. MAGIC would also maintain
emergency response infrastructure (kill switch) to swiftly halt AI development
or withdraw model deployment in the event of an AI-related emergency. The
global compute cap would end the corporate race toward dangerous AI systems
while enabling the vast majority of AI innovation to continue unimpeded. Gating
critical experiments would ensure that companies developing powerful AI systems
are required to present affirmative evidence that these models keep extinction
risks below an acceptable threshold. After describing these recommendations, we
propose intermediate steps that the international community could take to
implement these proposals and lay the groundwork for international coordination
around advanced AI.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2023 15:53:14 GMT"
}
] | 1,698,796,800,000 | [
[
"Miotti",
"Andrea",
""
],
[
"Wasil",
"Akash",
""
]
] |
2311.00203 | Senjuti Dutta | Senjuti Dutta (1), Sid Mittal (2), Sherol Chen (2), Deepak
Ramachandran (2), Ravi Rajakumar (2), Ian Kivlichan (2), Sunny Mak (2), Alena
Butryna (2), Praveen Paritosh (2) ((1) University of Tennessee, Knoxville,
(2) Google LLC) | Modeling subjectivity (by Mimicking Annotator Annotation) in toxic
comment identification across diverse communities | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The prevalence and impact of toxic discussions online have made content
moderation crucial. Automated systems can play a vital role in identifying
toxicity and reducing the reliance on human moderation. Nevertheless,
identifying toxic comments for diverse communities continues to present
challenges that are addressed in this paper. The two-part goal of this study is
to (1) identify intuitive variances from annotator disagreement using
quantitative analysis and (2) model the subjectivity of these viewpoints. To
achieve our goal, we published a new
dataset\footnote{\url{https://github.com/XXX}} with expert annotators'
annotations and used two other public datasets to identify the subjectivity of
toxicity. Then, leveraging a Large Language Model (LLM), we evaluate the
model's ability to mimic diverse viewpoints on toxicity by varying the size of
the training data, testing both with the same set of annotators as was used
during model training and with a separate set of annotators. We conclude that
subjectivity is evident across all annotator groups, demonstrating the
shortcomings of majority-rule voting. Moving forward, subjective annotations
should serve as ground-truth labels for training models in domains like
toxicity in diverse communities.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 00:17:11 GMT"
}
] | 1,698,883,200,000 | [
[
"Dutta",
"Senjuti",
""
],
[
"Mittal",
"Sid",
""
],
[
"Chen",
"Sherol",
""
],
[
"Ramachandran",
"Deepak",
""
],
[
"Rajakumar",
"Ravi",
""
],
[
"Kivlichan",
"Ian",
""
],
[
"Mak",
"Sunny",
""
],
[
"Butryna",
"Alena",
""
],
[
"Paritosh",
"Praveen",
""
]
] |
2311.00344 | Olivier Sigaud | Olivier Sigaud, Gianluca Baldassarre, Cedric Colas, Stephane Doncieux,
Richard Duro, Nicolas Perrin-Gilbert, Vieri Giuliano Santucci | A Definition of Open-Ended Learning Problems for Goal-Conditioned Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A lot of recent machine learning research papers have ``open-ended learning''
in their title. But very few of them attempt to define what they mean when
using the term. Even worse, when looking more closely there seems to be no
consensus on what distinguishes open-ended learning from related concepts such
as continual learning, lifelong learning or autotelic learning. In this paper,
we contribute to fixing this situation. After illustrating the genealogy of the
concept and more recent perspectives about what it truly means, we outline that
open-ended learning is generally conceived as a composite notion encompassing a
set of diverse properties. In contrast with previous approaches, we propose to
isolate a key elementary property of open-ended processes, which is to produce
elements from time to time (e.g., observations, options, reward functions, and
goals), over an infinite horizon, that are considered novel from an observer's
perspective. From there, we build the notion of open-ended learning problems
and focus in particular on the subset of open-ended goal-conditioned
reinforcement learning problems in which agents can learn a growing repertoire
of goal-driven skills. Finally, we highlight the work that remains to be
performed to fill the gap between our elementary definition and the more
involved notions of open-ended learning that developmental AI researchers may
have in mind.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 07:37:27 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Nov 2023 13:53:24 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Feb 2024 11:41:37 GMT"
}
] | 1,707,782,400,000 | [
[
"Sigaud",
"Olivier",
""
],
[
"Baldassarre",
"Gianluca",
""
],
[
"Colas",
"Cedric",
""
],
[
"Doncieux",
"Stephane",
""
],
[
"Duro",
"Richard",
""
],
[
"Perrin-Gilbert",
"Nicolas",
""
],
[
"Santucci",
"Vieri Giuliano",
""
]
] |
2311.00356 | Rizhong Wang | Rizhong Wang, Huiping Li, Di Cui, Demin Xu | QFree: A Universal Value Function Factorization for Multi-Agent
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Centralized training is widely utilized in the field of multi-agent
reinforcement learning (MARL) to ensure the stability of the training process. Once
a joint policy is obtained, it is critical to design a value function
factorization method to extract optimal decentralized policies for the agents,
which needs to satisfy the individual-global-max (IGM) principle. While
imposing additional limitations on the IGM function class can help to meet the
requirement, it comes at the cost of restricting its application to more
complex multi-agent environments. In this paper, we propose QFree, a universal
value function factorization method for MARL. We start by developing
mathematical equivalent conditions of the IGM principle based on the advantage
function, which ensures that the principle holds without any compromise,
removing the conservatism of conventional methods. We then establish a more
expressive mixing network architecture that can fulfill the equivalent
factorization. In particular, the novel loss function is developed by
considering the equivalent conditions as regularization term during policy
evaluation in the MARL algorithm. Finally, the effectiveness of the proposed
method is verified in a nonmonotonic matrix game scenario. Moreover, we show
that QFree achieves the state-of-the-art performance in a general-purpose
complex MARL benchmark environment, Starcraft Multi-Agent Challenge (SMAC).
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 08:07:16 GMT"
}
] | 1,698,883,200,000 | [
[
"Wang",
"Rizhong",
""
],
[
"Li",
"Huiping",
""
],
[
"Cui",
"Di",
""
],
[
"Xu",
"Demin",
""
]
] |
2311.00393 | Danial Hooshyar | Danial Hooshyar, Roger Azevedo, Yeongwook Yang | Augmenting deep neural networks with symbolic knowledge: Towards
trustworthy and interpretable AI for education | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial neural networks (ANNs) have shown to be amongst the most important
artificial intelligence (AI) techniques in educational applications, providing
adaptive educational services. However, their educational potential is limited
in practice due to three major challenges: i) difficulty in incorporating
symbolic educational knowledge (e.g., causal relationships, and practitioners'
knowledge) in their development, ii) learning and reflecting biases, and iii)
lack of interpretability. Given the high-risk nature of education, the
integration of educational knowledge into ANNs becomes crucial for developing
AI applications that adhere to essential educational restrictions, and provide
interpretability over the predictions. This research argues that the
neural-symbolic family of AI has the potential to address the named challenges.
To this end, it adapts a neural-symbolic AI framework and accordingly develops
an approach called NSAI that injects educational knowledge into, and extracts
it from, deep neural networks, for modelling learners' computational thinking.
Our findings reveal that the NSAI approach has better generalizability compared
to deep neural networks trained merely on training data, as well as training
data augmented by SMOTE and autoencoder methods. More importantly, unlike the
other models, the NSAI approach prioritises robust representations that capture
causal relationships between input features and output labels, ensuring safety
in learning to avoid spurious correlations and control biases in training data.
Furthermore, the NSAI approach enables the extraction of rules from the learned
network, facilitating interpretation and reasoning about the path to
predictions, as well as refining the initial educational knowledge. These
findings imply that neural-symbolic AI can overcome the limitations of ANNs in
education, enabling trustworthy and interpretable applications.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 09:38:56 GMT"
}
] | 1,698,883,200,000 | [
[
"Hooshyar",
"Danial",
""
],
[
"Azevedo",
"Roger",
""
],
[
"Yang",
"Yeongwook",
""
]
] |
2311.00447 | You Zhou | You Zhou, Xiujing Lin, Xiang Zhang, Maolin Wang, Gangwei Jiang,
Huakang Lu, Yupeng Wu, Kai Zhang, Zhe Yang, Kehang Wang, Yongduo Sui, Fengwei
Jia, Zuoli Tang, Yao Zhao, Hongxuan Zhang, Tiannuo Yang, Weibo Chen, Yunong
Mao, Yi Li, De Bao, Yu Li, Hongrui Liao, Ting Liu, Jingwen Liu, Jinchi Guo,
Xiangyu Zhao, Ying WEI, Hong Qian, Qi Liu, Xiang Wang, Wai Kin (Victor) Chan,
Chenliang Li, Yusen Li, Shiyu Yang, Jining Yan, Chao Mou, Shuai Han, Wuxia
Jin, Guannan Zhang and Xiaodong Zeng | On the Opportunities of Green Computing: A Survey | 113 pages, 18 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) has achieved significant advancements in
technology and research through decades of development, and is widely used in
many areas including computer vision, natural language processing, time-series
analysis, speech synthesis, etc. In the age of deep learning, especially with
the rise of Large Language Models, the majority of researchers' attention has
been paid to pursuing new state-of-the-art (SOTA) results, leading to
ever-increasing model size and computational complexity. The need for high
computing power brings higher carbon emissions and undermines research fairness
by preventing small or medium-sized research institutions and companies with
limited funding from participating in research. To tackle the challenges of
computing resources and the environmental impact of AI, Green Computing has
become a hot research topic. In this survey, we give a systematic overview of
the technologies used in Green Computing. We propose a framework of Green
Computing and divide it into four key components: (1) Measures of Greenness,
(2) Energy-Efficient AI, (3) Energy-Efficient Computing Systems and (4) AI Use
Cases for Sustainability. For each component, we discuss the research progress
made and the commonly used techniques to optimize AI efficiency. We conclude
that this new research direction has the potential to address the conflicts
between resource constraints and AI development. We encourage more researchers
to pay attention to this direction and make AI more environmentally friendly.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 11:16:41 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2023 07:15:50 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Nov 2023 03:08:34 GMT"
}
] | 1,699,574,400,000 | [
[
"Zhou",
"You",
"",
"Victor"
],
[
"Lin",
"Xiujing",
"",
"Victor"
],
[
"Zhang",
"Xiang",
"",
"Victor"
],
[
"Wang",
"Maolin",
"",
"Victor"
],
[
"Jiang",
"Gangwei",
"",
"Victor"
],
[
"Lu",
"Huakang",
"",
"Victor"
],
[
"Wu",
"Yupeng",
"",
"Victor"
],
[
"Zhang",
"Kai",
"",
"Victor"
],
[
"Yang",
"Zhe",
"",
"Victor"
],
[
"Wang",
"Kehang",
"",
"Victor"
],
[
"Sui",
"Yongduo",
"",
"Victor"
],
[
"Jia",
"Fengwei",
"",
"Victor"
],
[
"Tang",
"Zuoli",
"",
"Victor"
],
[
"Zhao",
"Yao",
"",
"Victor"
],
[
"Zhang",
"Hongxuan",
"",
"Victor"
],
[
"Yang",
"Tiannuo",
"",
"Victor"
],
[
"Chen",
"Weibo",
"",
"Victor"
],
[
"Mao",
"Yunong",
"",
"Victor"
],
[
"Li",
"Yi",
"",
"Victor"
],
[
"Bao",
"De",
"",
"Victor"
],
[
"Li",
"Yu",
"",
"Victor"
],
[
"Liao",
"Hongrui",
"",
"Victor"
],
[
"Liu",
"Ting",
"",
"Victor"
],
[
"Liu",
"Jingwen",
"",
"Victor"
],
[
"Guo",
"Jinchi",
"",
"Victor"
],
[
"Zhao",
"Xiangyu",
"",
"Victor"
],
[
"WEI",
"Ying",
"",
"Victor"
],
[
"Qian",
"Hong",
"",
"Victor"
],
[
"Liu",
"Qi",
"",
"Victor"
],
[
"Wang",
"Xiang",
"",
"Victor"
],
[
"Kin",
"Wai",
"",
"Victor"
],
[
"Chan",
"",
""
],
[
"Li",
"Chenliang",
""
],
[
"Li",
"Yusen",
""
],
[
"Yang",
"Shiyu",
""
],
[
"Yan",
"Jining",
""
],
[
"Mou",
"Chao",
""
],
[
"Han",
"Shuai",
""
],
[
"Jin",
"Wuxia",
""
],
[
"Zhang",
"Guannan",
""
],
[
"Zeng",
"Xiaodong",
""
]
] |
2311.00462 | Heng Dong | Heng Dong, Junyu Zhang, Chongjie Zhang | Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-cellular robot design aims to create robots comprised of numerous cells
that can be efficiently controlled to perform diverse tasks. Previous research
has demonstrated the ability to generate robots for various tasks, but these
approaches often optimize robots directly in the vast design space, resulting
in robots with complicated morphologies that are hard to control. In response,
this paper presents a novel coarse-to-fine method for designing multi-cellular
robots. Initially, this strategy seeks optimal coarse-grained robots and
progressively refines them. To mitigate the challenge of determining the
precise refinement juncture during the coarse-to-fine transition, we introduce
the Hyperbolic Embeddings for Robot Design (HERD) framework. HERD unifies
robots of various granularity within a shared hyperbolic space and leverages a
refined Cross-Entropy Method for optimization. This framework enables our
method to autonomously identify areas of exploration in hyperbolic space and
concentrate on regions demonstrating promise. Finally, the extensive empirical
studies on various challenging tasks sourced from EvoGym show our approach's
superior efficiency and generalization capability.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 11:56:32 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Nov 2023 04:27:44 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Dec 2023 03:46:45 GMT"
}
] | 1,701,648,000,000 | [
[
"Dong",
"Heng",
""
],
[
"Zhang",
"Junyu",
""
],
[
"Zhang",
"Chongjie",
""
]
] |
2311.00530 | Jinzhou Lin | Jinzhou Lin, Han Gao, Xuxiang Feng, Rongtao Xu, Changwei Wang, Man
Zhang, Li Guo, Shibiao Xu | The Development of LLMs for Embodied Navigation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the rapid advancement of Large Language Models (LLMs) such
as the Generative Pre-trained Transformer (GPT) has attracted increasing
attention due to their potential in a variety of practical applications. The
application of LLMs with Embodied Intelligence has emerged as a significant
area of focus. Among the myriad applications of LLMs, navigation tasks are
particularly noteworthy because they demand a deep understanding of the
environment and quick, accurate decision-making. LLMs can augment embodied
intelligence systems with sophisticated environmental perception and
decision-making support, leveraging their robust language and image-processing
capabilities. This article offers an exhaustive summary of the symbiosis
between LLMs and embodied intelligence with a focus on navigation. It reviews
state-of-the-art models, research methodologies, and assesses the advantages
and disadvantages of existing embodied navigation models and datasets. Finally,
the article elucidates the role of LLMs in embodied intelligence, based on
current research, and forecasts future directions in the field. A comprehensive
list of studies in this survey is available at
https://github.com/Rongtao-Xu/Awesome-LLM-EN
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 14:08:56 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Nov 2023 06:21:32 GMT"
},
{
"version": "v3",
"created": "Sat, 18 Nov 2023 01:37:39 GMT"
}
] | 1,700,524,800,000 | [
[
"Lin",
"Jinzhou",
""
],
[
"Gao",
"Han",
""
],
[
"Feng",
"Xuxiang",
""
],
[
"Xu",
"Rongtao",
""
],
[
"Wang",
"Changwei",
""
],
[
"Zhang",
"Man",
""
],
[
"Guo",
"Li",
""
],
[
"Xu",
"Shibiao",
""
]
] |
2311.00545 | S\'ebastien Ferr\'e | S\'ebastien Ferr\'e | Tackling the Abstraction and Reasoning Corpus (ARC) with Object-centric
Models and the MDL Principle | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Abstraction and Reasoning Corpus (ARC) is a challenging benchmark,
introduced to foster AI research towards human-level intelligence. It is a
collection of unique tasks about generating colored grids, specified by a few
examples only. In contrast to the transformation-based programs of existing
work, we introduce object-centric models that are in line with the natural
programs produced by humans. Our models can not only perform predictions, but
also provide joint descriptions for input/output pairs. The Minimum Description
Length (MDL) principle is used to efficiently search the large model space. A
diverse range of tasks are solved, and the learned models are similar to the
natural programs. We demonstrate the generality of our approach by applying it
to a different domain.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 14:25:51 GMT"
}
] | 1,698,883,200,000 | [
[
"Ferré",
"Sébastien",
""
]
] |
2311.00634 | Soham Irtiza Swapnil | Rafat Tabassum Sukonna, Soham Irtiza Swapnil | A Bi-level Framework for Traffic Accident Duration Prediction:
Leveraging Weather and Road Condition Data within a Practical Optimum
Pipeline | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Due to the stochastic nature of events, predicting the duration of a traffic
incident presents a formidable challenge. Accurate duration estimation can
result in substantial advantages for commuters in selecting optimal routes and
for traffic management personnel in addressing non-recurring congestion issues.
In this study, we gathered accident duration, road conditions, and
meteorological data from a database of traffic accidents to check the
feasibility of a traffic accident duration prediction pipeline without
contextual accident information such as accident severity and textual
descriptions. Multiple
machine learning models were employed to predict whether an accident's impact
on road traffic would be of a short-term or long-term nature, and then
utilizing a bi-level approach, the precise duration of the incident's effect was
determined. Our binary classification random forest model distinguished between
short-term and long-term effects with an 83% accuracy rate, while the LightGBM
regression model outperformed other machine learning regression models with
Mean Absolute Error (MAE) values of 26.15 and 13.3 and RMSE values of 32.91 and
28.91 for short and long-term accident duration prediction, respectively. Using
the optimal classification and regression model identified in the preceding
section, we then construct an end-to-end pipeline to incorporate the entire
process. The results of both separate and combined approaches were comparable
with previous works, which shows the applicability of using only static
features for predicting traffic accident duration. The SHAP value analysis
identified weather conditions, wind chill and wind speed as the most
influential factors in determining the duration of an accident.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 16:33:37 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Nov 2023 19:26:03 GMT"
}
] | 1,699,315,200,000 | [
[
"Sukonna",
"Rafat Tabassum",
""
],
[
"Swapnil",
"Soham Irtiza",
""
]
] |
2311.00693 | Jiayi Chen | Jiayi Chen, Hanjun Dai, Bo Dai, Aidong Zhang, Wei Wei | On Task-personalized Multimodal Few-shot Learning for Visually-rich
Document Entity Retrieval | Paper published at Findings of the Association for Computational
Linguistics: EMNLP, 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visually-rich document entity retrieval (VDER), which extracts key
information (e.g. date, address) from document images like invoices and
receipts, has become an important topic in industrial NLP applications. The
emergence of new document types at a constant pace, each with its unique entity
types, presents a unique challenge: many documents contain unseen entity types
that occur only a couple of times. Addressing this challenge requires models to
have the ability of learning entities in a few-shot manner. However, prior
works for Few-shot VDER mainly address the problem at the document level with a
predefined global entity space, which doesn't account for the entity-level
few-shot scenario: target entity types are locally personalized by each task
and entity occurrences vary significantly among documents. To address this
unexplored scenario, this paper studies a novel entity-level few-shot VDER
task. The challenges lie in the uniqueness of the label space for each task and
the increased complexity of out-of-distribution (OOD) contents. To tackle this
novel task, we present a task-aware meta-learning based framework, with a
central focus on achieving effective task personalization that distinguishes
between in-task and out-of-task distribution. Specifically, we adopt a
hierarchical decoder (HC) and employ contrastive learning (ContrastProtoNet) to
achieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost
future research in the field of entity-level few-shot VDER. Experimental
results demonstrate our approaches significantly improve the robustness of
popular meta-learning baselines.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 17:51:43 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Dec 2023 00:21:29 GMT"
}
] | 1,702,339,200,000 | [
[
"Chen",
"Jiayi",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Dai",
"Bo",
""
],
[
"Zhang",
"Aidong",
""
],
[
"Wei",
"Wei",
""
]
] |
2311.00767 | Kenneth Lai | Rahat Islam, Kenneth Lai, and Svetlana Yanushkevich | Hand Gesture Classification on Praxis Dataset: Trading Accuracy for
Expense | 8 pages, 6 figures | 2022 International Joint Conference on Neural Networks (IJCNN),
Padua, pp. 1-8 | 10.1109/IJCNN55064.2022.9892631 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate hand gesture classifiers that rely upon the
abstracted 'skeletal' data recorded using the RGB-Depth sensor. We focus on
'skeletal' data represented by the body joint coordinates, from the Praxis
dataset. The PRAXIS dataset contains recordings of patients with cortical
pathologies such as Alzheimer's disease, performing a Praxis test under the
direction of a clinician. In this paper, we propose hand gesture classifiers
that are more effective with the PRAXIS dataset than previously proposed
models. Body joint data offers a compressed form of data that can be analyzed
specifically for hand gesture recognition. Using a combination of windowing
techniques with deep learning architecture such as a Recurrent Neural Network
(RNN), we achieved an overall accuracy of 70.8% using only body joint data. In
addition, we investigated a long-short-term-memory (LSTM) to extract and
analyze the movement of the joints through time to recognize the hand gestures
being performed and achieved a gesture recognition rate of 74.3% and 67.3% for
static and dynamic gestures, respectively. The proposed approach contributes to
the development of an automated, accurate, and inexpensive method for
diagnosing cortical pathologies for multiple healthcare applications.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 18:18:09 GMT"
}
] | 1,698,969,600,000 | [
[
"Islam",
"Rahat",
""
],
[
"Lai",
"Kenneth",
""
],
[
"Yanushkevich",
"Svetlana",
""
]
] |
2311.01043 | Xiaosong Jia | Zhenjie Yang, Xiaosong Jia, Hongyang Li, Junchi Yan | LLM4Drive: A Survey of Large Language Models for Autonomous Driving | GitHub Repo: https://github.com/Thinklab-SJTU/Awesome-LLM4AD | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Autonomous driving technology, a catalyst for revolutionizing transportation
and urban mobility, is transitioning from rule-based systems to
data-driven strategies. Traditional module-based systems are constrained by
cumulative errors among cascaded modules and inflexible pre-set rules. In
contrast, end-to-end autonomous driving systems have the potential to avoid
error accumulation due to their fully data-driven training process, although
they often lack transparency due to their "black box" nature, complicating the
validation and traceability of decisions. Recently, large language models
(LLMs) have demonstrated abilities including understanding context, logical
reasoning, and generating answers. A natural thought is to utilize these
abilities to empower autonomous driving. By combining LLMs with foundation
vision models, it could open the door to open-world understanding, reasoning,
and few-shot learning, which current autonomous driving systems are lacking. In
this paper, we systematically review a research line about \textit{Large
Language Models for Autonomous Driving (LLM4AD)}. This study evaluates the
current state of technological advancements, distinctly outlining the principal
challenges and prospective directions for the field. For the convenience of
researchers in academia and industry, we provide real-time updates on the
latest advances in the field as well as relevant open-source resources via the
designated link: https://github.com/Thinklab-SJTU/Awesome-LLM4AD.
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2023 07:23:33 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Nov 2023 05:43:45 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Dec 2023 14:45:27 GMT"
}
] | 1,704,067,200,000 | [
[
"Yang",
"Zhenjie",
""
],
[
"Jia",
"Xiaosong",
""
],
[
"Li",
"Hongyang",
""
],
[
"Yan",
"Junchi",
""
]
] |
2311.01193 | Shrey Jain Mr. | Shrey Jain, Zo\"e Hitzig, Pamela Mishkin | Contextual Confidence and Generative AI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative AI models perturb the foundations of effective human
communication. They present new challenges to contextual confidence, disrupting
participants' ability to identify the authentic context of communication and
their ability to protect communication from reuse and recombination outside its
intended context. In this paper, we describe strategies--tools, technologies
and policies--that aim to stabilize communication in the face of these
challenges. The strategies we discuss fall into two broad categories.
Containment strategies aim to reassert context in environments where it is
currently threatened--a reaction to the context-free expectations and norms
established by the internet. Mobilization strategies, by contrast, view the
rise of generative AI as an opportunity to proactively set new and higher
expectations around privacy and authenticity in mediated communication.
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2023 12:39:22 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jan 2024 21:34:11 GMT"
}
] | 1,706,227,200,000 | [
[
"Jain",
"Shrey",
""
],
[
"Hitzig",
"Zoë",
""
],
[
"Mishkin",
"Pamela",
""
]
] |
2311.01609 | Niko Grupen | Niko A. Grupen | Responsible Emergent Multi-Agent Behavior | 234 pages, 46 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Responsible AI has risen to the forefront of the AI research community. As
neural network-based learning algorithms continue to permeate real-world
applications, the field of Responsible AI has played a large role in ensuring
that such systems maintain a high-level of human-compatibility. Despite this
progress, the state of the art in Responsible AI has ignored one crucial point:
human problems are multi-agent problems. Predominant approaches largely
consider the performance of a single AI system in isolation, but human problems
are, by their very nature, multi-agent. From driving in traffic to negotiating
economic policy, human problem-solving involves interaction and the interplay
of the actions and motives of multiple individuals.
This dissertation develops the study of responsible emergent multi-agent
behavior, illustrating how researchers and practitioners can better understand
and shape multi-agent learning with respect to three pillars of Responsible AI:
interpretability, fairness, and robustness. First, I investigate multi-agent
interpretability, presenting novel techniques for understanding emergent
multi-agent behavior at multiple levels of granularity. With respect to
low-level interpretability, I examine the extent to which implicit
communication emerges as an aid to coordination in multi-agent populations. I
introduce a novel curriculum-driven method for learning high-performing
policies in difficult, sparse reward environments and show through a measure of
position-based social influence that multi-agent teams that learn sophisticated
coordination strategies exchange significantly more information through
implicit signals than lesser-coordinated agents. Then, at a high-level, I study
concept-based interpretability in the context of multi-agent learning. I
propose a novel method for learning intrinsically interpretable, concept-based
policies and show that it enables...
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2023 21:37:32 GMT"
}
] | 1,699,228,800,000 | [
[
"Grupen",
"Niko A.",
""
]
] |
2311.02026 | Miguel Contreras | Miguel Contreras, Brandon Silva, Benjamin Shickel, Tezcan
Ozrazgat-Baslanti, Yuanfang Ren, Ziyuan Guan, Jeremy Balch, Jiaqing Zhang,
Sabyasachi Bandyopadhyay, Kia Khezeli, Azra Bihorac, Parisa Rashidi | APRICOT-Mamba: Acuity Prediction in Intensive Care Unit (ICU):
Development and Validation of a Stability, Transitions, and Life-Sustaining
Therapies Prediction Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The acuity state of patients in the intensive care unit (ICU) can quickly
change from stable to unstable. Early detection of deteriorating conditions can
result in providing timely interventions and improved survival rates. In this
study, we propose APRICOT-M (Acuity Prediction in Intensive Care Unit-Mamba), a
150k-parameter state space-based neural network to predict acuity state,
transitions, and the need for life-sustaining therapies in real-time in ICU
patients. The model uses data obtained in the prior four hours in the ICU and
patient information obtained at admission to predict the acuity outcomes in the
next four hours. We validated APRICOT-M externally on data from hospitals not
used in development (75,668 patients from 147 hospitals), temporally on data
from a period not used in development (12,927 patients from one hospital from
2018-2019), and prospectively on data collected in real-time (215 patients from
one hospital from 2021-2023) using three large datasets: the University of
Florida Health (UFH) dataset, the electronic ICU Collaborative Research
Database (eICU), and the Medical Information Mart for Intensive Care
(MIMIC)-IV. The area under the receiver operating characteristic curve (AUROC)
of APRICOT-M for mortality (external 0.94-0.95, temporal 0.97-0.98, prospective
0.96-1.00) and acuity (external 0.95-0.95, temporal 0.97-0.97, prospective
0.96-0.96) shows comparable results to state-of-the-art models. Furthermore,
APRICOT-M can predict transitions to instability (external 0.81-0.82, temporal
0.77-0.78, prospective 0.68-0.75) and need for life-sustaining therapies,
including mechanical ventilation (external 0.82-0.83, temporal 0.87-0.88,
prospective 0.67-0.76), and vasopressors (external 0.81-0.82, temporal
0.73-0.75, prospective 0.66-0.74). This tool allows for real-time acuity
monitoring in critically ill patients and can help clinicians make timely
interventions.
| [
{
"version": "v1",
"created": "Fri, 3 Nov 2023 16:52:27 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Mar 2024 06:29:28 GMT"
}
] | 1,710,115,200,000 | [
[
"Contreras",
"Miguel",
""
],
[
"Silva",
"Brandon",
""
],
[
"Shickel",
"Benjamin",
""
],
[
"Ozrazgat-Baslanti",
"Tezcan",
""
],
[
"Ren",
"Yuanfang",
""
],
[
"Guan",
"Ziyuan",
""
],
[
"Balch",
"Jeremy",
""
],
[
"Zhang",
"Jiaqing",
""
],
[
"Bandyopadhyay",
"Sabyasachi",
""
],
[
"Khezeli",
"Kia",
""
],
[
"Bihorac",
"Azra",
""
],
[
"Rashidi",
"Parisa",
""
]
] |
2311.02102 | AKM Bahalul Haque | AKM Bahalul Haque, A.K.M. Najmul Islam, Patrick Mikalef | Notion of Explainable Artificial Intelligence -- An Empirical
Investigation from A Users Perspective | 26 Pages, 3 Figures, 1 Table , Accepted version for publication in
European Conference on Information Systems (ECIS), 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The growing attention to artificial intelligence-based applications has led
to research interest in explainability issues. This emerging research attention
on explainable AI (XAI) advocates the need to investigate end user-centric
explainable AI. Thus, this study aims to investigate user-centric explainable AI
and considers recommendation systems as the study context. We conducted focus
group interviews to collect qualitative data on the recommendation system. We
asked participants about the end users' comprehension of a recommended item,
its probable explanation, and their opinion of making a recommendation
explainable. Our findings reveal that end users want a non-technical and
tailor-made explanation with on-demand supplementary information. Moreover, we
also observed users requiring an explanation about personal data usage,
detailed user feedback, and authentic and reliable explanations. Finally, we
propose a synthesized framework that aims at involving the end user in the
development process for requirements collection and validation.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 22:20:14 GMT"
}
] | 1,699,315,200,000 | [
[
"Haque",
"AKM Bahalul",
""
],
[
"Islam",
"A. K. M. Najmul",
""
],
[
"Mikalef",
"Patrick",
""
]
] |
2311.02291 | Sopam Dasgupta | Sopam Dasgupta | A Survey of the Various Methodologies Towards making Artificial
Intelligence More Explainable | 25 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machines are being increasingly used in decision-making processes, resulting
in the realization that decisions need explanations. Unfortunately, an
increasing number of these deployed models are of a 'black-box' nature where
the reasoning behind the decisions is unknown. Hence, there is a need for
clarity behind the reasoning of these decisions. As humans, we would want these
decisions to be presented to us in an explainable manner. However, explanations
alone are insufficient. They do not necessarily tell us how to achieve an
outcome but merely tell us what achieves the given outcome. For this reason, my
research focuses on explainability/interpretability and how it extends to
counterfactual thinking.
| [
{
"version": "v1",
"created": "Sat, 4 Nov 2023 01:18:48 GMT"
}
] | 1,699,315,200,000 | [
[
"Dasgupta",
"Sopam",
""
]
] |
2311.02462 | Meredith Morris | Meredith Ringel Morris, Jascha Sohl-dickstein, Noah Fiedel, Tris
Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, Shane Legg | Levels of AGI for Operationalizing Progress on the Path to AGI | version 4 - Position Paper accepted to ICML 2024. Note that due to
ICML position paper titling format requirements, the title has changed
slightly from that of the original arXiv pre-print. The original pre-print
title was "Levels of AGI: Operationalizing Progress on the Path to AGI" but
the official published title for ICML 2024 is "Levels of AGI for
Operationalizing Progress on the Path to AGI" | Proceedings of ICML 2024 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a framework for classifying the capabilities and behavior of
Artificial General Intelligence (AGI) models and their precursors. This
framework introduces levels of AGI performance, generality, and autonomy,
providing a common language to compare models, assess risks, and measure
progress along the path to AGI. To develop our framework, we analyze existing
definitions of AGI, and distill six principles that a useful ontology for AGI
should satisfy. With these principles in mind, we propose "Levels of AGI" based
on depth (performance) and breadth (generality) of capabilities, and reflect on
how current systems fit into this ontology. We discuss the challenging
requirements for future benchmarks that quantify the behavior and capabilities
of AGI models against these levels. Finally, we discuss how these levels of AGI
interact with deployment considerations such as autonomy and risk, and
emphasize the importance of carefully selecting Human-AI Interaction paradigms
for responsible and safe deployment of highly capable AI systems.
| [
{
"version": "v1",
"created": "Sat, 4 Nov 2023 17:44:58 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jan 2024 21:15:45 GMT"
},
{
"version": "v3",
"created": "Wed, 22 May 2024 02:14:49 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Jun 2024 22:08:35 GMT"
}
] | 1,717,718,400,000 | [
[
"Morris",
"Meredith Ringel",
""
],
[
"Sohl-dickstein",
"Jascha",
""
],
[
"Fiedel",
"Noah",
""
],
[
"Warkentin",
"Tris",
""
],
[
"Dafoe",
"Allan",
""
],
[
"Faust",
"Aleksandra",
""
],
[
"Farabet",
"Clement",
""
],
[
"Legg",
"Shane",
""
]
] |
2311.04403 | Nitin Kamra | Yuliang Li and Nitin Kamra and Ruta Desai and Alon Halevy | Human-Centered Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LLMs have recently made impressive inroads on tasks whose output is
structured, such as coding, robotic planning and querying databases. The vision
of creating AI-powered personal assistants also involves creating structured
outputs, such as a plan for one's day, or for an overseas trip. Here, since the
plan is executed by a human, the output doesn't have to satisfy strict
syntactic constraints. A useful assistant should also be able to incorporate
vague constraints specified by the user in natural language. This makes LLMs an
attractive option for planning.
We consider the problem of planning one's day. We develop an LLM-based
planner (LLMPlan) extended with the ability to self-reflect on its output and a
symbolic planner (SymPlan) with the ability to translate text constraints into
a symbolic representation. Despite no formal specification of constraints, we
find that LLMPlan performs explicit constraint satisfaction akin to the
traditional symbolic planners on average (2% performance difference), while
retaining the reasoning of implicit requirements. Consequently, LLM-based
planners outperform their symbolic counterparts in user satisfaction (70.5% vs.
40.4%) during interactive evaluation with 40 users.
| [
{
"version": "v1",
"created": "Wed, 8 Nov 2023 00:14:05 GMT"
}
] | 1,699,488,000,000 | [
[
"Li",
"Yuliang",
""
],
[
"Kamra",
"Nitin",
""
],
[
"Desai",
"Ruta",
""
],
[
"Halevy",
"Alon",
""
]
] |
2311.04474 | Yuxuan Guo | Yuxuan Guo, Yifan Hao, Rui Zhang, Enshuai Zhou, Zidong Du, Xishan
Zhang, Xinkai Song, Yuanbo Wen, Yongwei Zhao, Xuehai Zhou, Jiaming Guo, Qi
Yi, Shaohui Peng, Di Huang, Ruizhi Chen, Qi Guo, Yunji Chen | Emergent Communication for Rules Reasoning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Research on emergent communication between deep-learning-based agents has
received extensive attention due to its inspiration for linguistics and
artificial intelligence. However, previous attempts have hovered around
emerging communication under perception-oriented environmental settings, which
force agents to describe low-level perceptual features within image or symbol
contexts. In this work, inspired by the classic human reasoning test (namely
Raven's Progressive Matrix), we propose the Reasoning Game, a
cognition-oriented environment that encourages agents to reason and communicate
high-level rules, rather than perceived low-level contexts. Moreover, we
propose 1) an unbiased dataset (namely rule-RAVEN) as a benchmark to avoid
overfitting, 2) and a two-stage curriculum agent training method as a baseline
for more stable convergence in the Reasoning Game, where contexts and semantics
are bilaterally drifting. Experimental results show that, in the Reasoning
Game, a semantically stable and compositional language emerges to solve
reasoning problems. The emerged language helps agents apply the extracted rules
to the generalization of unseen context attributes, and to the transfer between
different context attributes or even tasks.
| [
{
"version": "v1",
"created": "Wed, 8 Nov 2023 05:57:39 GMT"
}
] | 1,699,488,000,000 | [
[
"Guo",
"Yuxuan",
""
],
[
"Hao",
"Yifan",
""
],
[
"Zhang",
"Rui",
""
],
[
"Zhou",
"Enshuai",
""
],
[
"Du",
"Zidong",
""
],
[
"Zhang",
"Xishan",
""
],
[
"Song",
"Xinkai",
""
],
[
"Wen",
"Yuanbo",
""
],
[
"Zhao",
"Yongwei",
""
],
[
"Zhou",
"Xuehai",
""
],
[
"Guo",
"Jiaming",
""
],
[
"Yi",
"Qi",
""
],
[
"Peng",
"Shaohui",
""
],
[
"Huang",
"Di",
""
],
[
"Chen",
"Ruizhi",
""
],
[
"Guo",
"Qi",
""
],
[
"Chen",
"Yunji",
""
]
] |
2311.04659 | Yiyuan Li | Yiyuan Li, Rakesh R. Menon, Sayan Ghosh, Shashank Srivastava | Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models | EMNLP 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Generalized quantifiers (e.g., few, most) are used to indicate the
proportions predicates are satisfied (for example, some apples are red). One
way to interpret quantifier semantics is to explicitly bind these satisfactions
with percentage scopes (e.g., 30%-40% of apples are red). This approach can be
helpful for tasks like logic formalization and surface-form quantitative
reasoning (Gordon and Schubert, 2010; Roy et al., 2015). However, it remains
unclear if recent foundation models possess this ability, as they lack direct
training signals. To explore this, we introduce QuRe, a crowd-sourced dataset
of human-annotated generalized quantifiers in Wikipedia sentences featuring
percentage-equipped predicates. We explore quantifier comprehension in language
models using PRESQUE, a framework that combines natural language inference and
the Rational Speech Acts framework. Experimental results on the HVD dataset and
QuRe illustrate that PRESQUE, employing pragmatic reasoning, performs 20%
better than a literal reasoning baseline when predicting quantifier percentage
scopes, with no additional training required.
| [
{
"version": "v1",
"created": "Wed, 8 Nov 2023 13:00:06 GMT"
}
] | 1,699,488,000,000 | [
[
"Li",
"Yiyuan",
""
],
[
"Menon",
"Rakesh R.",
""
],
[
"Ghosh",
"Sayan",
""
],
[
"Srivastava",
"Shashank",
""
]
] |
2311.04778 | Roberto Confalonieri | Roberto Confalonieri and Giancarlo Guizzardi | On the Multiple Roles of Ontologies in Explainable AI | Submitted to the Neurosymbolic AI journal:
https://www.neurosymbolic-ai-journal.com/system/files/nai-paper-683.pdf | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the different roles that explicit knowledge, in
particular ontologies, can play in Explainable AI and in the development of
human-centric explainable systems and intelligible explanations. We consider
three main perspectives in which ontologies can contribute significantly,
namely reference modelling, common-sense reasoning, and knowledge refinement
and complexity management. We overview some of the existing approaches in the
literature, and we position them according to these three proposed
perspectives. The paper concludes by discussing what challenges still need to
be addressed to enable ontology-based approaches to explanation and to evaluate
their human-understandability and effectiveness.
| [
{
"version": "v1",
"created": "Wed, 8 Nov 2023 15:57:26 GMT"
}
] | 1,699,488,000,000 | [
[
"Confalonieri",
"Roberto",
""
],
[
"Guizzardi",
"Giancarlo",
""
]
] |
2311.05227 | Carlos Mougan | Carlos Mougan, Joshua Brand | Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness
Metrics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deontological ethics, specifically understood through Immanuel Kant, provides
a moral framework that emphasizes the importance of duties and principles,
rather than the consequences of action. Understanding that despite the
prominence of deontology, it is currently an overlooked approach in fairness
metrics, this paper explores the compatibility of a Kantian deontological
framework in fairness metrics, part of the AI alignment field. We revisit
Kant's critique of utilitarianism, which is the primary approach in AI fairness
metrics and argue that fairness principles should align with the Kantian
deontological framework. By integrating Kantian ethics into AI alignment, we
not only bring in a widely-accepted prominent moral theory but also strive for
a more morally grounded AI landscape that better balances outcomes and
procedures in pursuit of fairness and justice.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2023 09:16:02 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2024 21:22:51 GMT"
}
] | 1,709,078,400,000 | [
[
"Mougan",
"Carlos",
""
],
[
"Brand",
"Joshua",
""
]
] |
2311.05481 | Mireille Fares | Mireille Fares, Catherine Pelachaud, Nicolas Obin | META4: Semantically-Aligned Generation of Metaphoric Gestures Using
Self-Supervised Text and Speech Representation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Image Schemas are repetitive cognitive patterns that influence the way we
conceptualize and reason about various concepts present in speech. These
patterns are deeply embedded within our cognitive processes and are reflected
in our bodily expressions including gestures. Particularly, metaphoric gestures
possess essential characteristics and semantic meanings that align with Image
Schemas, to visually represent abstract concepts. The shape and form of
gestures can convey abstract concepts, such as extending the forearm and hand
or tracing a line with hand movements to visually represent the image schema of
PATH. Previous behavior generation models have primarily focused on utilizing
speech (acoustic features and text) to drive the generation model of virtual
agents. They have not considered key semantic information as those carried by
Image Schemas to effectively generate metaphoric gestures. To address this
limitation, we introduce META4, a deep learning approach that generates
metaphoric gestures from both speech and Image Schemas. Our approach has two
primary goals: computing Image Schemas from input text to capture the
underlying semantic and metaphorical meaning, and generating metaphoric
gestures driven by speech and the computed image schemas. Our approach is the
first method for generating speech driven metaphoric gestures while leveraging
the potential of Image Schemas. We demonstrate the effectiveness of our
approach and highlight the importance of both speech and image schemas in
modeling metaphoric gestures.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2023 16:16:31 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Nov 2023 10:26:29 GMT"
}
] | 1,700,611,200,000 | [
[
"Fares",
"Mireille",
""
],
[
"Pelachaud",
"Catherine",
""
],
[
"Obin",
"Nicolas",
""
]
] |
2311.05490 | Blai Bonet | Blai Bonet and Hector Geffner | General Policies, Subgoal Structure, and Planning Width | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | It has been observed that many classical planning domains with atomic goals
can be solved by means of a simple polynomial exploration procedure, called IW,
that runs in time exponential in the problem width, which in these cases is
bounded and small. Yet, while the notion of width has become part of
state-of-the-art planning algorithms such as BFWS, there is no good explanation
for why so many benchmark domains have bounded width when atomic goals are
considered. In this work, we address this question by relating bounded width
with the existence of general optimal policies that in each planning instance
are represented by tuples of atoms of bounded size. We also define the notions
of (explicit) serializations and serialized width that have a broader scope as
many domains have a bounded serialized width but no bounded width. Such
problems are solved non-optimally in polynomial time by a suitable variant of
the Serialized IW algorithm. Finally, the language of general policies and the
semantics of serializations are combined to yield a simple, meaningful, and
expressive language for specifying serializations in compact form in the form
of sketches, which can be used for encoding domain control knowledge by hand or
for learning it from small examples. Sketches express general problem
decompositions in terms of subgoals, and sketches of bounded width express
problem decompositions that can be solved in polynomial time.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2023 16:30:22 GMT"
}
] | 1,699,574,400,000 | [
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
] |
2311.05662 | Reham Alharbi Miss | Reham Alharbi and Valentina Tamma and Floriana Grasso and Terry Payne | An Experiment in Retrofitting Competency Questions for Existing
Ontologies | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Competency Questions (CQs) are a form of ontology functional requirements
expressed as natural language questions. Inspecting CQs together with the
axioms in an ontology provides critical insights into the intended scope and
applicability of the ontology. CQs also underpin a number of tasks in the
development of ontologies e.g. ontology reuse, ontology testing, requirement
specification, and the definition of patterns that implement such requirements.
Although CQs are integral to the majority of ontology engineering
methodologies, the practice of publishing CQs alongside the ontological
artefacts is not widely observed by the community. In this context, we present
an experiment in retrofitting CQs from existing ontologies. We propose
RETROFIT-CQs, a method to extract candidate CQs directly from ontologies using
Generative AI. In the paper we present the pipeline that facilitates the
extraction of CQs by leveraging Large Language Models (LLMs) and we discuss its
application to a number of existing ontologies.
| [
{
"version": "v1",
"created": "Thu, 9 Nov 2023 08:57:39 GMT"
}
] | 1,699,833,600,000 | [
[
"Alharbi",
"Reham",
""
],
[
"Tamma",
"Valentina",
""
],
[
"Grasso",
"Floriana",
""
],
[
"Payne",
"Terry",
""
]
] |
2311.05804 | Wensheng Gan | Wensheng Gan, Shicheng Wan, Philip S. Yu | Model-as-a-Service (MaaS): A Survey | Preprint. 3 figures, 1 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the increased number of parameters and data in the pre-trained model
exceeding a certain level, a foundation model (e.g., a large language model)
can significantly improve downstream task performance and emerge with some
novel special abilities (e.g., deep learning, complex reasoning, and human
alignment) that were not present before. Foundation models are a form of
generative artificial intelligence (GenAI), and Model-as-a-Service (MaaS) has
emerged as a groundbreaking paradigm that revolutionizes the deployment and
utilization of GenAI models. MaaS represents a paradigm shift in how we use AI
technologies and provides a scalable and accessible solution for developers and
users to leverage pre-trained AI models without the need for extensive
infrastructure or expertise in model training. In this paper, the introduction
aims to provide a comprehensive overview of MaaS, its significance, and its
implications for various industries. We provide a brief review of the
development history of "X-as-a-Service" based on cloud computing and present
the key technologies involved in MaaS. The development of GenAI models will
become more democratized and flourish. We also review recent application
studies of MaaS. Finally, we highlight several challenges and future issues in
this promising area. MaaS is a new deployment and service paradigm for
different AI-based models. We hope this review will inspire future research in
the field of MaaS.
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2023 00:35:00 GMT"
}
] | 1,699,833,600,000 | [
[
"Gan",
"Wensheng",
""
],
[
"Wan",
"Shicheng",
""
],
[
"Yu",
"Philip S.",
""
]
] |
2311.05851 | Junya Morita | Junya Morita, Tatsuya Yui, Takeru Amaya, Ryuichiro Higashinaka, Yugo
Takeuchi | Cognitive Architecture Toward Common Ground Sharing Among Humans and
Generative AIs: Trial on Model-Model Interactions in Tangram Naming Task | Proceedings of the 2023 AAAI Fall Symposium on Integrating Cognitive
Architectures and Generative Models | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | For generative AIs to be trustworthy, establishing transparent common
grounding with humans is essential. As a preparation toward human-model common
grounding, this study examines the process of model-model common grounding. In
this context, common ground is defined as a cognitive framework shared among
agents in communication, enabling the connection of symbols exchanged between
agents to the meanings inherent in each agent. This connection is facilitated
by a shared cognitive framework among the agents involved. In this research, we
focus on the tangram naming task (TNT) as a testbed to examine the
common-ground-building process. Unlike previous models designed for this task,
our approach employs generative AIs to visualize the internal processes of the
model. In this task, the sender constructs a metaphorical image of an abstract
figure within the model and generates a detailed description based on this
image. The receiver interprets the generated description from the partner by
constructing another image and reconstructing the original abstract figure.
Preliminary results from the study show an improvement in task performance
beyond the chance level, indicating the effect of the common cognitive
framework implemented in the models. Additionally, we observed that incremental
backpropagations leveraging successful communication cases for a component of
the model led to a statistically significant increase in performance. These
results provide valuable insights into the mechanisms of common grounding made
by generative AIs, improving human communication with the evolving intelligent
machines in our future society.
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2023 03:15:17 GMT"
}
] | 1,699,833,600,000 | [
[
"Morita",
"Junya",
""
],
[
"Yui",
"Tatsuya",
""
],
[
"Amaya",
"Takeru",
""
],
[
"Higashinaka",
"Ryuichiro",
""
],
[
"Takeuchi",
"Yugo",
""
]
] |
2311.05997 | Zihao Wang | Zihao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei
Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma,
Yitao Liang | JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal
Language Models | update project page | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Achieving human-like planning and control with multimodal observations in an
open world is a key milestone for more functional generalist agents. Existing
approaches can handle certain long-horizon tasks in an open world. However,
they still struggle when the number of open-world tasks could potentially be
infinite and lack the capability to progressively enhance task completion as
game time progresses. We introduce JARVIS-1, an open-world agent that can
perceive multimodal input (visual observations and human instructions),
generate sophisticated plans, and perform embodied control, all within the
popular yet challenging open-world Minecraft universe. Specifically, we develop
JARVIS-1 on top of pre-trained multimodal language models, which map visual
observations and textual instructions to plans. The plans will be ultimately
dispatched to the goal-conditioned controllers. We outfit JARVIS-1 with a
multimodal memory, which facilitates planning using both pre-trained knowledge
and its actual game survival experiences. JARVIS-1 is the existing most general
agent in Minecraft, capable of completing over 200 different tasks using
control and observation space similar to humans. These tasks range from
short-horizon tasks, e.g., "chopping trees" to long-horizon tasks, e.g.,
"obtaining a diamond pickaxe". JARVIS-1 performs exceptionally well in
short-horizon tasks, achieving nearly perfect performance. In the classic
long-term task of $\texttt{ObtainDiamondPickaxe}$, JARVIS-1 surpasses the
reliability of current state-of-the-art agents by 5 times and can successfully
complete longer-horizon and more challenging tasks. The project page is
available at https://craftjarvis.org/JARVIS-1
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2023 11:17:58 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Nov 2023 08:04:07 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Nov 2023 07:39:48 GMT"
}
] | 1,701,388,800,000 | [
[
"Wang",
"Zihao",
""
],
[
"Cai",
"Shaofei",
""
],
[
"Liu",
"Anji",
""
],
[
"Jin",
"Yonggang",
""
],
[
"Hou",
"Jinbing",
""
],
[
"Zhang",
"Bowei",
""
],
[
"Lin",
"Haowei",
""
],
[
"He",
"Zhaofeng",
""
],
[
"Zheng",
"Zilong",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Liang",
"Yitao",
""
]
] |
2311.06175 | Luiz Capretz | Hussaini Mamman, Shuib Basri, Abdullateef Oluwaqbemiga Balogun,
Abdullahi Abubakar Imam, Ganesh Kumar, Luiz Fernando Capretz | Search-Based Fairness Testing: An Overview | IEEE International Conference on Computing (ICOCO 2023), Langkawi
Island, Malaysia, pp. 89-94, October 2023 | IEEE International Conference on Computing (ICOCO 2023), Langkawi
Island, Malaysia, pp. 89-94, October 2023 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) has demonstrated remarkable capabilities in
domains such as recruitment, finance, healthcare, and the judiciary. However,
biases in AI systems raise ethical and societal concerns, emphasizing the need
for effective fairness testing methods. This paper reviews current research on
fairness testing, particularly its application through search-based testing.
Our analysis highlights progress and identifies areas of improvement in
addressing AI systems biases. Future research should focus on leveraging
established search-based testing methodologies for fairness testing.
| [
{
"version": "v1",
"created": "Fri, 10 Nov 2023 16:47:56 GMT"
}
] | 1,699,833,600,000 | [
[
"Mamman",
"Hussaini",
""
],
[
"Basri",
"Shuib",
""
],
[
"Balogun",
"Abdullateef Oluwaqbemiga",
""
],
[
"Imam",
"Abdullahi Abubakar",
""
],
[
"Kumar",
"Ganesh",
""
],
[
"Capretz",
"Luiz Fernando",
""
]
] |