id
stringlengths 9
10
| submitter
stringlengths 5
47
⌀ | authors
stringlengths 5
1.72k
| title
stringlengths 11
234
| comments
stringlengths 1
491
⌀ | journal-ref
stringlengths 4
396
⌀ | doi
stringlengths 13
97
⌀ | report-no
stringlengths 4
138
⌀ | categories
stringclasses 1
value | license
stringclasses 9
values | abstract
stringlengths 29
3.66k
| versions
listlengths 1
21
| update_date
int64 1,180B
1,718B
| authors_parsed
sequencelengths 1
98
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.19255 | Haowen Xu | Jose Tupayachi, Haowen Xu, Olufemi A. Omitaomu, Mustafa Can Camur,
Aliza Sharmin, Xueping Li | Towards Next-Generation Urban Decision Support Systems through
AI-Powered Generation of Scientific Ontology using Large Language Models -- A
Case in Optimizing Intermodal Freight Transportation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The incorporation of Artificial Intelligence (AI) models into various
optimization systems is on the rise. Yet, addressing complex urban and
environmental management problems normally requires in-depth domain science and
informatics expertise. This expertise is essential for deriving data and
simulation-driven for informed decision support. In this context, we
investigate the potential of leveraging the pre-trained Large Language Models
(LLMs). By adopting ChatGPT API as the reasoning core, we outline an integrated
workflow that encompasses natural language processing, methontology-based
prompt tuning, and transformers. This workflow automates the creation of
scenario-based ontology using existing research articles and technical manuals
of urban datasets and simulations. The outcomes of our methodology are
knowledge graphs in widely adopted ontology languages (e.g., OWL, RDF, SPARQL).
These facilitate the development of urban decision support systems by enhancing
the data and metadata modeling, the integration of complex datasets, the
coupling of multi-domain simulation models, and the formulation of
decision-making metrics and workflow. The feasibility of our methodology is
evaluated through a comparative analysis that juxtaposes our AI-generated
ontology with the well-known Pizza Ontology employed in tutorials for popular
ontology software (e.g., prot\'eg\'e). We close with a real-world case study of
optimizing the complex urban system of multi-modal freight transportation by
generating anthologies of various domain data and simulations to support
informed decision-making.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 16:40:31 GMT"
}
] | 1,717,027,200,000 | [
[
"Tupayachi",
"Jose",
""
],
[
"Xu",
"Haowen",
""
],
[
"Omitaomu",
"Olufemi A.",
""
],
[
"Camur",
"Mustafa Can",
""
],
[
"Sharmin",
"Aliza",
""
],
[
"Li",
"Xueping",
""
]
] |
2405.19444 | Zhenwen Liang | Zhenwen Liang, Dian Yu, Wenhao Yu, Wenlin Yao, Zhihan Zhang,
Xiangliang Zhang, Dong Yu | MathChat: Benchmarking Mathematical Reasoning and Instruction Following
in Multi-Turn Interactions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated impressive capabilities in
mathematical problem solving, particularly in single turn question answering
formats. However, real world scenarios often involve mathematical question
answering that requires multi turn or interactive information exchanges, and
the performance of LLMs on these tasks is still underexplored. This paper
introduces MathChat, a comprehensive benchmark specifically designed to
evaluate LLMs across a broader spectrum of mathematical tasks. These tasks are
structured to assess the models' abilities in multiturn interactions and open
ended generation. We evaluate the performance of various SOTA LLMs on the
MathChat benchmark, and we observe that while these models excel in single turn
question answering, they significantly underperform in more complex scenarios
that require sustained reasoning and dialogue understanding. To address the
above limitations of existing LLMs when faced with multiturn and open ended
tasks, we develop MathChat sync, a synthetic dialogue based math dataset for
LLM finetuning, focusing on improving models' interaction and instruction
following capabilities in conversations. Experimental results emphasize the
need for training LLMs with diverse, conversational instruction tuning datasets
like MathChatsync. We believe this work outlines one promising direction for
improving the multiturn mathematical reasoning abilities of LLMs, thus pushing
forward the development of LLMs that are more adept at interactive mathematical
problem solving and real world applications.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 18:45:55 GMT"
}
] | 1,717,113,600,000 | [
[
"Liang",
"Zhenwen",
""
],
[
"Yu",
"Dian",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Yao",
"Wenlin",
""
],
[
"Zhang",
"Zhihan",
""
],
[
"Zhang",
"Xiangliang",
""
],
[
"Yu",
"Dong",
""
]
] |
2405.19453 | Chamani Shiranthika Jayakody Kankanamalage | Chamani Shiranthika, Parvaneh Saeedi, Ivan V. Baji\'c | Optimizing Split Points for Error-Resilient SplitFed Learning | Accepted for poster presentation at the Women in Computer Vision
(WiCV) workshop in CVPR 2024 | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Recent advancements in decentralized learning, such as Federated Learning
(FL), Split Learning (SL), and Split Federated Learning (SplitFed), have
expanded the potentials of machine learning. SplitFed aims to minimize the
computational burden on individual clients in FL and parallelize SL while
maintaining privacy. This study investigates the resilience of SplitFed to
packet loss at model split points. It explores various parameter aggregation
strategies of SplitFed by examining the impact of splitting the model at
different points-either shallow split or deep split-on the final global model
performance. The experiments, conducted on a human embryo image segmentation
task, reveal a statistically significant advantage of a deeper split point.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 19:03:27 GMT"
}
] | 1,717,113,600,000 | [
[
"Shiranthika",
"Chamani",
""
],
[
"Saeedi",
"Parvaneh",
""
],
[
"Bajić",
"Ivan V.",
""
]
] |
2405.19456 | Xisen Wang | Xisen Wang, Yigit Ihlamur | An Automated Startup Evaluation Pipeline: Startup Success Forecasting
Framework (SSFF) | For relevant code:
https://github.com/Xisen-Wang/Startup-Success-Forecasting-Framework | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating startups in their early stages is a complex task that requires
detailed analysis by experts. While automating this process on a large scale
can significantly impact businesses, the inherent complexity poses challenges.
This paper addresses this challenge by introducing the Startup Success
Forecasting Framework (SSFF), a new automated system that combines traditional
machine learning with advanced language models. This intelligent agent-based
architecture is designed to reason, act, synthesize, and decide like a venture
capitalist to perform the analysis end-to-end. The SSFF is made up of three
main parts: - Prediction Block: Uses random forests and neural networks to make
predictions. - Analyst Block: Simulates VC analysis scenario and uses SOTA
prompting techniques - External Knowledge Block: Gathers real-time information
from external sources. This framework requires minimal input data about the
founder and startup description, enhances it with additional data from external
resources, and performs a detailed analysis with high accuracy, all in an
automated manner
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 19:07:42 GMT"
}
] | 1,717,113,600,000 | [
[
"Wang",
"Xisen",
""
],
[
"Ihlamur",
"Yigit",
""
]
] |
2405.19464 | Haowen Xu | Haowen Xu, Femi Omitaomu, Soheil Sabri, Xiao Li, Yongze Song | Leveraging Generative AI for Smart City Digital Twins: A Survey on the
Autonomous Generation of Data, Scenarios, 3D City Models, and Urban Designs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The digital transformation of modern cities by integrating advanced
information, communication, and computing technologies has marked the epoch of
data-driven smart city applications for efficient and sustainable urban
management. Despite their effectiveness, these applications often rely on
massive amounts of high-dimensional and multi-domain data for monitoring and
characterizing different urban sub-systems, presenting challenges in
application areas that are limited by data quality and availability, as well as
costly efforts for generating urban scenarios and design alternatives. As an
emerging research area in deep learning, Generative Artificial Intelligence
(AI) models have demonstrated their unique values in data and code generation.
This survey paper aims to explore the innovative integration of generative AI
techniques and urban digital twins to address challenges in the realm of smart
cities in various urban sectors, such as transportation and mobility
management, energy system operations, building and infrastructure management,
and urban design. The survey starts with the introduction of popular generative
AI models with their application areas, followed by a structured review of the
existing urban science applications that leverage the autonomous capability of
the generative AI techniques to facilitate (a) data augmentation for promoting
urban monitoring and predictive analytics, (b) synthetic data and scenario
generation, (c) automated 3D city modeling, and (d) generative urban design and
optimization. Based on the review, this survey discusses potential
opportunities and technical strategies that integrate generative AI models into
the next-generation urban digital twins for more reliable, scalable, and
automated management of smart cities.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 19:23:07 GMT"
}
] | 1,717,113,600,000 | [
[
"Xu",
"Haowen",
""
],
[
"Omitaomu",
"Femi",
""
],
[
"Sabri",
"Soheil",
""
],
[
"Li",
"Xiao",
""
],
[
"Song",
"Yongze",
""
]
] |
2405.19498 | Robert Johansson | Robert Johansson | Machine Psychology: Integrating Operant Conditioning with the
Non-Axiomatic Reasoning System for Advancing Artificial General Intelligence
Research | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces an interdisciplinary framework called Machine
Psychology, which merges principles from operant learning psychology with a
specific Artificial Intelligence model, the Non-Axiomatic Reasoning System
(NARS), to enhance Artificial General Intelligence (AGI) research. The core
premise of this framework is that adaptation is crucial to both biological and
artificial intelligence and can be understood through operant conditioning
principles. The study assesses this approach via three operant learning tasks
using OpenNARS for Applications (ONA): simple discrimination, changing
contingencies, and conditional discrimination tasks.
In the simple discrimination task, NARS demonstrated rapid learning,
achieving perfect accuracy during both training and testing phases. The
changing contingencies task showcased NARS's adaptability, as it successfully
adjusted its behavior when task conditions were reversed. In the conditional
discrimination task, NARS handled complex learning scenarios effectively,
achieving high accuracy by forming and utilizing intricate hypotheses based on
conditional cues.
These findings support the application of operant conditioning as a framework
for creating adaptive AGI systems. NARS's ability to operate under conditions
of insufficient knowledge and resources, coupled with its sensorimotor
reasoning capabilities, establishes it as a robust model for AGI. The Machine
Psychology framework, by incorporating elements of natural intelligence such as
continuous learning and goal-driven behavior, offers a scalable and flexible
approach for real-world applications. Future research should investigate using
enhanced NARS systems, more advanced tasks, and applying this framework to
diverse, complex challenges to further progress the development of human-level
AI.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 20:23:57 GMT"
}
] | 1,717,113,600,000 | [
[
"Johansson",
"Robert",
""
]
] |
2405.19522 | Nestor Maslej | Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli,
Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons,
James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark | Artificial Intelligence Index Report 2024 | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The 2024 Index is our most comprehensive to date and arrives at an important
moment when AI's influence on society has never been more pronounced. This
year, we have broadened our scope to more extensively cover essential trends
such as technical advancements in AI, public perceptions of the technology, and
the geopolitical dynamics surrounding its development. Featuring more original
data than ever before, this edition introduces new estimates on AI training
costs, detailed analyses of the responsible AI landscape, and an entirely new
chapter dedicated to AI's impact on science and medicine. The AI Index report
tracks, collates, distills, and visualizes data related to artificial
intelligence (AI). Our mission is to provide unbiased, rigorously vetted,
broadly sourced data in order for policymakers, researchers, executives,
journalists, and the general public to develop a more thorough and nuanced
understanding of the complex field of AI. The AI Index is recognized globally
as one of the most credible and authoritative sources for data and insights on
artificial intelligence. Previous editions have been cited in major newspapers,
including the The New York Times, Bloomberg, and The Guardian, have amassed
hundreds of academic citations, and been referenced by high-level policymakers
in the United States, the United Kingdom, and the European Union, among other
places. This year's edition surpasses all previous ones in size, scale, and
scope, reflecting the growing significance that AI is coming to hold in all of
our lives.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 20:59:57 GMT"
}
] | 1,717,113,600,000 | [
[
"Maslej",
"Nestor",
""
],
[
"Fattorini",
"Loredana",
""
],
[
"Perrault",
"Raymond",
""
],
[
"Parli",
"Vanessa",
""
],
[
"Reuel",
"Anka",
""
],
[
"Brynjolfsson",
"Erik",
""
],
[
"Etchemendy",
"John",
""
],
[
"Ligett",
"Katrina",
""
],
[
"Lyons",
"Terah",
""
],
[
"Manyika",
"James",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Shoham",
"Yoav",
""
],
[
"Wald",
"Russell",
""
],
[
"Clark",
"Jack",
""
]
] |
2405.19606 | Zhuang Qi | Xiaming Che, Junlin Zhang, Zhuang Qi, Xin Qi | Relation Modeling and Distillation for Learning with Noisy Labels | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning with noisy labels has become an effective strategy for enhancing the
robustness of models, which enables models to better tolerate inaccurate data.
Existing methods either focus on optimizing the loss function to mitigate the
interference from noise, or design procedures to detect potential noise and
correct errors. However, their effectiveness is often compromised in
representation learning due to the dilemma where models overfit to noisy
labels. To address this issue, this paper proposes a relation modeling and
distillation framework that models inter-sample relationships via
self-supervised learning and employs knowledge distillation to enhance
understanding of latent associations, which mitigate the impact of noisy
labels. Specifically, the proposed method, termed RMDNet, includes two main
modules, where the relation modeling (RM) module implements the contrastive
learning technique to learn representations of all data, an unsupervised
approach that effectively eliminates the interference of noisy tags on feature
extraction. The relation-guided representation learning (RGRL) module utilizes
inter-sample relation learned from the RM module to calibrate the
representation distribution for noisy samples, which is capable of improving
the generalization of the model in the inference phase. Notably, the proposed
RMDNet is a plug-and-play framework that can integrate multiple methods to its
advantage. Extensive experiments were conducted on two datasets, including
performance comparison, ablation study, in-depth analysis and case study. The
results show that RMDNet can learn discriminative representations for noisy
data, which results in superior performance than the existing methods.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 01:47:27 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jun 2024 01:59:09 GMT"
}
] | 1,717,459,200,000 | [
[
"Che",
"Xiaming",
""
],
[
"Zhang",
"Junlin",
""
],
[
"Qi",
"Zhuang",
""
],
[
"Qi",
"Xin",
""
]
] |
2405.19631 | Akul Goel | Akul Goel, Surya Narayanan Hari, Belinda Waltman, Matt Thomson | Leveraging Open-Source Large Language Models for encoding Social
Determinants of Health using an Intelligent Router | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Social Determinants of Health (SDOH) play a significant role in patient
health outcomes. The Center of Disease Control (CDC) introduced a subset of
ICD-10 codes called Z-codes in an attempt to officially recognize and measure
SDOH in the health care system. However, these codes are rarely annotated in a
patient's Electronic Health Record (EHR), and instead, in many cases, need to
be inferred from clinical notes. Previous research has shown that large
language models (LLMs) show promise on extracting unstructured data from EHRs.
However, with thousands of models to choose from with unique architectures and
training sets, it's difficult to choose one model that performs the best on
coding tasks. Further, clinical notes contain trusted health information making
the use of closed-source language models from commercial vendors difficult, so
the identification of open source LLMs that can be run within health
organizations and exhibits high performance on SDOH tasks is an urgent problem.
Here, we introduce an intelligent routing system for SDOH coding that uses a
language model router to direct medical record data to open source LLMs that
demonstrate optimal performance on specific SDOH codes. The intelligent routing
system exhibits state of the art performance of 97.4% accuracy averaged across
5 codes, including homelessness and food insecurity, on par with closed models
such as GPT-4o. In order to train the routing system and validate models, we
also introduce a synthetic data generation and validation paradigm to increase
the scale of training data without needing privacy protected medical records.
Together, we demonstrate an architecture for intelligent routing of inputs to
task-optimal language models to achieve high performance across a set of
medical coding sub-tasks.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 02:33:28 GMT"
}
] | 1,717,113,600,000 | [
[
"Goel",
"Akul",
""
],
[
"Hari",
"Surya Narayanan",
""
],
[
"Waltman",
"Belinda",
""
],
[
"Thomson",
"Matt",
""
]
] |
2405.19642 | Mengjie Gan | Mengjie Gan, Penglong Lian, Zhiheng Su, Jiyang Zhang, Jialong Huang,
Benhao Wang, Jianxiao Zou and Shicai Fan | Few-shot fault diagnosis based on multi-scale graph convolution
filtering for industry | 6 pages, 2 figures, 2 tables, 63rd IEEE Conference on Decision and
Control | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industrial equipment fault diagnosis often encounter challenges such as the
scarcity of fault data, complex operating conditions, and varied types of
failures. Signal analysis, data statistical learning, and conventional deep
learning techniques face constraints under these conditions due to their
substantial data requirements and the necessity for transfer learning to
accommodate new failure modes. To effectively leverage information and extract
the intrinsic characteristics of faults across different domains under limited
sample conditions, this paper introduces a fault diagnosis approach employing
Multi-Scale Graph Convolution Filtering (MSGCF). MSGCF enhances the traditional
Graph Neural Network (GNN) framework by integrating both local and global
information fusion modules within the graph convolution filter block. This
advancement effectively mitigates the over-smoothing issue associated with
excessive layering of graph convolutional layers while preserving a broad
receptive field. It also reduces the risk of overfitting in few-shot diagnosis,
thereby augmenting the model's representational capacity. Experiments on the
University of Paderborn bearing dataset (PU) demonstrate that the MSGCF method
proposed herein surpasses alternative approaches in accuracy, thereby offering
valuable insights for industrial fault diagnosis in few-shot learning
scenarios.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 02:51:29 GMT"
}
] | 1,717,113,600,000 | [
[
"Gan",
"Mengjie",
""
],
[
"Lian",
"Penglong",
""
],
[
"Su",
"Zhiheng",
""
],
[
"Zhang",
"Jiyang",
""
],
[
"Huang",
"Jialong",
""
],
[
"Wang",
"Benhao",
""
],
[
"Zou",
"Jianxiao",
""
],
[
"Fan",
"Shicai",
""
]
] |
2405.19654 | Jinxia Yang | Jinxia Yang, Bing Su, Wayne Xin Zhao, Ji-Rong Wen | Unlocking the Power of Spatial and Temporal Information in Medical
Multimodal Pre-training | Accepted at ICML 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical vision-language pre-training methods mainly leverage the
correspondence between paired medical images and radiological reports. Although
multi-view spatial images and temporal sequences of image-report pairs are
available in off-the-shelf multi-modal medical datasets, most existing methods
have not thoroughly tapped into such extensive supervision signals. In this
paper, we introduce the Med-ST framework for fine-grained spatial and temporal
modeling to exploit information from multiple spatial views of chest
radiographs and temporal historical records. For spatial modeling, Med-ST
employs the Mixture of View Expert (MoVE) architecture to integrate different
visual features from both frontal and lateral views. To achieve a more
comprehensive alignment, Med-ST not only establishes the global alignment
between whole images and texts but also introduces modality-weighted local
alignment between text tokens and spatial regions of images. For temporal
modeling, we propose a novel cross-modal bidirectional cycle consistency
objective by forward mapping classification (FMC) and reverse mapping
regression (RMR). By perceiving temporal information from simple to complex,
Med-ST can learn temporal semantics. Experimental results across four distinct
tasks demonstrate the effectiveness of Med-ST, especially in temporal
classification tasks. Our code and model are available at
https://github.com/SVT-Yang/MedST.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 03:15:09 GMT"
}
] | 1,717,113,600,000 | [
[
"Yang",
"Jinxia",
""
],
[
"Su",
"Bing",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
2405.19656 | Han Liu | Han Liu, Peng Cui, Bingning Wang, Jun Zhu, Xiaolin Hu | Accurate and Reliable Predictions with Mutual-Transport Ensemble | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) have achieved remarkable success in a variety of
tasks, especially when it comes to prediction accuracy. However, in complex
real-world scenarios, particularly in safety-critical applications, high
accuracy alone is not enough. Reliable uncertainty estimates are crucial.
Modern DNNs, often trained with cross-entropy loss, tend to be overconfident,
especially with ambiguous samples. To improve uncertainty calibration, many
techniques have been developed, but they often compromise prediction accuracy.
To tackle this challenge, we propose the ``mutual-transport ensemble'' (MTE).
This approach introduces a co-trained auxiliary model and adaptively
regularizes the cross-entropy loss using Kullback-Leibler (KL) divergence
between the prediction distributions of the primary and auxiliary models. We
conducted extensive studies on various benchmarks to validate the effectiveness
of our method. The results show that MTE can simultaneously enhance both
accuracy and uncertainty calibration. For example, on the CIFAR-100 dataset,
our MTE method on ResNet34/50 achieved significant improvements compared to
previous state-of-the-art method, with absolute accuracy increases of
2.4%/3.7%, relative reductions in ECE of $42.3%/29.4%, and relative reductions
in classwise-ECE of 11.6%/15.3%.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 03:15:59 GMT"
}
] | 1,717,113,600,000 | [
[
"Liu",
"Han",
""
],
[
"Cui",
"Peng",
""
],
[
"Wang",
"Bingning",
""
],
[
"Zhu",
"Jun",
""
],
[
"Hu",
"Xiaolin",
""
]
] |
2405.19686 | Jingwei Sun | Jingwei Sun, Zhixu Du, Yiran Chen | Knowledge Graph Tuning: Real-time Large Language Model Personalization
based on Human Feedback | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated remarkable proficiency in a
range of natural language processing tasks. Once deployed, LLMs encounter users
with personalized factual knowledge, and such personalized knowledge is
consistently reflected through users' interactions with the LLMs. To enhance
user experience, real-time model personalization is essential, allowing LLMs to
adapt user-specific knowledge based on user feedback during human-LLM
interactions. Existing methods mostly require back-propagation to finetune the
model parameters, which incurs high computational and memory costs. In
addition, these methods suffer from low interpretability, which will cause
unforeseen impacts on model performance during long-term use, where the user's
personalized knowledge is accumulated extensively.To address these challenges,
we propose Knowledge Graph Tuning (KGT), a novel approach that leverages
knowledge graphs (KGs) to personalize LLMs. KGT extracts personalized factual
knowledge triples from users' queries and feedback and optimizes KGs without
modifying the LLM parameters. Our method improves computational and memory
efficiency by avoiding back-propagation and ensures interpretability by making
the KG adjustments comprehensible to humans.Experiments with state-of-the-art
LLMs, including GPT-2, Llama2, and Llama3, show that KGT significantly improves
personalization performance while reducing latency and GPU memory costs.
Ultimately, KGT offers a promising solution of effective, efficient, and
interpretable real-time LLM personalization during user interactions with the
LLMs.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 04:57:03 GMT"
}
] | 1,717,113,600,000 | [
[
"Sun",
"Jingwei",
""
],
[
"Du",
"Zhixu",
""
],
[
"Chen",
"Yiran",
""
]
] |
2405.19694 | Wenjing Xie | Wenjing Xie, Juxin Niu, Chun Jason Xue, Nan Guan | Grade Like a Human: Rethinking Automated Assessment with Large Language
Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While large language models (LLMs) have been used for automated grading, they
have not yet achieved the same level of performance as humans, especially when
it comes to grading complex questions. Existing research on this topic focuses
on a particular step in the grading procedure: grading using predefined
rubrics. However, grading is a multifaceted procedure that encompasses other
crucial steps, such as grading rubrics design and post-grading review. There
has been a lack of systematic research exploring the potential of LLMs to
enhance the entire grading~process.
In this paper, we propose an LLM-based grading system that addresses the
entire grading procedure, including the following key components: 1) Developing
grading rubrics that not only consider the questions but also the student
answers, which can more accurately reflect students' performance. 2) Under the
guidance of grading rubrics, providing accurate and consistent scores for each
student, along with customized feedback. 3) Conducting post-grading review to
better ensure accuracy and fairness. Additionally, we collected a new dataset
named OS from a university operating system course and conducted extensive
experiments on both our new dataset and the widely used Mohler dataset.
Experiments demonstrate the effectiveness of our proposed approach, providing
some new insights for developing automated grading systems based on LLMs.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 05:08:15 GMT"
}
] | 1,717,113,600,000 | [
[
"Xie",
"Wenjing",
""
],
[
"Niu",
"Juxin",
""
],
[
"Xue",
"Chun Jason",
""
],
[
"Guan",
"Nan",
""
]
] |
2405.19736 | Yunlong Liu | Dayang Liang, Jinyang Lai, and Yunlong Liu | Learning Task-relevant Sequence Representations via Intrinsic Dynamics
Characteristics in Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Learning task-relevant state representations is crucial to solving the
problem of scene generalization in visual deep reinforcement learning. Prior
work typically establishes a self-supervised auxiliary learner, introducing
elements (e.g., rewards and actions) to extract task-relevant state information
from observations through behavioral similarity metrics. However, the methods
often ignore the inherent relationships between the elements (e.g., dynamics
relationships) that are essential for learning accurate representations, and
they are also limited to single-step metrics, which impedes the discrimination
of short-term similar task/behavior information in long-term dynamics
transitions. To solve the issues, we propose an intrinsic dynamic
characteristics-driven sequence representation learning method (DSR) over a
common DRL frame. Concretely, inspired by the fact of state transition in the
underlying system, it constrains the optimization of the encoder via modeling
the dynamics equations related to the state transition, which prompts the
latent encoding information to satisfy the state transition process and thereby
distinguishes state space and noise space. Further, to refine the ability of
encoding similar tasks based on dynamics constraints, DSR also sequentially
models inherent dynamics equation relationships from the perspective of
sequence elements' frequency domain and multi-step prediction. Finally,
experimental results show that DSR has achieved a significant performance boost
in the Distracting DMControl Benchmark, with an average of 78.9% over the
backbone baseline. Further results indicate that it also achieves the best
performance in real-world autonomous driving tasks in the CARLA simulator.
Moreover, the qualitative analysis results of t-SNE visualization validate that
our method possesses superior representation ability on visual tasks.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 06:31:03 GMT"
}
] | 1,717,113,600,000 | [
[
"Liang",
"Dayang",
""
],
[
"Lai",
"Jinyang",
""
],
[
"Liu",
"Yunlong",
""
]
] |
2405.19761 | Zhihao Chang | Zhihao Chang, Linzhu Yu, Huan Li, Sai Wu, Gang Chen, Dongxiang Zhang | Revisiting CNNs for Trajectory Similarity Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity search is a fundamental but expensive operator in querying
trajectory data, due to its quadratic complexity of distance computation. To
mitigate the computational burden for long trajectories, neural networks have
been widely employed for similarity learning and each trajectory is encoded as
a high-dimensional vector for similarity search with linear complexity. Given
the sequential nature of trajectory data, previous efforts have been primarily
devoted to the utilization of RNNs or Transformers.
In this paper, we argue that the common practice of treating trajectory as
sequential data results in excessive attention to capturing long-term global
dependency between two sequences. Instead, our investigation reveals the
pivotal role of local similarity, prompting a revisit of simple CNNs for
trajectory similarity learning. We introduce ConvTraj, incorporating both 1D
and 2D convolutions to capture sequential and geo-distribution features of
trajectories, respectively. In addition, we conduct a series of theoretical
analyses to justify the effectiveness of ConvTraj. Experimental results on
three real-world large-scale datasets demonstrate that ConvTraj achieves
state-of-the-art accuracy in trajectory similarity search. Owing to the simple
network structure of ConvTraj, the training and inference speed on the Porto
dataset with 1.6 million trajectories are increased by at least $240$x and
$2.16$x, respectively. The source code and dataset can be found at
\textit{\url{https://github.com/Proudc/ConvTraj}}.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 07:16:03 GMT"
}
] | 1,717,113,600,000 | [
[
"Chang",
"Zhihao",
""
],
[
"Yu",
"Linzhu",
""
],
[
"Li",
"Huan",
""
],
[
"Wu",
"Sai",
""
],
[
"Chen",
"Gang",
""
],
[
"Zhang",
"Dongxiang",
""
]
] |
2405.19808 | Herman Cappelen | Herman Cappelen and Josh Dever | AI with Alien Content and Alien Metasemantics | 20 pages, book chapter | in Ernie Lepore and Luvell Anderson (Eds), The Oxford Handbook of
Applied Philosophy of Language, Oxford Handbooks (2024) | 10.1093/oxfordhb/9780192844118.013.47 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AlphaGo plays chess and Go in a creative and novel way. It is natural for us
to attribute contents to it, such as that it doesn't view being several pawns
behind, if it has more board space, as bad. The framework introduced in
Cappelen and Dever (2021) provides a way of thinking about the semantics and
the metasemantics of AI content: does AlphaGo entertain contents like this, and
if so, in virtue of what does a given state of the program mean that particular
content? One salient question Cappelen and Dever didn't consider was the
possibility of alien content. Alien content is content that is not or cannot be
expressed by human beings. It's highly plausible that AlphaGo, or any other
sophisticated AI system, expresses alien contents. That this is so, moreover,
is plausibly a metasemantic fact: a fact that has to do with how AI comes to
entertain content in the first place, one that will heed the vastly different
etiology of AI and human content. This chapter explores the question of alien
content in AI from a semantic and metasemantic perspective. It lays out the
logical space of possible responses to the semantic and metasemantic questions
alien content poses, considers whether and how we humans could communicate with
entities who express alien content, and points out that getting clear about
such questions might be important for more 'applied' issues in the philosophy
of AI, such as existential risk and XAI.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 08:17:15 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jun 2024 22:27:50 GMT"
}
] | 1,717,459,200,000 | [
[
"Cappelen",
"Herman",
""
],
[
"Dever",
"Josh",
""
]
] |
2405.19816 | Sylvain Chevallier | Manon Verbockhaven (TAU, LISN), Sylvain Chevallier (TAU, LISN),
Guillaume Charpiat (TAU, LISN) | Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them
Optimally | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning tasks are generally formulated as optimization problems,
where one searches for an optimal function within a certain functional space.
In practice, parameterized functional spaces are considered, in order to be
able to perform gradient descent. Typically, a neural network architecture is
chosen and fixed, and its parameters (connection weights) are optimized,
yielding an architecture-dependent result. This way of proceeding however
forces the evolution of the function during training to lie within the realm of
what is expressible with the chosen architecture, and prevents any optimization
across architectures. Costly architectural hyper-parameter optimization is
often performed to compensate for this. Instead, we propose to adapt the
architecture on the fly during training. We show that the information about
desirable architectural changes, due to expressivity bottlenecks when
attempting to follow the functional gradient, can be extracted from %the
backpropagation. To do this, we propose a mathematical definition of
expressivity bottlenecks, which enables us to detect, quantify and solve them
while training, by adding suitable neurons when and where needed. Thus, while
the standard approach requires large networks, in terms of number of neurons
per layer, for expressivity and optimization reasons, we are able to start with
very small neural networks and let them grow appropriately. As a proof of
concept, we show results~on the CIFAR dataset, matching large neural network
accuracy, with competitive training time, while removing the need for standard
architectural hyper-parameter search.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 08:23:56 GMT"
}
] | 1,717,113,600,000 | [
[
"Verbockhaven",
"Manon",
"",
"TAU, LISN"
],
[
"Chevallier",
"Sylvain",
"",
"TAU, LISN"
],
[
"Charpiat",
"Guillaume",
"",
"TAU, LISN"
]
] |
2405.19832 | Herman Cappelen | Herman Cappelen, Josh Dever and John Hawthorne | AI Safety: A Climb To Armageddon? | 20 page article | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an argument that certain AI safety measures, rather than
mitigating existential risk, may instead exacerbate it. Under certain key
assumptions - the inevitability of AI failure, the expected correlation between
an AI system's power at the point of failure and the severity of the resulting
harm, and the tendency of safety measures to enable AI systems to become more
powerful before failing - safety efforts have negative expected utility. The
paper examines three response strategies: Optimism, Mitigation, and Holism.
Each faces challenges stemming from intrinsic features of the AI safety
landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium
Fluctuation. The surprising robustness of the argument forces a re-examination
of core assumptions around AI safety and points to several avenues for further
research.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 08:41:54 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jun 2024 22:32:46 GMT"
}
] | 1,717,459,200,000 | [
[
"Cappelen",
"Herman",
""
],
[
"Dever",
"Josh",
""
],
[
"Hawthorne",
"John",
""
]
] |
2405.19837 | Margarida Romero | Margarida Romero (LINE, COMUE UCA, ULaval, Mnemosyne) | Lifelong learning challenges in the era of artificial intelligence: a
computational thinking perspective | null | IRMBAM, Ipag, Jul 2024, Nice, France | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of artificial intelligence (AI) has brought significant
challenges to the education and workforce skills required to take advantage of
AI for human-AI collaboration in the workplace. As AI continues to reshape
industries and job markets, the need to define how AI literacy can be
considered in lifelong learning has become increasingly critical (Cetindamar et
al., 2022; Laupichler et al., 2022; Romero et al., 2023). Like any new
technology, AI is the subject of both hopes and fears, and what it entails
today presents major challenges (Cugurullo \& Acheampong, 2023; Villani et al.,
2018). It also raises profound questions about our own humanity. Will the
machine surpass the intelligence of the humans who designed it? What will be
the relationship between so-called AI and our human intelligences? How could
human-AI collaboration be regulated in a way that serves the Sustainable
Development Goals (SDGs)? This paper provides a review of the challenges of
lifelong learning in the era of AI from a computational thinking, critical
thinking, and creative competencies perspective, highlighting the implications
for management and leadership in organizations.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 08:46:11 GMT"
}
] | 1,717,113,600,000 | [
[
"Romero",
"Margarida",
"",
"LINE, COMUE UCA, ULaval, Mnemosyne"
]
] |
2405.19850 | Yuxiao Luo | Yuxiao Luo, Zhongcai Cao, Xin Jin, Kang Liu, Ling Yin | Deciphering Human Mobility: Inferring Semantics of Trajectories with
Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding human mobility patterns is essential for various applications,
from urban planning to public safety. The individual trajectory such as mobile
phone location data, while rich in spatio-temporal information, often lacks
semantic detail, limiting its utility for in-depth mobility analysis. Existing
methods can infer basic routine activity sequences from this data, lacking
depth in understanding complex human behaviors and users' characteristics.
Additionally, they struggle with the dependency on hard-to-obtain auxiliary
datasets like travel surveys. To address these limitations, this paper defines
trajectory semantic inference through three key dimensions: user occupation
category, activity sequence, and trajectory description, and proposes the
Trajectory Semantic Inference with Large Language Models (TSI-LLM) framework to
leverage LLMs infer trajectory semantics comprehensively and deeply. We adopt
spatio-temporal attributes enhanced data formatting (STFormat) and design a
context-inclusive prompt, enabling LLMs to more effectively interpret and infer
the semantics of trajectory data. Experimental validation on real-world
trajectory datasets demonstrates the efficacy of TSI-LLM in deciphering complex
human mobility patterns. This study explores the potential of LLMs in enhancing
the semantic analysis of trajectory data, paving the way for more sophisticated
and accessible human mobility research.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 08:55:48 GMT"
}
] | 1,717,113,600,000 | [
[
"Luo",
"Yuxiao",
""
],
[
"Cao",
"Zhongcai",
""
],
[
"Jin",
"Xin",
""
],
[
"Liu",
"Kang",
""
],
[
"Yin",
"Ling",
""
]
] |
2405.19915 | Huihong Shi | Huihong Shi, Xin Cheng, Wendong Mao, and Zhongfeng Wang | P$^2$-ViT: Power-of-Two Post-Training Quantization and Acceleration for
Fully Quantized Vision Transformer | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs) have excelled in computer vision tasks but are
memory-consuming and computation-intensive, challenging their deployment on
resource-constrained devices. To tackle this limitation, prior works have
explored ViT-tailored quantization algorithms but retained floating-point
scaling factors, which yield non-negligible re-quantization overhead, limiting
ViTs' hardware efficiency and motivating more hardware-friendly solutions. To
this end, we propose \emph{P$^2$-ViT}, the first \underline{P}ower-of-Two (PoT)
\underline{p}ost-training quantization and acceleration framework to accelerate
fully quantized ViTs. Specifically, {as for quantization,} we explore a
dedicated quantization scheme to effectively quantize ViTs with PoT scaling
factors, thus minimizing the re-quantization overhead. Furthermore, we propose
coarse-to-fine automatic mixed-precision quantization to enable better
accuracy-efficiency trade-offs. {In terms of hardware,} we develop {a dedicated
chunk-based accelerator} featuring multiple tailored sub-processors to
individually handle ViTs' different types of operations, alleviating
reconfigurable overhead. Additionally, we design {a tailored row-stationary
dataflow} to seize the pipeline processing opportunity introduced by our PoT
scaling factors, thereby enhancing throughput. Extensive experiments
consistently validate P$^2$-ViT's effectiveness. {Particularly, we offer
comparable or even superior quantization performance with PoT scaling factors
when compared to the counterpart with floating-point scaling factors. Besides,
we achieve up to $\mathbf{10.1\times}$ speedup and $\mathbf{36.8\times}$ energy
saving over GPU's Turing Tensor Cores, and up to $\mathbf{1.84\times}$ higher
computation utilization efficiency against SOTA quantization-based ViT
accelerators. Codes are available at
\url{https://github.com/shihuihong214/P2-ViT}.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 10:26:36 GMT"
}
] | 1,717,113,600,000 | [
[
"Shi",
"Huihong",
""
],
[
"Cheng",
"Xin",
""
],
[
"Mao",
"Wendong",
""
],
[
"Wang",
"Zhongfeng",
""
]
] |
2405.19946 | Xuanfa Jin | Xuanfa Jin, Ziyan Wang, Yali Du, Meng Fang, Haifeng Zhang, Jun Wang | Learning to Discuss Strategically: A Case Study on One Night Ultimate
Werewolf | 27 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Communication is a fundamental aspect of human society, facilitating the
exchange of information and beliefs among people. Despite the advancements in
large language models (LLMs), recent agents built with these often neglect the
control over discussion tactics, which are essential in communication scenarios
and games. As a variant of the famous communication game Werewolf, One Night
Ultimate Werewolf (ONUW) requires players to develop strategic discussion
policies due to the potential role changes that increase the uncertainty and
complexity of the game. In this work, we first present the existence of the
Perfect Bayesian Equilibria (PBEs) in two scenarios of the ONUW game: one with
discussion and one without. The results showcase that the discussion greatly
changes players' utilities by affecting their beliefs, emphasizing the
significance of discussion tactics. Based on the insights obtained from the
analyses, we propose an RL-instructed language agent framework, where a
discussion policy trained by reinforcement learning (RL) is employed to
determine appropriate discussion tactics to adopt. Our experimental results on
several ONUW game settings demonstrate the effectiveness and generalizability
of our proposed framework.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 11:07:06 GMT"
}
] | 1,717,113,600,000 | [
[
"Jin",
"Xuanfa",
""
],
[
"Wang",
"Ziyan",
""
],
[
"Du",
"Yali",
""
],
[
"Fang",
"Meng",
""
],
[
"Zhang",
"Haifeng",
""
],
[
"Wang",
"Jun",
""
]
] |
2405.19956 | Jing Wen | Jing Wen | HOLMES: to Detect Adversarial Examples with Multiple Detectors | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep neural networks (DNNs) can easily be cheated by some imperceptible but
purposeful noise added to images, and erroneously classify them. Previous
defensive work mostly focused on retraining the models or detecting the noise,
but has either shown limited success rates or been attacked by new adversarial
examples. Instead of focusing on adversarial images or the interior of DNN
models, we observed that adversarial examples generated by different algorithms
can be identified based on the output of DNNs (logits). Logit can serve as an
exterior feature to train detectors. Then, we propose HOLMES (Hierarchically
Organized Light-weight Multiple dEtector System) to reinforce DNNs by detecting
potential adversarial examples to minimize the threats they may bring in
practical. HOLMES is able to distinguish \textit{unseen} adversarial examples
from multiple attacks with high accuracy and low false positive rates than
single detector systems even in an adaptive model. To ensure the diversity and
randomness of detectors in HOLMES, we use two methods: training dedicated
detectors for each label and training detectors with top-k logits. Our
effective and inexpensive strategies neither modify original DNN models nor
require its internal parameters. HOLMES is not only compatible with all kinds
of learning models (even only with external APIs), but also complementary to
other defenses to achieve higher detection rates (may also fully protect the
system against various adversarial examples).
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 11:22:55 GMT"
}
] | 1,717,113,600,000 | [
[
"Wen",
"Jing",
""
]
] |
2405.19970 | Petra Bayerl | Petra Saskia Bayerl, Babak Akhgar, Ernesto La Mattina, Barbara
Pirillo, Ioana Cotoi, Davide Ariu, Matteo Mauri, Jorge Garcia, Dimitris
Kavallieros, Antonia Kardara, Konstantina Karagiorgou | Strategies to Counter Artificial Intelligence in Law Enforcement:
Cross-Country Comparison of Citizens in Greece, Italy and Spain | 20th International Conference on Information and Knowledge
Engineering (IKE'21), 3 papges, 1 figure | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper investigates citizens' counter-strategies to the use of Artificial
Intelligence (AI) by law enforcement agencies (LEAs). Based on information from
three countries (Greece, Italy and Spain) we demonstrate disparities in the
likelihood of ten specific counter-strategies. We further identified factors
that increase the propensity for counter-strategies. Our study provides an
important new perspective to societal impacts of security-focused AI
applications by illustrating the conscious, strategic choices by citizens when
confronted with AI capabilities for LEAs.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 11:55:10 GMT"
}
] | 1,717,113,600,000 | [
[
"Bayerl",
"Petra Saskia",
""
],
[
"Akhgar",
"Babak",
""
],
[
"La Mattina",
"Ernesto",
""
],
[
"Pirillo",
"Barbara",
""
],
[
"Cotoi",
"Ioana",
""
],
[
"Ariu",
"Davide",
""
],
[
"Mauri",
"Matteo",
""
],
[
"Garcia",
"Jorge",
""
],
[
"Kavallieros",
"Dimitris",
""
],
[
"Kardara",
"Antonia",
""
],
[
"Karagiorgou",
"Konstantina",
""
]
] |
2405.20046 | Zhuang Qi | Zhuang Qi, Lei Meng, Weihao He, Ruohan Zhang, Yu Wang, Xin Qi, Xiangxu
Meng | Cross-Training with Multi-View Knowledge Fusion for Heterogenous
Federated Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning benefits from cross-training strategies, which enables
models to train on data from distinct sources to improve the generalization
capability. However, the data heterogeneity between sources may lead models to
gradually forget previously acquired knowledge when undergoing cross-training
to adapt to new tasks or data sources. We argue that integrating personalized
and global knowledge to gather information from multiple perspectives could
potentially improve performance. To achieve this goal, this paper presents a
novel approach that enhances federated learning through a cross-training scheme
incorporating multi-view information. Specifically, the proposed method, termed
FedCT, includes three main modules, where the consistency-aware knowledge
broadcasting module aims to optimize model assignment strategies, which
enhances collaborative advantages between clients and achieves an efficient
federated learning process. The multi-view knowledge-guided representation
learning module leverages fused prototypical knowledge from both global and
local views to enhance the preservation of local knowledge before and after
model exchange, as well as to ensure consistency between local and global
knowledge. The mixup-based feature augmentation module aggregates rich
information to further increase the diversity of feature spaces, which enables
the model to better discriminate complex samples. Extensive experiments were
conducted on four datasets in terms of performance comparison, ablation study,
in-depth analysis and case study. The results demonstrated that FedCT
alleviates knowledge forgetting from both local and global views, which enables
it outperform state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 13:27:30 GMT"
}
] | 1,717,113,600,000 | [
[
"Qi",
"Zhuang",
""
],
[
"Meng",
"Lei",
""
],
[
"He",
"Weihao",
""
],
[
"Zhang",
"Ruohan",
""
],
[
"Wang",
"Yu",
""
],
[
"Qi",
"Xin",
""
],
[
"Meng",
"Xiangxu",
""
]
] |
2405.20121 | Dong Caiyin | Sun Zhanbo, Dong Caiyin, Ji Ang, Zhao Ruibin, Zhao Yu | A Structure-Aware Lane Graph Transformer Model for Vehicle Trajectory
Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate prediction of future trajectories for surrounding vehicles is vital
for the safe operation of autonomous vehicles. This study proposes a Lane Graph
Transformer (LGT) model with structure-aware capabilities. Its key contribution
lies in encoding the map topology structure into the attention mechanism. To
address variations in lane information from different directions, four Relative
Positional Encoding (RPE) matrices are introduced to capture the local details
of the map topology structure. Additionally, two Shortest Path Distance (SPD)
matrices are employed to capture distance information between two accessible
lanes. Numerical results indicate that the proposed LGT model achieves a
significantly higher prediction performance on the Argoverse 2 dataset.
Specifically, the minFDE$_6$ metric was decreased by 60.73% compared to the
Argoverse 2 baseline model (Nearest Neighbor) and the b-minFDE$_6$ metric was
reduced by 2.65% compared to the baseline LaneGCN model. Furthermore, ablation
experiments demonstrated that the consideration of map topology structure led
to a 4.24% drop in the b-minFDE$_6$ metric, validating the effectiveness of
this model.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 14:57:16 GMT"
}
] | 1,717,113,600,000 | [
[
"Zhanbo",
"Sun",
""
],
[
"Caiyin",
"Dong",
""
],
[
"Ang",
"Ji",
""
],
[
"Ruibin",
"Zhao",
""
],
[
"Yu",
"Zhao",
""
]
] |
2405.20138 | Toshio Suzuki | Fuki Ito, Toshio Suzuki | Separation and Collapse of Equilibria Inequalities on AND-OR Trees
without Shape Constraints | 42 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Herein, we investigate the randomized complexity, which is the least cost
against the worst input, of AND-OR tree computation by imposing various
restrictions on the algorithm to find the Boolean value of the root of that
tree and no restrictions on the tree shape. When a tree satisfies a certain
condition regarding its symmetry, directional algorithms proposed by Saks and
Wigderson (1986), special randomized algorithms, are known to achieve the
randomized complexity. Furthermore, there is a known example of a tree that is
so unbalanced that no directional algorithm achieves the randomized complexity
(Vereshchagin 1998). In this study, we aim to identify where deviations arise
between the general randomized Boolean decision tree and its special case,
directional algorithms. In this paper, we show that for any AND-OR tree,
randomized depth-first algorithms, which form a broader class compared with
directional algorithms, have the same equilibrium as that of the directional
algorithms. Thus, we get the collapse result on equilibria inequalities that
holds for an arbitrary AND-OR tree. This implies that there exists a case where
even depth-first algorithms cannot be the fastest, leading to the separation
result on equilibria inequality. Additionally, a new algorithm is introduced as
a key concept for proof of the separation result.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 15:13:46 GMT"
}
] | 1,717,113,600,000 | [
[
"Ito",
"Fuki",
""
],
[
"Suzuki",
"Toshio",
""
]
] |
2405.20142 | Jingjing Guo | Chao Zhang, Weirong Cui, and Jingjing Guo | MSSC-BiMamba: Multimodal Sleep Stage Classification and Early Diagnosis
of Sleep Disorders with Bidirectional Mamba | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monitoring sleep states is essential for evaluating sleep quality and
diagnosing sleep disorders. Traditional manual staging is time-consuming and
prone to subjective bias, often resulting in inconsistent outcomes. Here, we
developed an automated model for sleep staging and disorder classification to
enhance diagnostic accuracy and efficiency. Considering the characteristics of
polysomnography (PSG) multi-lead sleep monitoring, we designed a multimodal
sleep state classification model, MSSC-BiMamba, that combines an Efficient
Channel Attention (ECA) mechanism with a Bidirectional State Space Model
(BSSM). The ECA module allows for weighting data from different sensor
channels, thereby amplifying the influence of diverse sensor inputs.
Additionally, the implementation of bidirectional Mamba (BiMamba) enables the
model to effectively capture the multidimensional features and long-range
dependencies of PSG data. The developed model demonstrated impressive
performance on sleep stage classification tasks on both the ISRUC-S3 and
ISRUC-S1 datasets, respectively containing data with healthy and unhealthy
sleep patterns. Also, the model exhibited a high accuracy for sleep health
prediction when evaluated on a combined dataset consisting of ISRUC and
Sleep-EDF. Our model, which can effectively handle diverse sleep conditions, is
the first to apply BiMamba to sleep staging with multimodal PSG data, showing
substantial gains in computational and memory efficiency over traditional
Transformer-style models. This method enhances sleep health management by
making monitoring more accessible and extending advanced healthcare through
innovative technology.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 15:16:53 GMT"
},
{
"version": "v2",
"created": "Fri, 31 May 2024 03:31:23 GMT"
}
] | 1,717,372,800,000 | [
[
"Zhang",
"Chao",
""
],
[
"Cui",
"Weirong",
""
],
[
"Guo",
"Jingjing",
""
]
] |
2405.20202 | Ke Yi | Ke Yi, Yuhui Xu, Heng Chang, Chen Tang, Yuan Meng, Tong Zhang, Jia Li | One QuantLLM for ALL: Fine-tuning Quantized LLMs Once for Efficient
Deployments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have advanced rapidly but face significant
memory demands. While quantization has shown promise for LLMs, current methods
typically require lengthy training to alleviate the performance degradation
from quantization loss. However, deploying LLMs across diverse scenarios with
different resource constraints, e.g., servers and personal computers, requires
repeated training per application, which amplifies the lengthy training
problem. Given that, it is advantageous to train a once-for-all (OFA) supernet
capable of yielding diverse optimal subnets for downstream applications through
one-shot training. Nonetheless, the scale of current language models impedes
efficiency and amplifies interference from weight sharing between subnets. We
make an initial attempt to extend the once-for-all framework to large language
models. Specifically, we decouple shared weights to eliminate the interference
and incorporate Low-Rank adapters for training efficiency. Furthermore, we
observe the imbalance allocation of training resources from the traditional
uniform sampling. A non-parametric scheduler is introduced to adjust the
sampling rate for each quantization configuration, achieving a more balanced
allocation among subnets with varying demands. We validate the approach on
LLaMA2 families, and downstream evaluation confirms our ability to maintain
high performance while significantly reducing deployment time faced with
multiple scenarios.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 16:05:15 GMT"
}
] | 1,717,113,600,000 | [
[
"Yi",
"Ke",
""
],
[
"Xu",
"Yuhui",
""
],
[
"Chang",
"Heng",
""
],
[
"Tang",
"Chen",
""
],
[
"Meng",
"Yuan",
""
],
[
"Zhang",
"Tong",
""
],
[
"Li",
"Jia",
""
]
] |
2405.20234 | Cheng'an Wei | Cheng'an Wei, Kai Chen, Yue Zhao, Yujia Gong, Lu Xiang, and Shenchen
Zhu | Context Injection Attacks on Large Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) such as ChatGPT and Llama-2 have become
prevalent in real-world applications, exhibiting impressive text generation
performance. LLMs are fundamentally developed from a scenario where the input
data remains static and lacks a clear structure. To behave interactively over
time, LLM-based chat systems must integrate additional contextual information
(i.e., chat history) into their inputs, following a pre-defined structure. This
paper identifies how such integration can expose LLMs to misleading context
from untrusted sources and fail to differentiate between system and user
inputs, allowing users to inject context. We present a systematic methodology
for conducting context injection attacks aimed at eliciting disallowed
responses by introducing fabricated context. This could lead to illegal
actions, inappropriate content, or technology misuse. Our context fabrication
strategies, acceptance elicitation and word anonymization, effectively create
misleading contexts that can be structured with attacker-customized prompt
templates, achieving injection through malicious user messages. Comprehensive
evaluations on real-world LLMs such as ChatGPT and Llama-2 confirm the efficacy
of the proposed attack with success rates reaching 97%. We also discuss
potential countermeasures that can be adopted for attack detection and
developing more secure models. Our findings provide insights into the
challenges associated with the real-world deployment of LLMs for interactive
and structured data scenarios.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 16:36:47 GMT"
}
] | 1,717,113,600,000 | [
[
"Wei",
"Cheng'an",
""
],
[
"Chen",
"Kai",
""
],
[
"Zhao",
"Yue",
""
],
[
"Gong",
"Yujia",
""
],
[
"Xiang",
"Lu",
""
],
[
"Zhu",
"Shenchen",
""
]
] |
2405.20421 | Qianqi Yan | Qianqi Yan, Xuehai He, Xiang Yue, Xin Eric Wang | Worse than Random? An Embarrassingly Simple Probing Evaluation of Large
Multimodal Models in Medical VQA | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Multimodal Models (LMMs) have shown remarkable progress in the field of
medical Visual Question Answering (Med-VQA), achieving high accuracy on
existing benchmarks. However, their reliability under robust evaluation is
questionable. This study reveals that state-of-the-art models, when subjected
to simple probing evaluation, perform worse than random guessing on medical
diagnosis questions. To address this critical evaluation problem, we introduce
the Probing Evaluation for Medical Diagnosis (ProbMed) dataset to rigorously
assess LMM performance in medical imaging through probing evaluation and
procedural diagnosis. Particularly, probing evaluation features pairing
original questions with negation questions with hallucinated attributes, while
procedural diagnosis requires reasoning across various diagnostic dimensions
for each image, including modality recognition, organ identification, clinical
findings, abnormalities, and positional grounding. Our evaluation reveals that
top-performing models like GPT-4V and Gemini Pro perform worse than random
guessing on specialized diagnostic questions, indicating significant
limitations in handling fine-grained medical inquiries. Besides, models like
LLaVA-Med struggle even with more general questions, and results from CheXagent
demonstrate the transferability of expertise across different modalities of the
same organ, showing that specialized domain knowledge is still crucial for
improving performance. This study underscores the urgent need for more robust
evaluation to ensure the reliability of LMMs in critical fields like medical
diagnosis, and current LMMs are still far from applicable to those fields.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 18:56:01 GMT"
}
] | 1,717,372,800,000 | [
[
"Yan",
"Qianqi",
""
],
[
"He",
"Xuehai",
""
],
[
"Yue",
"Xiang",
""
],
[
"Wang",
"Xin Eric",
""
]
] |
2405.20487 | Yuta Kawakami | Yuta Kawakami, Manabu Kuroki, Jin Tian | Probabilities of Causation for Continuous and Vector Variables | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Probabilities of causation (PoC) are valuable concepts for explainable
artificial intelligence and practical decision-making. PoC are originally
defined for scalar binary variables. In this paper, we extend the concept of
PoC to continuous treatment and outcome variables, and further generalize PoC
to capture causal effects between multiple treatments and multiple outcomes. In
addition, we consider PoC for a sub-population and PoC with multi-hypothetical
terms to capture more sophisticated counterfactual information useful for
decision-making. We provide a nonparametric identification theorem for each
type of PoC we introduce. Finally, we illustrate the application of our results
on a real-world dataset about education.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 21:22:26 GMT"
}
] | 1,717,372,800,000 | [
[
"Kawakami",
"Yuta",
""
],
[
"Kuroki",
"Manabu",
""
],
[
"Tian",
"Jin",
""
]
] |
2405.20519 | Shreyas Kapur | Shreyas Kapur, Erik Jenner, Stuart Russell | Diffusion On Syntax Trees For Program Synthesis | https://tree-diffusion.github.io | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models generate code one token at a time. Their autoregressive
generation process lacks the feedback of observing the program's output.
Training LLMs to suggest edits directly can be challenging due to the scarcity
of rich edit data. To address these problems, we propose neural diffusion
models that operate on syntax trees of any context-free grammar. Similar to
image diffusion models, our method also inverts ``noise'' applied to syntax
trees. Rather than generating code sequentially, we iteratively edit it while
preserving syntactic validity, which makes it easy to combine this neural model
with search. We apply our approach to inverse graphics tasks, where our model
learns to convert images into programs that produce those images. Combined with
search, our model is able to write graphics programs, see the execution result,
and debug them to meet the required specifications. We additionally show how
our system can write graphics programs for hand-drawn sketches.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 22:31:16 GMT"
}
] | 1,717,372,800,000 | [
[
"Kapur",
"Shreyas",
""
],
[
"Jenner",
"Erik",
""
],
[
"Russell",
"Stuart",
""
]
] |
2405.20600 | Huiguang He | Kaicheng Fu, Changde Du, Xiaoyu Chen, Jie Peng, Huiguang He | Multi-label Class Incremental Emotion Decoding with Augmented Emotional
Semantics Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Emotion decoding plays an important role in affective human-computer
interaction. However, previous studies ignored the dynamic real-world scenario,
where human experience a blend of multiple emotions which are incrementally
integrated into the model, leading to the multi-label class incremental
learning (MLCIL) problem. Existing methods have difficulty in solving MLCIL
issue due to notorious catastrophic forgetting caused by partial label problem
and inadequate label semantics mining. In this paper, we propose an augmented
emotional semantics learning framework for multi-label class incremental
emotion decoding. Specifically, we design an augmented emotional relation graph
module with label disambiguation to handle the past-missing partial label
problem. Then, we leverage domain knowledge from affective dimension space to
alleviate future-missing partial label problem by knowledge distillation.
Besides, an emotional semantics learning module is constructed with a graph
autoencoder to obtain emotion embeddings in order to guide the
semantic-specific feature decoupling for better multi-label learning. Extensive
experiments on three datasets show the superiority of our method for improving
emotion decoding performance and mitigating forgetting on MLCIL problem.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 03:16:54 GMT"
}
] | 1,717,372,800,000 | [
[
"Fu",
"Kaicheng",
""
],
[
"Du",
"Changde",
""
],
[
"Chen",
"Xiaoyu",
""
],
[
"Peng",
"Jie",
""
],
[
"He",
"Huiguang",
""
]
] |
2405.20625 | Mudit Verma | Atharva Gundawar, Mudit Verma, Lin Guan, Karthik Valmeekam, Siddhant
Bhambri, Subbarao Kambhampati | Robust Planning with LLM-Modulo Framework: Case Study in Travel Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As the applicability of Large Language Models (LLMs) extends beyond
traditional text processing tasks, there is a burgeoning interest in their
potential to excel in planning and reasoning assignments, realms traditionally
reserved for System 2 cognitive competencies. Despite their perceived
versatility, the research community is still unraveling effective strategies to
harness these models in such complex domains. The recent discourse introduced
by the paper on LLM Modulo marks a significant stride, proposing a conceptual
framework that enhances the integration of LLMs into diverse planning and
reasoning activities. This workshop paper delves into the practical application
of this framework within the domain of travel planning, presenting a specific
instance of its implementation. We are using the Travel Planning benchmark by
the OSU NLP group, a benchmark for evaluating the performance of LLMs in
producing valid itineraries based on user queries presented in natural
language. While popular methods of enhancing the reasoning abilities of LLMs
such as Chain of Thought, ReAct, and Reflexion achieve a meager 0%, 0.6%, and
0% with GPT3.5-Turbo respectively, our operationalization of the LLM-Modulo
framework for TravelPlanning domain provides a remarkable improvement,
enhancing baseline performances by 4.6x for GPT4-Turbo and even more for older
models like GPT3.5-Turbo from 0% to 5%. Furthermore, we highlight the other
useful roles of LLMs in the planning pipeline, as suggested in LLM-Modulo,
which can be reliably operationalized such as extraction of useful critics and
reformulator for critics.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 05:23:35 GMT"
}
] | 1,717,372,800,000 | [
[
"Gundawar",
"Atharva",
""
],
[
"Verma",
"Mudit",
""
],
[
"Guan",
"Lin",
""
],
[
"Valmeekam",
"Karthik",
""
],
[
"Bhambri",
"Siddhant",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2405.20653 | Jiahao Yu | Jiahao Yu, Haozheng Luo, Jerry Yao-Chieh Hu, Wenbo Guo, Han Liu, Xinyu
Xing | Enhancing Jailbreak Attack Against Large Language Models through Silent
Tokens | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Along with the remarkable successes of Language language models, recent
research also started to explore the security threats of LLMs, including
jailbreaking attacks. Attackers carefully craft jailbreaking prompts such that
a target LLM will respond to the harmful question. Existing jailbreaking
attacks require either human experts or leveraging complicated algorithms to
craft jailbreaking prompts. In this paper, we introduce BOOST, a simple attack
that leverages only the eos tokens. We demonstrate that rather than
constructing complicated jailbreaking prompts, the attacker can simply append a
few eos tokens to the end of a harmful question. It will bypass the safety
alignment of LLMs and lead to successful jailbreaking attacks. We further apply
BOOST to four representative jailbreak methods and show that the attack success
rates of these methods can be significantly enhanced by simply adding eos
tokens to the prompt. To understand this simple but novel phenomenon, we
conduct empirical analyses. Our analysis reveals that adding eos tokens makes
the target LLM believe the input is much less harmful, and eos tokens have low
attention values and do not affect LLM's understanding of the harmful
questions, leading the model to actually respond to the questions. Our findings
uncover how fragile an LLM is against jailbreak attacks, motivating the
development of strong safety alignment approaches.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 07:41:03 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jun 2024 20:29:48 GMT"
}
] | 1,717,632,000,000 | [
[
"Yu",
"Jiahao",
""
],
[
"Luo",
"Haozheng",
""
],
[
"Hu",
"Jerry Yao-Chieh",
""
],
[
"Guo",
"Wenbo",
""
],
[
"Liu",
"Han",
""
],
[
"Xing",
"Xinyu",
""
]
] |
2405.20656 | Javier Naranjo-Alcazar | Javier Naranjo-Alcazar, Jordi Grau-Haro, Pedro Zuccarello, David
Almenar, Jesus Lopez-Ballester | Automatic Counting and Classification of Mosquito Eggs in Field Traps | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of the field traps where the mosquitoes insert their eggs is
vital to check that the sterile insect technique (SIT) is working properly.
This is because the number of hatched eggs may indicate that the sterile males
are not competing with the wild ones. Nowadays, the study of the traps is done
manually by microscope and is very time-consuming and prone to human error.
This paper presents an automatic trap survey. For this purpose, a device has
been designed that automatically scans the slat obtaining different overlapping
photos. Subsequently, the images are analyzed by a Mask-RCNN neural network
that segments the eggs and classifies them into 2 classes: full or hatch
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 07:48:48 GMT"
}
] | 1,717,372,800,000 | [
[
"Naranjo-Alcazar",
"Javier",
""
],
[
"Grau-Haro",
"Jordi",
""
],
[
"Zuccarello",
"Pedro",
""
],
[
"Almenar",
"David",
""
],
[
"Lopez-Ballester",
"Jesus",
""
]
] |
2405.20700 | Gecheng Chen | Gecheng Chen, Zeyu Yang, Chengwen Luo, Jianqiang Li | Self-degraded contrastive domain adaptation for industrial fault
diagnosis with bi-imbalanced data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modern industrial fault diagnosis tasks often face the combined challenge of
distribution discrepancy and bi-imbalance. Existing domain adaptation
approaches pay little attention to the prevailing bi-imbalance, leading to poor
domain adaptation performance or even negative transfer. In this work, we
propose a self-degraded contrastive domain adaptation (Sd-CDA) diagnosis
framework to handle the domain discrepancy under the bi-imbalanced data. It
first pre-trains the feature extractor via imbalance-aware contrastive learning
based on model pruning to learn the feature representation efficiently in a
self-supervised manner. Then it forces the samples away from the domain
boundary based on supervised contrastive domain adversarial learning
(SupCon-DA) and ensures the features generated by the feature extractor are
discriminative enough. Furthermore, we propose the pruned contrastive domain
adversarial learning (PSupCon-DA) to pay automatically re-weighted attention to
the minorities to enhance the performance towards bi-imbalanced data. We show
the superiority of the proposed method via two experiments.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 08:51:57 GMT"
}
] | 1,717,372,800,000 | [
[
"Chen",
"Gecheng",
""
],
[
"Yang",
"Zeyu",
""
],
[
"Luo",
"Chengwen",
""
],
[
"Li",
"Jianqiang",
""
]
] |
2405.20705 | S\"oren Schleibaum | S\"oren Schleibaum, Lu Feng, Sarit Kraus, J\"org P. M\"uller | ADESSE: Advice Explanations in Complex Repeated Decision-Making
Environments | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the evolving landscape of human-centered AI, fostering a synergistic
relationship between humans and AI agents in decision-making processes stands
as a paramount challenge. This work considers a problem setup where an
intelligent agent comprising a neural network-based prediction component and a
deep reinforcement learning component provides advice to a human decision-maker
in complex repeated decision-making environments. Whether the human
decision-maker would follow the agent's advice depends on their beliefs and
trust in the agent and on their understanding of the advice itself. To this
end, we developed an approach named ADESSE to generate explanations about the
adviser agent to improve human trust and decision-making. Computational
experiments on a range of environments with varying model sizes demonstrate the
applicability and scalability of ADESSE. Furthermore, an interactive game-based
user study shows that participants were significantly more satisfied, achieved
a higher reward in the game, and took less time to select an action when
presented with explanations generated by ADESSE. These findings illuminate the
critical role of tailored, human-centered explanations in AI-assisted
decision-making.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 08:59:20 GMT"
}
] | 1,717,372,800,000 | [
[
"Schleibaum",
"Sören",
""
],
[
"Feng",
"Lu",
""
],
[
"Kraus",
"Sarit",
""
],
[
"Müller",
"Jörg P.",
""
]
] |
2405.20978 | Felton Fang | Feiteng Fang, Yuelin Bai, Shiwen Ni, Min Yang, Xiaojun Chen and
Ruifeng Xu | Enhancing Noise Robustness of Retrieval-Augmented Language Models with
Adaptive Adversarial Training | null | ACL 2024, Main Conference | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) exhibit substantial capabilities yet encounter
challenges, including hallucination, outdated knowledge, and untraceable
reasoning processes. Retrieval-augmented generation (RAG) has emerged as a
promising solution, integrating knowledge from external databases to mitigate
these challenges. However, inappropriate retrieved passages can potentially
hinder the LLMs' capacity to generate comprehensive and high-quality responses.
Prior RAG studies on the robustness of retrieval noises often confine
themselves to a limited set of noise types, deviating from real-world retrieval
environments and limiting practical applicability. In this study, we initially
investigate retrieval noises and categorize them into three distinct types,
reflecting real-world environments. We analyze the impact of these various
retrieval noises on the robustness of LLMs. Subsequently, we propose a novel
RAG approach known as Retrieval-augmented Adaptive Adversarial Training (RAAT).
RAAT leverages adaptive adversarial training to dynamically adjust the model's
training process in response to retrieval noises. Concurrently, it employs
multi-task learning to ensure the model's capacity to internally recognize
noisy contexts. Extensive experiments demonstrate that the LLaMA-2 7B model
trained using RAAT exhibits significant improvements in F1 and EM scores under
diverse noise conditions. For reproducibility, we release our code and data at:
https://github.com/calubkk/RAAT.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 16:24:53 GMT"
}
] | 1,717,372,800,000 | [
[
"Fang",
"Feiteng",
""
],
[
"Bai",
"Yuelin",
""
],
[
"Ni",
"Shiwen",
""
],
[
"Yang",
"Min",
""
],
[
"Chen",
"Xiaojun",
""
],
[
"Xu",
"Ruifeng",
""
]
] |
2405.21030 | Benjamin Levinstein | Daniel A. Herrmann and Benjamin A. Levinstein | Standards for Belief Representations in LLMs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) continue to demonstrate remarkable abilities
across various domains, computer scientists are developing methods to
understand their cognitive processes, particularly concerning how (and if) LLMs
internally represent their beliefs about the world. However, this field
currently lacks a unified theoretical foundation to underpin the study of
belief in LLMs. This article begins filling this gap by proposing adequacy
conditions for a representation in an LLM to count as belief-like. We argue
that, while the project of belief measurement in LLMs shares striking features
with belief measurement as carried out in decision theory and formal
epistemology, it also differs in ways that should change how we measure belief.
Thus, drawing from insights in philosophy and contemporary practices of machine
learning, we establish four criteria that balance theoretical considerations
with practical constraints. Our proposed criteria include accuracy, coherence,
uniformity, and use, which together help lay the groundwork for a comprehensive
understanding of belief representation in LLMs. We draw on empirical work
showing the limitations of using various criteria in isolation to identify
belief representations.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 17:21:52 GMT"
}
] | 1,717,372,800,000 | [
[
"Herrmann",
"Daniel A.",
""
],
[
"Levinstein",
"Benjamin A.",
""
]
] |
2406.00216 | Michail Mamalakis Dr | Michail Mamalakis, H\'elo\"ise de Vareilles, Graham Murray, Pietro
Lio, John Suckling | The Explanation Necessity for Healthcare AI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Explainability is often critical to the acceptable implementation of
artificial intelligence (AI). Nowhere is this more important than healthcare
where decision-making directly impacts patients and trust in AI systems is
essential. This trust is often built on the explanations and interpretations
the AI provides. Despite significant advancements in AI interpretability, there
remains the need for clear guidelines on when and to what extent explanations
are necessary in the medical context. We propose a novel categorization system
with four distinct classes of explanation necessity, guiding the level of
explanation required: patient or sample (local) level, cohort or dataset
(global) level, or both levels. We introduce a mathematical formulation that
distinguishes these categories and offers a practical framework for researchers
to determine the necessity and depth of explanations required in medical AI
applications. Three key factors are considered: the robustness of the
evaluation protocol, the variability of expert observations, and the
representation dimensionality of the application. In this perspective, we
address the question: When does an AI medical application need to be explained,
and at what level of detail?
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 22:20:10 GMT"
}
] | 1,717,459,200,000 | [
[
"Mamalakis",
"Michail",
""
],
[
"de Vareilles",
"Héloïse",
""
],
[
"Murray",
"Graham",
""
],
[
"Lio",
"Pietro",
""
],
[
"Suckling",
"John",
""
]
] |
2406.00392 | Jonathan Cook | Jonathan Cook, Chris Lu, Edward Hughes, Joel Z. Leibo, Jakob Foerster | Artificial Generational Intelligence: Cultural Accumulation in
Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cultural accumulation drives the open-ended and diverse progress in
capabilities spanning human history. It builds an expanding body of knowledge
and skills by combining individual exploration with inter-generational
information transmission. Despite its widespread success among humans, the
capacity for artificial learning agents to accumulate culture remains
under-explored. In particular, approaches to reinforcement learning typically
strive for improvements over only a single lifetime. Generational algorithms
that do exist fail to capture the open-ended, emergent nature of cultural
accumulation, which allows individuals to trade-off innovation and imitation.
Building on the previously demonstrated ability for reinforcement learning
agents to perform social learning, we find that training setups which balance
this with independent learning give rise to cultural accumulation. These
accumulating agents outperform those trained for a single lifetime with the
same cumulative experience. We explore this accumulation by constructing two
models under two distinct notions of a generation: episodic generations, in
which accumulation occurs via in-context learning and train-time generations,
in which accumulation occurs via in-weights learning. In-context and in-weights
cultural accumulation can be interpreted as analogous to knowledge and skill
accumulation, respectively. To the best of our knowledge, this work is the
first to present general models that achieve emergent cultural accumulation in
reinforcement learning, opening up new avenues towards more open-ended learning
systems, as well as presenting new opportunities for modelling human culture.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2024 10:33:32 GMT"
}
] | 1,717,459,200,000 | [
[
"Cook",
"Jonathan",
""
],
[
"Lu",
"Chris",
""
],
[
"Hughes",
"Edward",
""
],
[
"Leibo",
"Joel Z.",
""
],
[
"Foerster",
"Jakob",
""
]
] |
2406.00415 | Xuan Wu | Xuan Wu, Di Wang, Lijie Wen, Yubin Xiao, Chunguo Wu, Yuesong Wu,
Chaoyu Yu, Douglas L. Maskell, and You Zhou | Neural Combinatorial Optimization Algorithms for Solving Vehicle Routing
Problems: A Comprehensive Survey with Perspectives | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although several surveys on Neural Combinatorial Optimization (NCO) solvers
specifically designed to solve Vehicle Routing Problems (VRPs) have been
conducted. These existing surveys did not cover the state-of-the-art (SOTA) NCO
solvers emerged recently. More importantly, to provide a comprehensive taxonomy
of NCO solvers with up-to-date coverage, based on our thorough review of
relevant publications and preprints, we divide all NCO solvers into four
distinct categories, namely Learning to Construct, Learning to Improve,
Learning to Predict-Once, and Learning to Predict-Multiplicity solvers.
Subsequently, we present the inadequacies of the SOTA solvers, including poor
generalization, incapability to solve large-scale VRPs, inability to address
most types of VRP variants simultaneously, and difficulty in comparing these
NCO solvers with the conventional Operations Research algorithms.
Simultaneously, we propose promising and viable directions to overcome these
inadequacies. In addition, we compare the performance of representative NCO
solvers from the Reinforcement, Supervised, and Unsupervised Learning paradigms
across both small- and large-scale VRPs. Finally, following the proposed
taxonomy, we provide an accompanying web page as a live repository for NCO
solvers. Through this survey and the live repository, we hope to make the
research community of NCO solvers for VRPs more thriving.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2024 12:18:39 GMT"
}
] | 1,717,459,200,000 | [
[
"Wu",
"Xuan",
""
],
[
"Wang",
"Di",
""
],
[
"Wen",
"Lijie",
""
],
[
"Xiao",
"Yubin",
""
],
[
"Wu",
"Chunguo",
""
],
[
"Wu",
"Yuesong",
""
],
[
"Yu",
"Chaoyu",
""
],
[
"Maskell",
"Douglas L.",
""
],
[
"Zhou",
"You",
""
]
] |
2406.00537 | Lucas Vieira | Lucas Valadares Vieira, Mara Abel, Fabricio Henrique Rodrigues, Tiago
Prince Sales, Claudenir M. Fonseca | Towards an ontology of portions of matter to support multi-scale
analysis and provenance tracking | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents an ontology of portions of matter with practical
implications across scientific and industrial domains. The ontology is
developed under the Unified Foundational Ontology (UFO), which uses the concept
of quantity to represent topologically maximally self-connected portions of
matter. The proposed ontology introduces the granuleOf parthood relation,
holding between objects and portions of matter. It also discusses the
constitution of quantities by collections of granules, the representation of
sub-portions of matter, and the tracking of matter provenance between
quantities using historical relations. Lastly, a case study is presented to
demonstrate the use of the portion of matter ontology in the geology domain for
an Oil & Gas industry application. In the case study, we model how to represent
the historical relation between an original portion of rock and the
sub-portions created during the industrial process. Lastly, future research
directions are outlined, including investigating granularity levels and
defining a taxonomy of events.
| [
{
"version": "v1",
"created": "Sat, 1 Jun 2024 19:26:21 GMT"
}
] | 1,717,459,200,000 | [
[
"Vieira",
"Lucas Valadares",
""
],
[
"Abel",
"Mara",
""
],
[
"Rodrigues",
"Fabricio Henrique",
""
],
[
"Sales",
"Tiago Prince",
""
],
[
"Fonseca",
"Claudenir M.",
""
]
] |
2406.01131 | Jan Deriu | Pius von D\"aniken, Jan Deriu, Don Tuggener, Mark Cieliebak | Favi-Score: A Measure for Favoritism in Automated Preference Ratings for
Generative AI Evaluation | Accepted at ACL Main Conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative AI systems have become ubiquitous for all kinds of modalities,
which makes the issue of the evaluation of such models more pressing. One
popular approach is preference ratings, where the generated outputs of
different systems are shown to evaluators who choose their preferences. In
recent years the field shifted towards the development of automated (trained)
metrics to assess generated outputs, which can be used to create preference
ratings automatically. In this work, we investigate the evaluation of the
metrics themselves, which currently rely on measuring the correlation to human
judgments or computing sign accuracy scores.
These measures only assess how well the metric agrees with the human ratings.
However, our research shows that this does not tell the whole story. Most
metrics exhibit a disagreement with human system assessments which is often
skewed in favor of particular text generation systems, exposing a degree of
favoritism in automated metrics. This paper introduces a formal definition of
favoritism in preference metrics, and derives the Favi-Score, which measures
this phenomenon. In particular we show that favoritism is strongly related to
errors in final system rankings. Thus, we propose that preference-based metrics
ought to be evaluated on both sign accuracy scores and favoritism.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 09:20:46 GMT"
}
] | 1,717,459,200,000 | [
[
"von Däniken",
"Pius",
""
],
[
"Deriu",
"Jan",
""
],
[
"Tuggener",
"Don",
""
],
[
"Cieliebak",
"Mark",
""
]
] |
2406.01139 | Thomas Bolander | Thomas Bolander, Alessandro Burigana, Marco Montali | Depth-Bounded Epistemic Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a novel algorithm for epistemic planning based on
dynamic epistemic logic (DEL). The novelty is that we limit the depth of
reasoning of the planning agent to an upper bound b, meaning that the planning
agent can only reason about higher-order knowledge to at most (modal) depth b.
The algorithm makes use of a novel type of canonical b-bisimulation contraction
guaranteeing unique minimal models with respect to b-bisimulation. We show our
depth-bounded planning algorithm to be sound. Additionally, we show it to be
complete with respect to planning tasks having a solution within bound b of
reasoning depth (and hence the iterative bound-deepening variant is complete in
the standard sense). For bound b of reasoning depth, the algorithm is shown to
be (b + 1)-EXPTIME complete, and furthermore fixed-parameter tractable in the
number of agents and atoms. We present both a tree search and a graph search
variant of the algorithm, and we benchmark an implementation of the tree search
version against a baseline epistemic planner.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 09:30:28 GMT"
}
] | 1,717,459,200,000 | [
[
"Bolander",
"Thomas",
""
],
[
"Burigana",
"Alessandro",
""
],
[
"Montali",
"Marco",
""
]
] |
2406.01140 | Qinggang Zhang | Qinggang Zhang, Keyu Duan, Junnan Dong, Pai Zheng, Xiao Huang | Logical Reasoning with Relation Network for Inductive Knowledge Graph
Completion | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Inductive knowledge graph completion (KGC) aims to infer the missing relation
for a set of newly-coming entities that never appeared in the training set.
Such a setting is more in line with reality, as real-world KGs are constantly
evolving and introducing new knowledge. Recent studies have shown promising
results using message passing over subgraphs to embed newly-coming entities for
inductive KGC. However, the inductive capability of these methods is usually
limited by two key issues. (i) KGC always suffers from data sparsity, and the
situation is even exacerbated in inductive KGC where new entities often have
few or no connections to the original KG. (ii) Cold-start problem. It is over
coarse-grained for accurate KG reasoning to generate representations for new
entities by gathering the local information from few neighbors. To this end, we
propose a novel iNfOmax RelAtion Network, namely NORAN, for inductive KG
completion. It aims to mine latent relation patterns for inductive KG
completion. Specifically, by centering on relations, NORAN provides a hyper
view towards KG modeling, where the correlations between relations can be
naturally captured as entity-independent logical evidence to conduct inductive
KGC. Extensive experiment results on five benchmarks show that our framework
substantially outperforms the state-of-the-art KGC methods.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 09:30:43 GMT"
}
] | 1,717,459,200,000 | [
[
"Zhang",
"Qinggang",
""
],
[
"Duan",
"Keyu",
""
],
[
"Dong",
"Junnan",
""
],
[
"Zheng",
"Pai",
""
],
[
"Huang",
"Xiao",
""
]
] |
2406.01377 | Weihao Zeng | Weihao Zeng, Joseph Campbell, Simon Stepputtis, Katia Sycara | Multi-Agent Transfer Learning via Temporal Contrastive Learning | 6 pages, 6 figures | 2024 IEEE International Conference on Robotics and Automation
(ICRA) 2024 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel transfer learning framework for deep
multi-agent reinforcement learning. The approach automatically combines
goal-conditioned policies with temporal contrastive learning to discover
meaningful sub-goals. The approach involves pre-training a goal-conditioned
agent, finetuning it on the target domain, and using contrastive learning to
construct a planning graph that guides the agent via sub-goals. Experiments on
multi-agent coordination Overcooked tasks demonstrate improved sample
efficiency, the ability to solve sparse-reward and long-horizon problems, and
enhanced interpretability compared to baselines. The results highlight the
effectiveness of integrating goal-conditioned policies with unsupervised
temporal abstraction learning for complex multi-agent transfer learning.
Compared to state-of-the-art baselines, our method achieves the same or better
performances while requiring only 21.7% of the training samples.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 14:42:14 GMT"
}
] | 1,717,459,200,000 | [
[
"Zeng",
"Weihao",
""
],
[
"Campbell",
"Joseph",
""
],
[
"Stepputtis",
"Simon",
""
],
[
"Sycara",
"Katia",
""
]
] |
2406.01759 | Christoph Wehner | Christoph Wehner and Chrysa Iliopoulou and Tarek R. Besold | From Latent to Lucid: Transforming Knowledge Graph Embeddings into
Interpretable Structures | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a post-hoc explainable AI method tailored for Knowledge
Graph Embedding models. These models are essential to Knowledge Graph
Completion yet criticized for their opaque, black-box nature. Despite their
significant success in capturing the semantics of knowledge graphs through
high-dimensional latent representations, their inherent complexity poses
substantial challenges to explainability. Unlike existing methods, our approach
directly decodes the latent representations encoded by Knowledge Graph
Embedding models, leveraging the principle that similar embeddings reflect
similar behaviors within the Knowledge Graph. By identifying distinct
structures within the subgraph neighborhoods of similarly embedded entities,
our method identifies the statistical regularities on which the models rely and
translates these insights into human-understandable symbolic rules and facts.
This bridges the gap between the abstract representations of Knowledge Graph
Embedding models and their predictive outputs, offering clear, interpretable
insights. Key contributions include a novel post-hoc explainable AI method for
Knowledge Graph Embedding models that provides immediate, faithful explanations
without retraining, facilitating real-time application even on large-scale
knowledge graphs. The method's flexibility enables the generation of
rule-based, instance-based, and analogy-based explanations, meeting diverse
user needs. Extensive evaluations show our approach's effectiveness in
delivering faithful and well-localized explanations, enhancing the transparency
and trustworthiness of Knowledge Graph Embedding models.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 19:54:11 GMT"
}
] | 1,717,545,600,000 | [
[
"Wehner",
"Christoph",
""
],
[
"Iliopoulou",
"Chrysa",
""
],
[
"Besold",
"Tarek R.",
""
]
] |
2406.02103 | Nir Greshler | Nir Greshler, David Ben Eli, Carmel Rabinovitz, Gabi Guetta, Liran
Gispan, Guy Zohar, Aviv Tamar | A Bayesian Approach to Online Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The combination of Monte Carlo tree search and neural networks has
revolutionized online planning. As neural network approximations are often
imperfect, we ask whether uncertainty estimates about the network outputs could
be used to improve planning. We develop a Bayesian planning approach that
facilitates such uncertainty quantification, inspired by classical ideas from
the meta-reasoning literature. We propose a Thompson sampling based algorithm
for searching the tree of possible actions, for which we prove the first (to
our knowledge) finite time Bayesian regret bound, and propose an efficient
implementation for a restricted family of posterior distributions. In addition
we propose a variant of the Bayes-UCB method applied to trees. Empirically, we
demonstrate that on the ProcGen Maze and Leaper environments, when the
uncertainty estimates are accurate but the neural network output is inaccurate,
our Bayesian approach searches the tree much more effectively. In addition, we
investigate whether popular uncertainty estimation methods are accurate enough
to yield significant gains in planning. Our code is available at:
https://github.com/nirgreshler/bayesian-online-planning.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2024 08:33:17 GMT"
}
] | 1,717,545,600,000 | [
[
"Greshler",
"Nir",
""
],
[
"Eli",
"David Ben",
""
],
[
"Rabinovitz",
"Carmel",
""
],
[
"Guetta",
"Gabi",
""
],
[
"Gispan",
"Liran",
""
],
[
"Zohar",
"Guy",
""
],
[
"Tamar",
"Aviv",
""
]
] |
2406.02205 | Jiapu Wang | Kai Sun, Jiapu Wang, Huajie Jiang, Yongli Hu, Baocai Yin | Query-Enhanced Adaptive Semantic Path Reasoning for Inductive Knowledge
Graph Completion | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional Knowledge graph completion (KGC) methods aim to infer missing
information in incomplete Knowledge Graphs (KGs) by leveraging existing
information, which struggle to perform effectively in scenarios involving
emerging entities. Inductive KGC methods can handle the emerging entities and
relations in KGs, offering greater dynamic adaptability. While existing
inductive KGC methods have achieved some success, they also face challenges,
such as susceptibility to noisy structural information during reasoning and
difficulty in capturing long-range dependencies in reasoning paths. To address
these challenges, this paper proposes the Query-Enhanced Adaptive Semantic Path
Reasoning (QASPR) framework, which simultaneously captures both the structural
and semantic information of KGs to enhance the inductive KGC task.
Specifically, the proposed QASPR employs a query-dependent masking module to
adaptively mask noisy structural information while retaining important
information closely related to the targets. Additionally, QASPR introduces a
global semantic scoring module that evaluates both the individual contributions
and the collective impact of nodes along the reasoning path within KGs. The
experimental results demonstrate that QASPR achieves state-of-the-art
performance.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2024 11:02:15 GMT"
}
] | 1,717,545,600,000 | [
[
"Sun",
"Kai",
""
],
[
"Wang",
"Jiapu",
""
],
[
"Jiang",
"Huajie",
""
],
[
"Hu",
"Yongli",
""
],
[
"Yin",
"Baocai",
""
]
] |
2406.02235 | Tuan Dam | Tuan Dam and Odalric-Ambrym Maillard and Emilie Kaufmann | Power Mean Estimation in Stochastic Monte-Carlo Tree_Search | UAI 2024 conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Monte-Carlo Tree Search (MCTS) is a widely-used strategy for online planning
that combines Monte-Carlo sampling with forward tree search. Its success relies
on the Upper Confidence bound for Trees (UCT) algorithm, an extension of the
UCB method for multi-arm bandits. However, the theoretical foundation of UCT is
incomplete due to an error in the logarithmic bonus term for action selection,
leading to the development of Fixed-Depth-MCTS with a polynomial exploration
bonus to balance exploration and exploitation~\citep{shah2022journal}. Both UCT
and Fixed-Depth-MCTS suffer from biased value estimation: the weighted sum
underestimates the optimal value, while the maximum valuation overestimates
it~\citep{coulom2006efficient}. The power mean estimator offers a balanced
solution, lying between the average and maximum values.
Power-UCT~\citep{dam2019generalized} incorporates this estimator for more
accurate value estimates but its theoretical analysis remains incomplete. This
paper introduces Stochastic-Power-UCT, an MCTS algorithm using the power mean
estimator and tailored for stochastic MDPs. We analyze its polynomial
convergence in estimating root node values and show that it shares the same
convergence rate of $\mathcal{O}(n^{-1/2})$, with $n$ is the number of visited
trajectories, as Fixed-Depth-MCTS, with the latter being a special case of the
former. Our theoretical results are validated with empirical tests across
various stochastic MDP environments.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2024 11:56:37 GMT"
}
] | 1,717,545,600,000 | [
[
"Dam",
"Tuan",
""
],
[
"Maillard",
"Odalric-Ambrym",
""
],
[
"Kaufmann",
"Emilie",
""
]
] |
2406.02723 | Shiqi Zhang | Shiqi Zhang, Darshan Gadginmath, Fabio Pasqualetti | Predicting AI Agent Behavior through Approximation of the
Perron-Frobenius Operator | 12 pages, 4 figures, conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Predicting the behavior of AI-driven agents is particularly challenging
without a preexisting model. In our paper, we address this by treating AI
agents as nonlinear dynamical systems and adopting a probabilistic perspective
to predict their statistical behavior using the Perron-Frobenius (PF) operator.
We formulate the approximation of the PF operator as an entropy minimization
problem, which can be solved by leveraging the Markovian property of the
operator and decomposing its spectrum. Our data-driven methodology
simultaneously approximates the PF operator to perform prediction of the
evolution of the agents and also predicts the terminal probability density of
AI agents, such as robotic systems and generative models. We demonstrate the
effectiveness of our prediction model through extensive experiments on
practical systems driven by AI algorithms.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2024 19:06:49 GMT"
}
] | 1,717,632,000,000 | [
[
"Zhang",
"Shiqi",
""
],
[
"Gadginmath",
"Darshan",
""
],
[
"Pasqualetti",
"Fabio",
""
]
] |
2406.03000 | Yaacov Pariente | Yaacov Pariente, Vadim Indelman | Simplification of Risk Averse POMDPs with Performance Guarantees | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Risk averse decision making under uncertainty in partially observable domains
is a fundamental problem in AI and essential for reliable autonomous agents. In
our case, the problem is modeled using partially observable Markov decision
processes (POMDPs), when the value function is the conditional value at risk
(CVaR) of the return. Calculating an optimal solution for POMDPs is
computationally intractable in general. In this work we develop a
simplification framework to speedup the evaluation of the value function, while
providing performance guarantees. We consider as simplification a
computationally cheaper belief-MDP transition model, that can correspond, e.g.,
to cheaper observation or transition models. Our contributions include general
bounds for CVaR that allow bounding the CVaR of a random variable X, using a
random variable Y, by assuming bounds between their cumulative distributions.
We then derive bounds for the CVaR value function in a POMDP setting, and show
how to bound the value function using the computationally cheaper belief-MDP
transition model and without accessing the computationally expensive model in
real-time. Then, we provide theoretical performance guarantees for the
estimated bounds. Our results apply for a general simplification of a
belief-MDP transition model and support simplification of both the observation
and state transition models simultaneously.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 07:05:52 GMT"
}
] | 1,717,632,000,000 | [
[
"Pariente",
"Yaacov",
""
],
[
"Indelman",
"Vadim",
""
]
] |
2406.03069 | Muhan Hou | Muhan Hou, Koen Hindriks, A.E. Eiben, Kim Baraka | "Give Me an Example Like This": Episodic Active Reinforcement Learning
from Demonstrations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning (RL) has achieved great success in sequential
decision-making problems, but often at the cost of a large number of
agent-environment interactions. To improve sample efficiency, methods like
Reinforcement Learning from Expert Demonstrations (RLED) introduce external
expert demonstrations to facilitate agent exploration during the learning
process. In practice, these demonstrations, which are often collected from
human users, are costly and hence often constrained to a limited amount. How to
select the best set of human demonstrations that is most beneficial for
learning therefore becomes a major concern. This paper presents EARLY (Episodic
Active Learning from demonstration querY), an algorithm that enables a learning
agent to generate optimized queries of expert demonstrations in a
trajectory-based feature space. Based on a trajectory-level estimate of
uncertainty in the agent's current policy, EARLY determines the optimized
timing and content for feature-based queries. By querying episodic
demonstrations as opposed to isolated state-action pairs, EARLY improves the
human teaching experience and achieves better learning performance. We validate
the effectiveness of our method in three simulated navigation tasks of
increasing difficulty. The results show that our method is able to achieve
expert-level performance for all three tasks with convergence over 30\% faster
than other baseline methods when demonstrations are generated by simulated
oracle policies. The results of a follow-up pilot user study (N=18) further
validate that our method can still maintain a significantly better convergence
in the case of human expert demonstrators while achieving a better user
experience in perceived task load and consuming significantly less human time.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 08:52:21 GMT"
}
] | 1,717,632,000,000 | [
[
"Hou",
"Muhan",
""
],
[
"Hindriks",
"Koen",
""
],
[
"Eiben",
"A. E.",
""
],
[
"Baraka",
"Kim",
""
]
] |
2406.03091 | Sabah Binte Noor | Sabah Binte Noor and Fazlul Hasan Siddiqui | Improving Plan Execution Flexibility using Block-Substitution | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Partial-order plans in AI planning facilitate execution flexibility due to
their less-constrained nature. Maximizing plan flexibility has been studied
through the notions of plan deordering, and plan reordering. Plan deordering
removes unnecessary action orderings within a plan, while plan reordering
modifies them arbitrarily to minimize action orderings. This study, in contrast
with traditional plan deordering and reordering strategies, improves a plan's
flexibility by substituting its subplans with actions outside the plan for a
planning problem. We exploit block deordering, which eliminates orderings in a
POP by encapsulating coherent actions in blocks, to construct action blocks as
candidate subplans for substitutions. In addition, this paper introduces a
pruning technique for eliminating redundant actions within a BDPO plan. We also
evaluate our approach when combined with MaxSAT-based reorderings. Our
experimental result demonstrates a significant improvement in plan execution
flexibility on the benchmark problems from International Planning Competitions
(IPC), maintaining good coverage and execution time.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 09:30:48 GMT"
}
] | 1,717,632,000,000 | [
[
"Noor",
"Sabah Binte",
""
],
[
"Siddiqui",
"Fazlul Hasan",
""
]
] |
2406.03292 | Giuseppe Primiero | Greta Coraglia and Francesco A. Genco and Pellegrino Piantadosi and
Enrico Bagli and Pietro Giuffrida and Davide Posillipo and Giuseppe Primiero | Evaluating AI fairness in credit scoring with the BRIO tool | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a method for quantitative, in-depth analyses of fairness issues in
AI systems with an application to credit scoring. To this aim we use BRIO, a
tool for the evaluation of AI systems with respect to social unfairness and,
more in general, ethically undesirable behaviours. It features a model-agnostic
bias detection module, presented in \cite{DBLP:conf/beware/CoragliaDGGPPQ23},
to which a full-fledged unfairness risk evaluation module is added. As a case
study, we focus on the context of credit scoring, analysing the UCI German
Credit Dataset \cite{misc_statlog_(german_credit_data)_144}. We apply the BRIO
fairness metrics to several, socially sensitive attributes featured in the
German Credit Dataset, quantifying fairness across various demographic
segments, with the aim of identifying potential sources of bias and
discrimination in a credit scoring model. We conclude by combining our results
with a revenue analysis.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 14:00:46 GMT"
}
] | 1,717,632,000,000 | [
[
"Coraglia",
"Greta",
""
],
[
"Genco",
"Francesco A.",
""
],
[
"Piantadosi",
"Pellegrino",
""
],
[
"Bagli",
"Enrico",
""
],
[
"Giuffrida",
"Pietro",
""
],
[
"Posillipo",
"Davide",
""
],
[
"Primiero",
"Giuseppe",
""
]
] |
2406.03367 | Yangfan Wu | Xinrui Lin, Yangfan Wu, Huanyu Yang, Yu Zhang, Yanyong Zhang, Jianmin
Ji | CLMASP: Coupling Large Language Models with Answer Set Programming for
Robotic Task Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) possess extensive foundational knowledge and
moderate reasoning abilities, making them suitable for general task planning in
open-world scenarios. However, it is challenging to ground a LLM-generated plan
to be executable for the specified robot with certain restrictions. This paper
introduces CLMASP, an approach that couples LLMs with Answer Set Programming
(ASP) to overcome the limitations, where ASP is a non-monotonic logic
programming formalism renowned for its capacity to represent and reason about a
robot's action knowledge. CLMASP initiates with a LLM generating a basic
skeleton plan, which is subsequently tailored to the specific scenario using a
vector database. This plan is then refined by an ASP program with a robot's
action knowledge, which integrates implementation details into the skeleton,
grounding the LLM's abstract outputs in practical robot contexts. Our
experiments conducted on the VirtualHome platform demonstrate CLMASP's
efficacy. Compared to the baseline executable rate of under 2% with LLM
approaches, CLMASP significantly improves this to over 90%.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 15:21:44 GMT"
}
] | 1,717,632,000,000 | [
[
"Lin",
"Xinrui",
""
],
[
"Wu",
"Yangfan",
""
],
[
"Yang",
"Huanyu",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Yanyong",
""
],
[
"Ji",
"Jianmin",
""
]
] |
2406.03501 | Roman Slowinski Prof. | Salvatore Greco and Roman S{\l}owi\'nski | Representation of preferences for multiple criteria decision aiding in a
new seven-valued logic | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The seven-valued logic considered in this paper naturally arises within the
rough set framework, allowing to distinguish vagueness due to imprecision from
ambiguity due to coarseness. Recently, we discussed its utility for reasoning
about data describing multi-attribute classification of objects. We also showed
that this logic contains, as a particular case, the celebrated Belnap
four-valued logic. Here, we present how the seven-valued logic, as well as the
other logics that derive from it, can be used to represent preferences in the
domain of Multiple Criteria Decision Aiding (MCDA). In particular, we propose
new forms of outranking and value function preference models that aggregate
multiple criteria taking into account imperfect preference information. We
demonstrate that our approach effectively addresses common challenges in
preference modeling for MCDA, such as uncertainty, imprecision, and
ill-determination of performances and preferences. To this end, we present a
specific procedure to construct a seven-valued preference relation and use it
to define recommendations that consider robustness concerns by utilizing
multiple outranking or value functions representing the decision maker s
preferences. Moreover, we discuss the main properties of the proposed
seven-valued preference structure and compare it with current approaches in
MCDA, such as ordinal regression, robust ordinal regression, stochastic
multiattribute acceptability analysis, stochastic ordinal regression, and so
on. We illustrate and discuss the application of our approach using a didactic
example. Finally, we propose directions for future research and potential
applications of the proposed methodology.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 18:59:24 GMT"
}
] | 1,717,718,400,000 | [
[
"Greco",
"Salvatore",
""
],
[
"Słowiński",
"Roman",
""
]
] |
2406.04028 | Yoann Poupart | Yoann Poupart | Contrastive Sparse Autoencoders for Interpreting Planning of
Chess-Playing Agents | Worskhop on Interpretable Policies in Reinforcement Learning @
RLC-2024, 18 pages and 15 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI led chess systems to a superhuman level, yet these systems heavily rely on
black-box algorithms. This is unsustainable in ensuring transparency to the
end-user, particularly when these systems are responsible for sensitive
decision-making. Recent interpretability work has shown that the inner
representations of Deep Neural Networks (DNNs) were fathomable and contained
human-understandable concepts. Yet, these methods are seldom contextualised and
are often based on a single hidden state, which makes them unable to interpret
multi-step reasoning, e.g. planning. In this respect, we propose contrastive
sparse autoencoders (CSAE), a novel framework for studying pairs of game
trajectories. Using CSAE, we are able to extract and interpret concepts that
are meaningful to the chess-agent plans. We primarily focused on a qualitative
analysis of the CSAE features before proposing an automated feature taxonomy.
Furthermore, to evaluate the quality of our trained CSAE, we devise sanity
checks to wave spurious correlations in our results.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2024 12:57:31 GMT"
}
] | 1,717,718,400,000 | [
[
"Poupart",
"Yoann",
""
]
] |
2406.04082 | Lovis Heindrich | Lovis Heindrich, Falk Lieder | Leveraging automatic strategy discovery to teach people how to select
better projects | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The decisions of individuals and organizations are often suboptimal because
normative decision strategies are too demanding in the real world. Recent work
suggests that some errors can be prevented by leveraging artificial
intelligence to discover and teach prescriptive decision strategies that take
people's constraints into account. So far, this line of research has been
limited to simplified decision problems. This article is the first to extend
this approach to a real-world decision problem, namely project selection. We
develop a computational method (MGPS) that automatically discovers project
selection strategies that are optimized for real people and develop an
intelligent tutor that teaches the discovered strategies. We evaluated MGPS on
a computational benchmark and tested the intelligent tutor in a training
experiment with two control conditions. MGPS outperformed a state-of-the-art
method and was more computationally efficient. Moreover, the intelligent tutor
significantly improved people's decision strategies. Our results indicate that
our method can improve human decision-making in naturalistic settings similar
to real-world project selection, a first step towards applying strategy
discovery to the real world.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2024 13:51:44 GMT"
}
] | 1,717,718,400,000 | [
[
"Heindrich",
"Lovis",
""
],
[
"Lieder",
"Falk",
""
]
] |
cs/0002002 | Miroslaw Truszczynski | Marc Denecker, Victor W. Marek, Miroslaw Truszczynski | Uniform semantic treatment of default and autoepistemic logics | Proceedings of the Seventh International Conference on Principles of
Knowledge Representation and Reasoning (KR2000); 11 pages | Artificial Intelligence Journal, 143 (2003), pp. 79--122 | null | null | cs.AI | null | We revisit the issue of connections between two leading formalisms in
nonmonotonic reasoning: autoepistemic logic and default logic. For each logic
we develop a comprehensive semantic framework based on the notion of a belief
pair. The set of all belief pairs together with the so called knowledge
ordering forms a complete lattice. For each logic, we introduce several
semantics by means of fixpoints of operators on the lattice of belief pairs.
Our results elucidate an underlying isomorphism of the respective semantic
constructions. In particular, we show that the interpretation of defaults as
modal formulas proposed by Konolige allows us to represent all semantics for
default logic in terms of the corresponding semantics for autoepistemic logic.
Thus, our results conclusively establish that default logic can indeed be
viewed as a fragment of autoepistemic logic. However, as we also demonstrate,
the semantics of Moore and Reiter are given by different operators and occupy
different locations in their corresponding families of semantics. This result
explains the source of the longstanding difficulty to formally relate these two
semantics. In the paper, we also discuss approximating skeptical reasoning with
autoepistemic and default logics and establish constructive principles behind
such approximations.
| [
{
"version": "v1",
"created": "Thu, 3 Feb 2000 21:44:57 GMT"
}
] | 1,179,878,400,000 | [
[
"Denecker",
"Marc",
""
],
[
"Marek",
"Victor W.",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
] |
cs/0002003 | Miroslaw Truszczynski | Deborah East, Miroslaw Truszczynski | On the accuracy and running time of GSAT | Proceedings of the 9th Portuguese Conference on Artificial
Intelligence (EPIA'99), Lecture Notes in Artificial Intelligence, vol. 1695,
Springer-Verlag, 1999 | null | null | null | cs.AI | null | Randomized algorithms for deciding satisfiability were shown to be effective
in solving problems with thousands of variables. However, these algorithms are
not complete. That is, they provide no guarantee that a satisfying assignment,
if one exists, will be found. Thus, when studying randomized algorithms, there
are two important characteristics that need to be considered: the running time
and, even more importantly, the accuracy --- a measure of likelihood that a
satisfying assignment will be found, provided one exists. In fact, we argue
that without a reference to the accuracy, the notion of the running time for
randomized algorithms is not well-defined. In this paper, we introduce a formal
notion of accuracy. We use it to define a concept of the running time. We use
both notions to study the random walk strategy GSAT algorithm. We investigate
the dependence of accuracy on properties of input formulas such as
clause-to-variable ratio and the number of satisfying assignments. We
demonstrate that the running time of GSAT grows exponentially in the number of
variables of the input formula for randomly generated 3-CNF formulas and for
the formulas encoding 3- and 4-colorability of graphs.
| [
{
"version": "v1",
"created": "Fri, 4 Feb 2000 12:53:57 GMT"
}
] | 1,179,878,400,000 | [
[
"East",
"Deborah",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
] |
cs/0002009 | Luis Rocha | Luis M. Rocha | Syntactic Autonomy: Why There is no Autonomy without Symbols and How
Self-Organization Might Evolve Them | null | null | null | null | cs.AI | null | Two different types of agency are discussed based on dynamically coherent and
incoherent couplings with an environment respectively. I propose that until a
private syntax (syntactic autonomy) is discovered by dynamically coherent
agents, there are no significant or interesting types of closure or autonomy.
When syntactic autonomy is established, then, because of a process of
description-based selected self-organization, open-ended evolution is enabled.
At this stage, agents depend, in addition to dynamics, on localized, symbolic
memory, thus adding a level of dynamical incoherence to their interaction with
the environment. Furthermore, it is the appearance of syntactic autonomy which
enables much more interesting types of closures amongst agents which share the
same syntax. To investigate how we can study the emergence of syntax from
dynamical systems, experiments with cellular automata leading to emergent
computation to solve non-trivial tasks are discussed. RNA editing is also
mentioned as a process that may have been used to obtain a primordial
biological code necessary open-ended evolution.
| [
{
"version": "v1",
"created": "Wed, 16 Feb 2000 18:09:20 GMT"
}
] | 1,179,878,400,000 | [
[
"Rocha",
"Luis M.",
""
]
] |
cs/0003008 | Ken Satoh | Ken Satoh | Consistency Management of Normal Logic Program by Top-down Abductive
Proof Procedure | null | null | null | null | cs.AI | null | This paper presents a method of computing a revision of a function-free
normal logic program. If an added rule is inconsistent with a program, that is,
if it leads to a situation such that no stable model exists for a new program,
then deletion and addition of rules are performed to avoid inconsistency. We
specify a revision by translating a normal logic program into an abductive
logic program with abducibles to represent deletion and addition of rules. To
compute such deletion and addition, we propose an adaptation of our top-down
abductive proof procedure to compute a relevant abducibles to an added rule. We
compute a minimally revised program, by choosing a minimal set of abducibles
among all the sets of abducibles computed by a top-down proof procedure.
| [
{
"version": "v1",
"created": "Sun, 5 Mar 2000 10:29:03 GMT"
}
] | 1,179,878,400,000 | [
[
"Satoh",
"Ken",
""
]
] |
cs/0003012 | John L. Pollock | John L. Pollock | Defeasible Reasoning in OSCAR | Nonmonotonic Reasoning Workshop, 2000 | null | null | null | cs.AI | null | This is a system description for the OSCAR defeasible reasoner.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2000 22:23:00 GMT"
}
] | 1,179,878,400,000 | [
[
"Pollock",
"John L.",
""
]
] |
cs/0003016 | Daniele Theseider Dupre' | Daniele Theseider Dupre' (Dipartimento di Scienze e Tecnologie
Avanzate - Universita' del Piemonte Orientale, Alessandria, Italy) | Abductive and Consistency-Based Diagnosis Revisited: a Modeling
Perspective | 5 pages, 8th Int. Workshop on Nonmonotonic Reasoning, 2000 | null | null | null | cs.AI | null | Diagnostic reasoning has been characterized logically as consistency-based
reasoning or abductive reasoning. Previous analyses in the literature have
shown, on the one hand, that choosing the (in general more restrictive)
abductive definition may be appropriate or not, depending on the content of the
knowledge base [Console&Torasso91], and, on the other hand, that, depending on
the choice of the definition the same knowledge should be expressed in
different form [Poole94].
Since in Model-Based Diagnosis a major problem is finding the right way of
abstracting the behavior of the system to be modeled, this paper discusses the
relation between modeling, and in particular abstraction in the model, and the
notion of diagnosis.
| [
{
"version": "v1",
"created": "Tue, 7 Mar 2000 11:39:53 GMT"
}
] | 1,179,878,400,000 | [
[
"Dupre'",
"Daniele Theseider",
"",
"Dipartimento di Scienze e Tecnologie\n Avanzate - Universita' del Piemonte Orientale, Alessandria, Italy"
]
] |
cs/0003020 | Antonis Kakas | Antonis Kakas | ACLP: Integrating Abduction and Constraint Solving | 6 pages | null | null | null | cs.AI | null | ACLP is a system which combines abductive reasoning and constraint solving by
integrating the frameworks of Abductive Logic Programming (ALP) and Constraint
Logic Programming (CLP). It forms a general high-level knowledge representation
environment for abductive problems in Artificial Intelligence and other areas.
In ACLP, the task of abduction is supported and enhanced by its non-trivial
integration with constraint solving facilitating its application to complex
problems. The ACLP system is currently implemented on top of the CLP language
of ECLiPSe as a meta-interpreter exploiting its underlying constraint solver
for finite domains. It has been applied to the problems of planning and
scheduling in order to test its computational effectiveness compared with the
direct use of the (lower level) constraint solving framework of CLP on which it
is built. These experiments provide evidence that the abductive framework of
ACLP does not compromise significantly the computational efficiency of the
solutions. Other experiments show the natural ability of ACLP to accommodate
easily and in a robust way new or changing requirements of the original
problem.
| [
{
"version": "v1",
"created": "Tue, 7 Mar 2000 22:47:13 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2000 12:32:34 GMT"
}
] | 1,179,878,400,000 | [
[
"Kakas",
"Antonis",
""
]
] |
cs/0003021 | Samir Chopra | Samir Chopra, Konstantinos Georgatos, Rohit Parikh | Relevance Sensitive Non-Monotonic Inference on Belief Sequences | null | null | null | null | cs.AI | null | We present a method for relevance sensitive non-monotonic inference from
belief sequences which incorporates insights pertaining to prioritized
inference and relevance sensitive, inconsistency tolerant belief revision.
Our model uses a finite, logically open sequence of propositional formulas as
a representation for beliefs and defines a notion of inference from
maxiconsistent subsets of formulas guided by two orderings: a temporal
sequencing and an ordering based on relevance relations between the conclusion
and formulas in the sequence. The relevance relations are ternary (using
context as a parameter) as opposed to standard binary axiomatizations. The
inference operation thus defined easily handles iterated revision by
maintaining a revision history, blocks the derivation of inconsistent answers
from a possibly inconsistent sequence and maintains the distinction between
explicit and implicit beliefs. In doing so, it provides a finitely presented
formalism and a plausible model of reasoning for automated agents.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 03:03:36 GMT"
}
] | 1,472,601,600,000 | [
[
"Chopra",
"Samir",
""
],
[
"Georgatos",
"Konstantinos",
""
],
[
"Parikh",
"Rohit",
""
]
] |
cs/0003023 | Thomas Lukasiewicz | Thomas Lukasiewicz | Probabilistic Default Reasoning with Conditional Constraints | 8 pages; to appear in Proceedings of the Eighth International
Workshop on Nonmonotonic Reasoning, Special Session on Uncertainty Frameworks
in Nonmonotonic Reasoning, Breckenridge, Colorado, USA, 9-11 April 2000 | null | null | null | cs.AI | null | We propose a combination of probabilistic reasoning from conditional
constraints with approaches to default reasoning from conditional knowledge
bases. In detail, we generalize the notions of Pearl's entailment in system Z,
Lehmann's lexicographic entailment, and Geffner's conditional entailment to
conditional constraints. We give some examples that show that the new notions
of z-, lexicographic, and conditional entailment have similar properties like
their classical counterparts. Moreover, we show that the new notions of z-,
lexicographic, and conditional entailment are proper generalizations of both
their classical counterparts and the classical notion of logical entailment for
conditional constraints.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 11:05:45 GMT"
}
] | 1,179,878,400,000 | [
[
"Lukasiewicz",
"Thomas",
""
]
] |
cs/0003024 | Hans Tompits | James P. Delgrande, Torsten Schaub, Hans Tompits | A Compiler for Ordered Logic Programs | null | null | null | null | cs.AI | null | This paper describes a system, called PLP, for compiling ordered logic
programs into standard logic programs under the answer set semantics. In an
ordered logic program, rules are named by unique terms, and preferences among
rules are given by a set of dedicated atoms. An ordered logic program is
transformed into a second, regular, extended logic program wherein the
preferences are respected, in that the answer sets obtained in the transformed
theory correspond with the preferred answer sets of the original theory. Since
the result of the translation is an extended logic program, existing logic
programming systems can be used as underlying reasoning engine. In particular,
PLP is conceived as a front-end to the logic programming systems dlv and
smodels.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 10:15:51 GMT"
}
] | 1,179,878,400,000 | [
[
"Delgrande",
"James P.",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Tompits",
"Hans",
""
]
] |
cs/0003027 | Bert Van Nuffelen | Bert Van Nuffelen | SLDNFA-system | 6 pages conference:NMR2000, special track on System descriptions and
demonstration | null | null | null | cs.AI | null | The SLDNFA-system results from the LP+ project at the K.U.Leuven, which
investigates logics and proof procedures for these logics for declarative
knowledge representation. Within this project inductive definition logic
(ID-logic) is used as representation logic. Different solvers are being
developed for this logic and one of these is SLDNFA. A prototype of the system
is available and used for investigating how to solve efficiently problems
represented in ID-logic.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 13:22:44 GMT"
}
] | 1,179,878,400,000 | [
[
"Van Nuffelen",
"Bert",
""
]
] |
cs/0003028 | Hans Tompits | James P. Delgrande, Torsten Schaub, Hans Tompits | Logic Programs with Compiled Preferences | null | null | null | null | cs.AI | null | We describe an approach for compiling preferences into logic programs under
the answer set semantics. An ordered logic program is an extended logic program
in which rules are named by unique terms, and in which preferences among rules
are given by a set of dedicated atoms. An ordered logic program is transformed
into a second, regular, extended logic program wherein the preferences are
respected, in that the answer sets obtained in the transformed theory
correspond with the preferred answer sets of the original theory. Our approach
allows both the specification of static orderings (as found in most previous
work), in which preferences are external to a logic program, as well as
orderings on sets of rules. In large part then, we are interested in describing
a general methodology for uniformly incorporating preference information in a
logic program. Since the result of our translation is an extended logic
program, we can make use of existing implementations, such as dlv and smodels.
To this end, we have developed a compiler, available on the web, as a front-end
for these programming systems.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 14:09:56 GMT"
}
] | 1,179,878,400,000 | [
[
"Delgrande",
"James P.",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Tompits",
"Hans",
""
]
] |
cs/0003029 | Nedra Mellouli | Nedra Mellouli, Bernadette Bouchon-Meunier | Fuzzy Approaches to Abductive Inference | 7 pages and 8 files | null | null | null | cs.AI | null | This paper proposes two kinds of fuzzy abductive inference in the framework
of fuzzy rule base. The abductive inference processes described here depend on
the semantic of the rule. We distinguish two classes of interpretation of a
fuzzy rule, certainty generation rules and possible generation rules. In this
paper we present the architecture of abductive inference in the first class of
interpretation. We give two kinds of problem that we can resolve by using the
proposed models of inference.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 14:56:58 GMT"
}
] | 1,179,878,400,000 | [
[
"Mellouli",
"Nedra",
""
],
[
"Bouchon-Meunier",
"Bernadette",
""
]
] |
cs/0003030 | Bert Van Nuffelen | Bert Van Nuffelen, Marc Denecker | Problem solving in ID-logic with aggregates: some experiments | 9 pages conference: NMR2000, special track on abductive reasoning | null | null | null | cs.AI | null | The goal of the LP+ project at the K.U.Leuven is to design an expressive
logic, suitable for declarative knowledge representation, and to develop
intelligent systems based on Logic Programming technology for solving
computational problems using the declarative specifications. The ID-logic is an
integration of typed classical logic and a definition logic. Different
abductive solvers for this language are being developed. This paper is a report
of the integration of high order aggregates into ID-logic and the consequences
on the solver SLDNFA.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 15:39:14 GMT"
}
] | 1,179,878,400,000 | [
[
"Van Nuffelen",
"Bert",
""
],
[
"Denecker",
"Marc",
""
]
] |
cs/0003031 | Robert E. Mercer | Carmen Vodislav and Robert E. Mercer | Optimal Belief Revision | NMR'2000 Workshop 6 pages | null | null | null | cs.AI | null | We propose a new approach to belief revision that provides a way to change
knowledge bases with a minimum of effort. We call this way of revising belief
states optimal belief revision. Our revision method gives special attention to
the fact that most belief revision processes are directed to a specific
informational objective. This approach to belief change is founded on notions
such as optimal context and accessibility. For the sentential model of belief
states we provide both a formal description of contexts as sub-theories
determined by three parameters and a method to construct contexts. Next, we
introduce an accessibility ordering for belief sets, which we then use for
selecting the best (optimal) contexts with respect to the processing effort
involved in the revision. Then, for finitely axiomatizable knowledge bases, we
characterize a finite accessibility ranking from which the accessibility
ordering for the entire base is generated and show how to determine the ranking
of an arbitrary sentence in the language. Finally, we define the adjustment of
the accessibility ranking of a revised base of a belief set.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 15:54:50 GMT"
}
] | 1,179,878,400,000 | [
[
"Vodislav",
"Carmen",
""
],
[
"Mercer",
"Robert E.",
""
]
] |
cs/0003032 | Henrik Grosskreutz | Henrik Grosskreutz, Gerhard Lakemeyer | cc-Golog: Towards More Realistic Logic-Based Robot Controllers | null | null | null | null | cs.AI | null | High-level robot controllers in realistic domains typically deal with
processes which operate concurrently, change the world continuously, and where
the execution of actions is event-driven as in ``charge the batteries as soon
as the voltage level is low''. While non-logic-based robot control languages
are well suited to express such scenarios, they fare poorly when it comes to
projecting, in a conspicuous way, how the world evolves when actions are
executed. On the other hand, a logic-based control language like \congolog,
based on the situation calculus, is well-suited for the latter. However, it has
problems expressing event-driven behavior. In this paper, we show how these
problems can be overcome by first extending the situation calculus to support
continuous change and event-driven behavior and then presenting \ccgolog, a
variant of \congolog which is based on the extended situation calculus. One
benefit of \ccgolog is that it narrows the gap in expressiveness compared to
non-logic-based control languages while preserving a semantically well-founded
projection mechanism.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 16:14:08 GMT"
}
] | 1,179,878,400,000 | [
[
"Grosskreutz",
"Henrik",
""
],
[
"Lakemeyer",
"Gerhard",
""
]
] |
cs/0003033 | Ilkka Niemela | Ilkka Niemela, Patrik Simons, Tommi Syrjanen | Smodels: A System for Answer Set Programming | Proceedings of the 8th International Workshop on Non-Monotonic
Reasoning, April 9-11, 2000, Breckenridge, Colorado 4 pages, uses aaai.sty | null | null | null | cs.AI | null | The Smodels system implements the stable model semantics for normal logic
programs. It handles a subclass of programs which contain no function symbols
and are domain-restricted but supports extensions including built-in functions
as well as cardinality and weight constraints. On top of this core engine more
involved systems can be built. As an example, we have implemented total and
partial stable model computation for disjunctive logic programs. An interesting
application method is based on answer set programming, i.e., encoding an
application problem as a set of rules so that its solutions are captured by the
stable models of the rules. Smodels has been applied to a number of areas
including planning, model checking, reachability analysis, product
configuration, dynamic constraint satisfaction, and feature interaction.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 23:25:51 GMT"
}
] | 1,179,878,400,000 | [
[
"Niemela",
"Ilkka",
""
],
[
"Simons",
"Patrik",
""
],
[
"Syrjanen",
"Tommi",
""
]
] |
cs/0003034 | Francesca Toni | Antonis Kakas, Rob Miller, Francesca Toni | E-RES: A System for Reasoning about Actions, Events and Observations | Proceedings of the 8th International Workshop on Non-Monotonic
Reasoning, April 9-11, 2000, Breckenridge, Colorado. 6 pages | null | null | null | cs.AI | null | E-RES is a system that implements the Language E, a logic for reasoning about
narratives of action occurrences and observations. E's semantics is
model-theoretic, but this implementation is based on a sound and complete
reformulation of E in terms of argumentation, and uses general computational
techniques of argumentation frameworks. The system derives sceptical
non-monotonic consequences of a given reformulated theory which exactly
correspond to consequences entailed by E's model-theory. The computation relies
on a complimentary ability of the system to derive credulous non-monotonic
consequences together with a set of supporting assumptions which is sufficient
for the (credulous) conclusion to hold. E-RES allows theories to contain
general action laws, statements about action occurrences, observations and
statements of ramifications (or universal laws). It is able to derive
consequences both forward and backward in time. This paper gives a short
overview of the theoretical basis of E-RES and illustrates its use on a variety
of examples. Currently, E-RES is being extended so that the system can be used
for planning.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 16:18:52 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2000 22:48:06 GMT"
}
] | 1,179,878,400,000 | [
[
"Kakas",
"Antonis",
""
],
[
"Miller",
"Rob",
""
],
[
"Toni",
"Francesca",
""
]
] |
cs/0003037 | Hans Tompits | Uwe Egly, Thomas Eiter, Hans Tompits, Stefan Woltran | QUIP - A Tool for Computing Nonmonotonic Reasoning Tasks | null | null | null | null | cs.AI | null | In this paper, we outline the prototype of an automated inference tool,
called QUIP, which provides a uniform implementation for several nonmonotonic
reasoning formalisms. The theoretical basis of QUIP is derived from well-known
results about the computational complexity of nonmonotonic logics and exploits
a representation of the different reasoning tasks in terms of quantified
boolean formulae.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 17:18:08 GMT"
}
] | 1,179,878,400,000 | [
[
"Egly",
"Uwe",
""
],
[
"Eiter",
"Thomas",
""
],
[
"Tompits",
"Hans",
""
],
[
"Woltran",
"Stefan",
""
]
] |
cs/0003038 | Richard Watson | Richard Watson | A Splitting Set Theorem for Epistemic Specifications | To be published in Proceedings of NMR 2000 Workshop. 6 pages | null | null | null | cs.AI | null | Over the past decade a considerable amount of research has been done to
expand logic programming languages to handle incomplete information. One such
language is the language of epistemic specifications. As is usual with logic
programming languages, the problem of answering queries is intractable in the
general case. For extended disjunctive logic programs, an idea that has proven
useful in simplifying the investigation of answer sets is the use of splitting
sets. In this paper we will present an extended definition of splitting sets
that will be applicable to epistemic specifications. Furthermore, an extension
of the splitting set theorem will be presented. Also, a characterization of
stratified epistemic specifications will be given in terms of splitting sets.
This characterization leads us to an algorithmic method of computing world
views of a subclass of epistemic logic programs.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 20:40:31 GMT"
}
] | 1,179,878,400,000 | [
[
"Watson",
"Richard",
""
]
] |
cs/0003039 | Ilkka Niemela | Maarit Hietalahti, Fabio Massacci, Ilkka Niemela | DES: a Challenge Problem for Nonmonotonic Reasoning Systems | 10 pages, 1 Postscript figure, uses aaai.sty and graphicx.sty | null | null | null | cs.AI | null | The US Data Encryption Standard, DES for short, is put forward as an
interesting benchmark problem for nonmonotonic reasoning systems because (i) it
provides a set of test cases of industrial relevance which shares features of
randomly generated problems and real-world problems, (ii) the representation of
DES using normal logic programs with the stable model semantics is simple and
easy to understand, and (iii) this subclass of logic programs can be seen as an
interesting special case for many other formalizations of nonmonotonic
reasoning. In this paper we present two encodings of DES as logic programs: a
direct one out of the standard specifications and an optimized one extending
the work of Massacci and Marraro. The computational properties of the encodings
are studied by using them for DES key search with the Smodels system as the
implementation of the stable model semantics. Results indicate that the
encodings and Smodels are quite competitive: they outperform state-of-the-art
SAT-checkers working with an optimized encoding of DES into SAT and are
comparable with a SAT-checker that is customized and tuned for the optimized
SAT encoding.
| [
{
"version": "v1",
"created": "Wed, 8 Mar 2000 21:49:57 GMT"
}
] | 1,179,878,400,000 | [
[
"Hietalahti",
"Maarit",
""
],
[
"Massacci",
"Fabio",
""
],
[
"Niemela",
"Ilkka",
""
]
] |
cs/0003042 | Vladimir Lifschitz | Yuliya Babovich, Esra Erdem and Vladimir Lifschitz | Fages' Theorem and Answer Set Programming | null | null | null | null | cs.AI | null | We generalize a theorem by Francois Fages that describes the relationship
between the completion semantics and the answer set semantics for logic
programs with negation as failure. The study of this relationship is important
in connection with the emergence of answer set programming. Whenever the two
semantics are equivalent, answer sets can be computed by a satisfiability
solver, and the use of answer set solvers such as smodels and dlv is
unnecessary. A logic programming representation of the blocks world due to
Ilkka Niemelae is discussed as an example.
| [
{
"version": "v1",
"created": "Thu, 9 Mar 2000 00:28:21 GMT"
}
] | 1,179,878,400,000 | [
[
"Babovich",
"Yuliya",
""
],
[
"Erdem",
"Esra",
""
],
[
"Lifschitz",
"Vladimir",
""
]
] |
cs/0003044 | Adnan | Adnan Darwiche | On the tractable counting of theory models and its application to belief
revision and truth maintenance | null | null | null | null | cs.AI | null | We introduced decomposable negation normal form (DNNF) recently as a
tractable form of propositional theories, and provided a number of powerful
logical operations that can be performed on it in polynomial time. We also
presented an algorithm for compiling any conjunctive normal form (CNF) into
DNNF and provided a structure-based guarantee on its space and time complexity.
We present in this paper a linear-time algorithm for converting an ordered
binary decision diagram (OBDD) representation of a propositional theory into an
equivalent DNNF, showing that DNNFs scale as well as OBDDs. We also identify a
subclass of DNNF which we call deterministic DNNF, d-DNNF, and show that the
previous complexity guarantees on compiling DNNF continue to hold for this
stricter subclass, which has stronger properties. In particular, we present a
new operation on d-DNNF which allows us to count its models under the
assertion, retraction and flipping of every literal by traversing the d-DNNF
twice. That is, after such traversal, we can test in constant-time: the
entailment of any literal by the d-DNNF, and the consistency of the d-DNNF
under the retraction or flipping of any literal. We demonstrate the
significance of these new operations by showing how they allow us to implement
linear-time, complete truth maintenance systems and linear-time, complete
belief revision systems for two important classes of propositional theories.
| [
{
"version": "v1",
"created": "Thu, 9 Mar 2000 08:58:15 GMT"
}
] | 1,179,878,400,000 | [
[
"Darwiche",
"Adnan",
""
]
] |
cs/0003047 | Hans-Peter Stoerr | Steffen Hoelldobler and Hans-Peter Stoerr | BDD-based reasoning in the fluent calculus - first results | 9 pages; Workshop on Nonmonotonic Reasoning 2000 (NMR 2000) | null | null | null | cs.AI | null | The paper reports on first preliminary results and insights gained in a
project aiming at implementing the fluent calculus using methods and techniques
based on binary decision diagrams. After reporting on an initial experiment
showing promising results we discuss our findings concerning various techniques
and heuristics used to speed up the reasoning process.
| [
{
"version": "v1",
"created": "Thu, 9 Mar 2000 17:18:12 GMT"
}
] | 1,179,878,400,000 | [
[
"Hoelldobler",
"Steffen",
""
],
[
"Stoerr",
"Hans-Peter",
""
]
] |
cs/0003049 | Rob Miller | Antonis Kakas, Rob Miller and Francesca Toni | Planning with Incomplete Information | Proceedings of the 8th International Workshop on Non-Monotonic
Reasoning, April 9-11, 2000, Breckenridge, Colorado | null | null | null | cs.AI | null | Planning is a natural domain of application for frameworks of reasoning about
actions and change. In this paper we study how one such framework, the Language
E, can form the basis for planning under (possibly) incomplete information. We
define two types of plans: weak and safe plans, and propose a planner, called
the E-Planner, which is often able to extend an initial weak plan into a safe
plan even though the (explicit) information available is incomplete, e.g. for
cases where the initial state is not completely known. The E-Planner is based
upon a reformulation of the Language E in argumentation terms and a natural
proof theory resulting from the reformulation. It uses an extension of this
proof theory by means of abduction for the generation of plans and adopts
argumentation-based techniques for extending weak plans into safe plans. We
provide representative examples illustrating the behaviour of the E-Planner, in
particular for cases where the status of fluents is incompletely known.
| [
{
"version": "v1",
"created": "Thu, 9 Mar 2000 22:30:27 GMT"
}
] | 1,179,878,400,000 | [
[
"Kakas",
"Antonis",
""
],
[
"Miller",
"Rob",
""
],
[
"Toni",
"Francesca",
""
]
] |
cs/0003051 | Miroslaw Truszczynski | Renata Wassermann | Local Diagnosis | null | null | null | null | cs.AI | null | In an earlier work, we have presented operations of belief change which only
affect the relevant part of a belief base. In this paper, we propose the
application of the same strategy to the problem of model-based diangosis. We
first isolate the subset of the system description which is relevant for a
given observation and then solve the diagnosis problem for this subset.
| [
{
"version": "v1",
"created": "Fri, 10 Mar 2000 22:54:55 GMT"
}
] | 1,179,878,400,000 | [
[
"Wassermann",
"Renata",
""
]
] |
cs/0003052 | James P. Delgrande | James Delgrande and Torsten Schaub | A Consistency-Based Model for Belief Change: Preliminary Report | null | null | null | null | cs.AI | null | We present a general, consistency-based framework for belief change.
Informally, in revising K by A, we begin with A and incorporate as much of K as
consistently possible. Formally, a knowledge base K and sentence A are
expressed, via renaming propositions in K, in separate languages. Using a
maximization process, we assume the languages are the same insofar as
consistently possible. Lastly, we express the resultant knowledge base in a
single language. There may be more than one way in which A can be so extended
by K: in choice revision, one such ``extension'' represents the revised state;
alternately revision consists of the intersection of all such extensions.
The most general formulation of our approach is flexible enough to express
other approaches to revision and update, the merging of knowledge bases, and
the incorporation of static and dynamic integrity constraints. Our framework
differs from work based on ordinal conditional functions, notably with respect
to iterated revision. We argue that the approach is well-suited for
implementation: the choice revision operator gives better complexity results
than general revision; the approach can be expressed in terms of a finite
knowledge base; and the scope of a revision can be restricted to just those
propositions mentioned in the sentence for revision A.
| [
{
"version": "v1",
"created": "Sat, 11 Mar 2000 06:29:02 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2000 18:02:11 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Apr 2000 19:34:15 GMT"
}
] | 1,179,878,400,000 | [
[
"Delgrande",
"James",
""
],
[
"Schaub",
"Torsten",
""
]
] |
cs/0003059 | WIlliams | Mary-Anne Williams and Aidan Sims | SATEN: An Object-Oriented Web-Based Revision and Extraction Engine | The implementation of SATEN can be found at
http://cafe.newcastle.edu.au/saten | null | null | null | cs.AI | null | SATEN is an object-oriented web-based extraction and belief revision engine.
It runs on any computer via a Java 1.1 enabled browser such as Netscape 4.
SATEN performs belief revision based on the AGM approach. The extraction and
belief revision reasoning engines operate on a user specified ranking of
information. One of the features of SATEN is that it can be used to integrate
mutually inconsistent commensuate rankings into a consistent ranking.
| [
{
"version": "v1",
"created": "Tue, 14 Mar 2000 04:58:18 GMT"
}
] | 1,179,878,400,000 | [
[
"Williams",
"Mary-Anne",
""
],
[
"Sims",
"Aidan",
""
]
] |
cs/0003061 | Deborah East | Deborah East and Miroslaw Truszczynski | dcs: An Implementation of DATALOG with Constraints | 6 pages (AAAI format), 4 ps figures; System descriptions and
demonstration Session, 8th Intl. Workshop on Non-Monotonic Reasoning | null | null | null | cs.AI | null | Answer-set programming (ASP) has emerged recently as a viable programming
paradigm. We describe here an ASP system, DATALOG with constraints or DC, based
on non-monotonic logic. Informally, DC theories consist of propositional
clauses (constraints) and of Horn rules. The semantics is a simple and natural
extension of the semantics of the propositional logic. However, thanks to the
presence of Horn rules in the system, modeling of transitive closure becomes
straightforward. We describe the syntax, use and implementation of DC and
provide experimental results.
| [
{
"version": "v1",
"created": "Tue, 14 Mar 2000 18:06:38 GMT"
}
] | 1,179,878,400,000 | [
[
"East",
"Deborah",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
] |
cs/0003077 | Deborah East | Deborah East and Miroslaw Truszczynski | DATALOG with constraints - an answer-set programming system | 6 pages, 5 figures, will appear in Proceedings of AAAI-2000 | null | null | null | cs.AI | null | Answer-set programming (ASP) has emerged recently as a viable programming
paradigm well attuned to search problems in AI, constraint satisfaction and
combinatorics. Propositional logic is, arguably, the simplest ASP system with
an intuitive semantics supporting direct modeling of problem constraints.
However, for some applications, especially those requiring that transitive
closure be computed, it requires additional variables and results in large
theories. Consequently, it may not be a practical computational tool for such
problems. On the other hand, ASP systems based on nonmonotonic logics, such as
stable logic programming, can handle transitive closure computation efficiently
and, in general, yield very concise theories as problem representations. Their
semantics is, however, more complex. Searching for the middle ground, in this
paper we introduce a new nonmonotonic logic, DATALOG with constraints or DC.
Informally, DC theories consist of propositional clauses (constraints) and of
Horn rules. The semantics is a simple and natural extension of the semantics of
the propositional logic. However, thanks to the presence of Horn rules in the
system, modeling of transitive closure becomes straightforward. We describe the
syntax and semantics of DC, and study its properties. We discuss an
implementation of DC and present results of experimental study of the
effectiveness of DC, comparing it with CSAT, a satisfiability checker and
SMODELS implementation of stable logic programming. Our results show that DC is
competitive with the other two approaches, in case of many search problems,
often yielding much more efficient solutions.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2000 19:09:59 GMT"
}
] | 1,179,878,400,000 | [
[
"East",
"Deborah",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
] |
cs/0003080 | Krzysztof R. Apt | Krzysztof R. Apt | Some Remarks on Boolean Constraint Propagation | 14 pages. To appear in: New Trends in Constraints, Papers from the
Joint ERCIM/Compulog-Net Workshop Cyprus, October 25-27, 1999.
Springer-Verlag Lecture Notes in Artificial Intelligence | null | null | null | cs.AI | null | We study here the well-known propagation rules for Boolean constraints. First
we propose a simple notion of completeness for sets of such rules and establish
a completeness result. Then we show an equivalence in an appropriate sense
between Boolean constraint propagation and unit propagation, a form of
resolution for propositional logic.
Subsequently we characterize one set of such rules by means of the notion of
hyper-arc consistency introduced in (Mohr and Masini 1988). Also, we clarify
the status of a similar, though different, set of rules introduced in (Simonis
1989a) and more fully in (Codognet and Diaz 1996).
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2000 11:49:37 GMT"
}
] | 1,179,878,400,000 | [
[
"Apt",
"Krzysztof R.",
""
]
] |
cs/0005031 | Joseph Y. Halpern | Joseph Y. Halpern | Conditional Plausibility Measures and Bayesian Networks | null | Journal Of Artificial Intelligence Research, Volume 14, pages
359-389, 2001 | 10.1613/jair.817 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A general notion of algebraic conditional plausibility measures is defined.
Probability measures, ranking functions, possibility measures, and (under the
appropriate definitions) sets of probability measures can all be viewed as
defining algebraic conditional plausibility measures. It is shown that
algebraic conditional plausibility measures can be represented using Bayesian
networks.
| [
{
"version": "v1",
"created": "Tue, 30 May 2000 19:05:21 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2000 21:55:41 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jun 2011 15:49:16 GMT"
}
] | 1,308,182,400,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
cs/0006043 | Sylvain Piechowiak | S. Piechowiak, J. Rodriguez | Constraint compiling into rules formalism constraint compiling into
rules formalism for dynamic CSPs computing | 14 pages | null | null | null | cs.AI | null | In this paper we present a rule based formalism for filtering variables
domains of constraints. This formalism is well adapted for solving dynamic CSP.
We take diagnosis as an instance problem to illustrate the use of these rules.
A diagnosis problem is seen like finding all the minimal sets of constraints to
be relaxed in the constraint network that models the device to be diagnosed
| [
{
"version": "v1",
"created": "Fri, 30 Jun 2000 10:25:06 GMT"
}
] | 1,179,878,400,000 | [
[
"Piechowiak",
"S.",
""
],
[
"Rodriguez",
"J.",
""
]
] |
cs/0007004 | Alejandro Zunino | Alejandro Zunino and Analia Amandi | Brainstorm/J: a Java Framework for Intelligent Agents | 15 pages. To be published in Proceedings of the Second Argentinian
Symposium on Artificial Intelligence (ASAI'2000 - 29th JAIIO). September
2000. Tandil, Buenos Aires, Argentina. See
http://www.exa.unicen.edu.ar/~azunino | null | null | null | cs.AI | null | Despite the effort of many researchers in the area of multi-agent systems
(MAS) for designing and programming agents, a few years ago the research
community began to take into account that common features among different MAS
exists. Based on these common features, several tools have tackled the problem
of agent development on specific application domains or specific types of
agents. As a consequence, their scope is restricted to a subset of the huge
application domain of MAS. In this paper we propose a generic infrastructure
for programming agents whose name is Brainstorm/J. The infrastructure has been
implemented as an object oriented framework. As a consequence, our approach
supports a broader scope of MAS applications than previous efforts, being
flexible and reusable.
| [
{
"version": "v1",
"created": "Tue, 4 Jul 2000 16:31:40 GMT"
}
] | 1,179,878,400,000 | [
[
"Zunino",
"Alejandro",
""
],
[
"Amandi",
"Analia",
""
]
] |
cs/0010037 | Umberto Straccia | Umberto Straccia | On the relationship between fuzzy logic and four-valued relevance logic | null | null | null | null | cs.AI | null | In fuzzy propositional logic, to a proposition a partial truth in [0,1] is
assigned. It is well known that under certain circumstances, fuzzy logic
collapses to classical logic. In this paper, we will show that under dual
conditions, fuzzy logic collapses to four-valued (relevance) logic, where
propositions have truth-value true, false, unknown, or contradiction. As a
consequence, fuzzy entailment may be considered as ``in between'' four-valued
(relevance) entailment and classical entailment.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2000 14:14:26 GMT"
}
] | 1,179,878,400,000 | [
[
"Straccia",
"Umberto",
""
]
] |
cs/0011012 | Joseph Y. Halpern | Joseph Y. Halpern and Judea Pearl | Causes and Explanations: A Structural-Model Approach, Part I: Causes | Part II of the paper (on Explanation) is also on the arxiv.
Previously the two parts were submitted as one paper. To appear in the
British Journal for the Philosophy of Science | null | null | null | cs.AI | null | We propose a new definition of actual cause, using structural equations to
model counterfactuals. We show that the definition yields a plausible and
elegant account of causation that handles well examples which have caused
problems for other definitions and resolves major difficulties in the
traditional account.
| [
{
"version": "v1",
"created": "Tue, 7 Nov 2000 23:21:38 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Aug 2002 23:02:18 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Nov 2005 20:07:43 GMT"
}
] | 1,179,878,400,000 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Pearl",
"Judea",
""
]
] |
cs/0011030 | Emmanuel De Mot | Nikolay Pelov, Emmanuel De Mot, Marc Denecker | Logic Programming Approaches for Representing and Solving Constraint
Satisfaction Problems: A Comparison | 15 pages, 3 eps-figures | LPAR 2000, Lecture Notes in Artificial Intelligence, vol. 1955,
Springer, 2000, pp. 225-239 | null | null | cs.AI | null | Many logic programming based approaches can be used to describe and solve
combinatorial search problems. On the one hand there is constraint logic
programming which computes a solution as an answer substitution to a query
containing the variables of the constraint satisfaction problem. On the other
hand there are systems based on stable model semantics, abductive systems, and
first order logic model generators which compute solutions as models of some
theory. This paper compares these different approaches from the point of view
of knowledge representation (how declarative are the programs) and from the
point of view of performance (how good are they at solving typical problems).
| [
{
"version": "v1",
"created": "Tue, 21 Nov 2000 13:56:21 GMT"
}
] | 1,179,878,400,000 | [
[
"Pelov",
"Nikolay",
""
],
[
"De Mot",
"Emmanuel",
""
],
[
"Denecker",
"Marc",
""
]
] |