Dataset schema (field: type, value range):
- id: string, length 9 to 10
- submitter: string, length 5 to 47
- authors: string, length 5 to 1.72k
- title: string, length 11 to 234
- comments: string, length 1 to 491
- journal-ref: string, length 4 to 396
- doi: string, length 13 to 97
- report-no: string, length 4 to 138
- categories: string, 1 distinct value
- license: string, 9 distinct values
- abstract: string, length 29 to 3.66k
- versions: list, length 1 to 21
- update_date: int64, 1,180B to 1,718B
- authors_parsed: list, length 1 to 98
2401.12467
Yuliang Liu
Haisu Guan, Jinpeng Wan, Yuliang Liu, Pengjie Wang, Kaile Zhang, Zhebin Kuang, Xinyu Wang, Xiang Bai, Lianwen Jin
An open dataset for the evolution of oracle bone characters: EVOBC
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The earliest extant Chinese characters originate from oracle bone inscriptions, which are closely related to other East Asian languages. These inscriptions hold immense value for anthropology and archaeology. However, deciphering oracle bone script remains a formidable challenge, with only approximately 1,600 of the over 4,500 extant characters elucidated to date. Further scholarly investigation is required to comprehensively understand this ancient writing system. Artificial Intelligence technology is a promising avenue for deciphering oracle bone characters, particularly concerning their evolution. However, one of the challenges is the lack of datasets mapping the evolution of these characters over time. In this study, we systematically collected ancient characters from authoritative texts and websites spanning six historical stages: Oracle Bone Characters - OBC (15th century B.C.), Bronze Inscriptions - BI (13th to 221 B.C.), Seal Script - SS (11th to 8th centuries B.C.), Spring and Autumn period Characters - SAC (770 to 476 B.C.), Warring States period Characters - WSC (475 B.C. to 221 B.C.), and Clerical Script - CS (221 B.C. to 220 A.D.). Subsequently, we constructed an extensive dataset, namely EVolution Oracle Bone Characters (EVOBC), consisting of 229,170 images representing 13,714 distinct character categories. We conducted validation and simulated deciphering on the constructed dataset, and the results demonstrate its high efficacy in aiding the study of oracle bone script. This openly accessible dataset aims to digitalize ancient Chinese scripts across multiple eras, facilitating the decipherment of oracle bone script by examining the evolution of glyph forms.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 03:30:47 GMT" }, { "version": "v2", "created": "Tue, 13 Feb 2024 08:21:50 GMT" } ]
1,707,868,800,000
[ [ "Guan", "Haisu", "" ], [ "Wan", "Jinpeng", "" ], [ "Liu", "Yuliang", "" ], [ "Wang", "Pengjie", "" ], [ "Zhang", "Kaile", "" ], [ "Kuang", "Zhebin", "" ], [ "Wang", "Xinyu", "" ], [ "Bai", "Xiang", "" ], [ "Jin", "Lianwen", "" ] ]
2401.12557
Xiaoxi Wang
Xiaoxi Wang
Balancing the AI Strength of Roles in Self-Play Training with Regret Matching+
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
When training artificial intelligence for games encompassing multiple roles, the development of a generalized model capable of controlling any character within the game presents a viable option. This strategy not only conserves computational resources and time during the training phase but also reduces resource requirements during deployment. However, training such a generalized model often encounters challenges related to uneven capabilities when controlling different roles. A simple method based on Regret Matching+ is introduced, which facilitates more balanced strength across the various roles the model controls.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 08:27:38 GMT" }, { "version": "v2", "created": "Thu, 1 Feb 2024 03:22:22 GMT" } ]
1,706,832,000,000
[ [ "Wang", "Xiaoxi", "" ] ]
2401.12599
Demiao Lin
Demiao Lin (chatdoc.com)
Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition
18 pages, 16 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
With the rapid development of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) has become a predominant method in the field of professional knowledge-based question answering. Presently, major foundation model companies have opened up Embedding and Chat API interfaces, and frameworks like LangChain have already integrated the RAG process. It appears that the key models and steps in RAG have been resolved, leading to the question: are professional knowledge QA systems now approaching perfection? This article finds that current primary methods depend on the premise of accessing high-quality text corpora. However, since professional documents are mainly stored in PDFs, the low accuracy of PDF parsing significantly impacts the effectiveness of professional knowledge-based QA. We conducted an empirical RAG experiment across hundreds of questions from the corresponding real-world professional documents. The results show that ChatDOC, a RAG system equipped with a panoptic and pinpoint PDF parser, retrieves more accurate and complete segments, and thus better answers. Empirical experiments show that ChatDOC is superior to the baseline on nearly 47% of questions, ties for 38% of cases, and falls short on only 15% of cases. It shows that we may revolutionize RAG with enhanced PDF structure recognition.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 09:54:36 GMT" } ]
1,706,054,400,000
[ [ "Lin", "Demiao", "", "chatdoc.com" ] ]
2401.12666
Rui Zhang
Hong Zhou, Rui Zhang, Peifeng Lai, Chaoran Guo, Yong Wang, Zhida Sun and Junjie Li
EL-VIT: Probing Vision Transformer with Interactive Visualization
10 pages, 7 figures, conference
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nowadays, Vision Transformer (ViT) is widely utilized in various computer vision tasks, owing to its unique self-attention mechanism. However, the model architecture of ViT is complex and often challenging to comprehend, leading to a steep learning curve. ViT developers and users frequently encounter difficulties in interpreting its inner workings. Therefore, a visualization system is needed to assist ViT users in understanding its functionality. This paper introduces EL-VIT, an interactive visual analytics system designed to probe the Vision Transformer and facilitate a better understanding of its operations. The system consists of four layers of visualization views. The first three layers include model overview, knowledge background graph, and model detail view. These three layers elucidate the operation process of ViT from three perspectives: the overall model architecture, detailed explanation, and mathematical operations, enabling users to understand the underlying principles and the transition process between layers. The fourth interpretation view helps ViT users and experts gain a deeper understanding by calculating the cosine similarity between patches. Our two usage scenarios demonstrate the effectiveness and usability of EL-VIT in helping ViT users understand the working mechanism of ViT.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 11:21:32 GMT" } ]
1,706,054,400,000
[ [ "Zhou", "Hong", "" ], [ "Zhang", "Rui", "" ], [ "Lai", "Peifeng", "" ], [ "Guo", "Chaoran", "" ], [ "Wang", "Yong", "" ], [ "Sun", "Zhida", "" ], [ "Li", "Junjie", "" ] ]
2401.12672
Sen Lin
Yun Peng, Sen Lin, Qian Chen, Lyu Xu, Xiaojun Ren, Yafei Li, Jianliang Xu
ChatGraph: Chat with Your Graphs
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph analysis is fundamental in real-world applications. Traditional approaches rely on SPARQL-like languages or clicking-and-dragging interfaces to interact with graph data. However, these methods either require users to possess high programming skills or support only a limited range of graph analysis functionalities. To address the limitations, we propose a large language model (LLM)-based framework called ChatGraph. With ChatGraph, users can interact with graphs through natural language, making it easier to use and more flexible than traditional approaches. The core of ChatGraph lies in generating chains of graph analysis APIs based on the understanding of the texts and graphs inputted in the user prompts. To achieve this, ChatGraph consists of three main modules: an API retrieval module that searches for relevant APIs, a graph-aware LLM module that enables the LLM to comprehend graphs, and an API chain-oriented finetuning module that guides the LLM in generating API chains.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 11:29:19 GMT" } ]
1,706,054,400,000
[ [ "Peng", "Yun", "" ], [ "Lin", "Sen", "" ], [ "Chen", "Qian", "" ], [ "Xu", "Lyu", "" ], [ "Ren", "Xiaojun", "" ], [ "Li", "Yafei", "" ], [ "Xu", "Jianliang", "" ] ]
2401.12700
Chenwang Wu
Qingyang Wang, Chenwang Wu, Defu Lian, Enhong Chen
Securing Recommender System via Cooperative Training
arXiv admin note: text overlap with arXiv:2210.13762
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations. Among existing defense methods, data-processing-based methods inevitably exclude normal samples, while model-based methods struggle to enjoy both generalization and robustness. To this end, we suggest integrating data processing and the robust model to propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data and thereby improve recommendation robustness. Furthermore, considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems and introduce an efficient attack strategy, Co-training Attack (Co-Attack), which cooperatively optimizes the attack and model training, considering the bi-level setting while maintaining attack efficiency. Moreover, we reveal that a potential reason for the insufficient threat of existing attacks is their default assumption of optimizing attacks in undefended scenarios. This overly optimistic setting limits the potential of attacks. Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process, thoroughly exploring CoAttack's attack potential in the cooperative training of attack and defense. Extensive experiments on three real datasets demonstrate TCD's superiority in enhancing model robustness. Additionally, we verify that the two proposed attack strategies significantly outperform existing attacks, with game-based GCoAttack posing a greater poisoning threat than CoAttack.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 12:07:20 GMT" } ]
1,706,054,400,000
[ [ "Wang", "Qingyang", "" ], [ "Wu", "Chenwang", "" ], [ "Lian", "Defu", "" ], [ "Chen", "Enhong", "" ] ]
2401.12846
Lior Limonad
Dirk Fahland, Fabiana Fournier, Lior Limonad, Inna Skarbovsky, Ava J.E. Swevels
How well can large language models explain business processes?
39 pages, 12 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are likely to play a prominent role in future AI-augmented business process management systems (ABPMSs) catering for functionalities across all system lifecycle stages. One such system's functionality is Situation-Aware eXplainability (SAX), which relates to generating causally sound and yet human-interpretable explanations that take into account the process context in which the explained condition occurred. In this paper, we present the SAX4BPM framework developed to generate SAX explanations. The SAX4BPM suite consists of a set of services and a central knowledge repository. The functionality of these services is to elicit the various knowledge ingredients that underlie SAX explanations. A key innovative component among these ingredients is the causal process execution view. In this work, we integrate the framework with an LLM to leverage its power to synthesize the various input ingredients for the sake of improved SAX explanations. Since the use of LLMs for SAX is also accompanied by a certain degree of doubt related to their capacity to adequately fulfill SAX, along with their tendency for hallucination and lack of inherent capacity to reason, we pursued a methodological evaluation of the quality of the generated explanations. To this aim, we developed a designated scale and conducted a rigorous user study. Our findings show that the input presented to the LLMs aided in guard-railing their performance, yielding SAX explanations with better-perceived fidelity. This improvement is moderated by the perception of trust and curiosity. Moreover, this improvement comes at the cost of the perceived interpretability of the explanation.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 15:29:26 GMT" } ]
1,706,659,200,000
[ [ "Fahland", "Dirk", "" ], [ "Fournier", "Fabiana", "" ], [ "Limonad", "Lior", "" ], [ "Skarbovsky", "Inna", "" ], [ "Swevels", "Ava J. E.", "" ] ]
2401.12869
Zhiruo Wang
Zhiruo Wang, Daniel Fried, Graham Neubig
TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs. However, using primitive functions often leads to verbose and error-prone programs, and higher-level functions require expert design. To enable better solutions without human labor, we ask code LMs to curate reusable high-level functions, and use them to write solutions. We present TROVE, a training-free method of inducing a verifiable and efficient toolbox of functions, by generating via using, growing, and periodically trimming the toolbox. On 11 datasets from math, table question answering, and image reasoning tasks, TROVE consistently yields simpler solutions with higher accuracy than baselines using CODELLAMA and previous methods using GPT, while using 79-98% smaller toolboxes. TROVE further enables 31% faster and 13% more accurate human verification than baselines. With the same pipeline, it creates diverse functions for varied tasks and datasets, providing insights into their individual characteristics.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 16:03:17 GMT" } ]
1,706,054,400,000
[ [ "Wang", "Zhiruo", "" ], [ "Fried", "Daniel", "" ], [ "Neubig", "Graham", "" ] ]
2401.12917
Lancelot Da Costa
Lancelot Da Costa, Samuel Tenka, Dominic Zhao, Noor Sajid
Active Inference as a Model of Agency
Accepted in RLDM2022 for the workshop 'RL as a model of agency'
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Is there a canonical way to think of agency beyond reward maximisation? In this paper, we show that any type of behaviour complying with physically sound assumptions about how macroscopic biological agents interact with the world canonically integrates exploration and exploitation in the sense of minimising risk and ambiguity about states of the world. This description, known as active inference, refines the free energy principle, a popular descriptive framework for action and perception originating in neuroscience. Active inference provides a normative Bayesian framework to simulate and model agency that is widely used in behavioural neuroscience, reinforcement learning (RL) and robotics. The usefulness of active inference for RL is three-fold. \emph{a}) Active inference provides a principled solution to the exploration-exploitation dilemma that usefully simulates biological agency. \emph{b}) It provides an explainable recipe to simulate behaviour, whence behaviour follows as an explainable mixture of exploration and exploitation under a generative world model, and all differences in behaviour are explicit in differences in world model. \emph{c}) This framework is universal in the sense that it is theoretically possible to rewrite any RL algorithm conforming to the descriptive assumptions of active inference as an active inference algorithm. Thus, active inference can be used as a tool to uncover and compare the commitments and assumptions of more specific models of agency.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 17:09:25 GMT" } ]
1,706,054,400,000
[ [ "Da Costa", "Lancelot", "" ], [ "Tenka", "Samuel", "" ], [ "Zhao", "Dominic", "" ], [ "Sajid", "Noor", "" ] ]
2401.12920
Rei Tamaru
Rei Tamaru, Yang Cheng, Steven Parker, Ernie Perry, Bin Ran, Soyoung Ahn
Truck Parking Usage Prediction with Decomposed Graph Neural Networks
10 pages, 5 figures, 3 tables, Manuscript for IEEE Transactions on Intelligent Transportation Systems
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Truck parking on freight corridors faces various challenges, such as insufficient parking spaces and compliance with Hour-of-Service (HOS) regulations. These constraints often result in unauthorized parking practices, causing safety concerns. To enhance the safety of freight operations, providing accurate parking usage prediction proves to be a cost-effective solution. Despite the existing research demonstrating satisfactory accuracy for predicting individual truck parking site usage, few approaches have been proposed for predicting usage with spatial dependencies of multiple truck parking sites. We present the Regional Temporal Graph Neural Network (RegT-GCN) as a predictive framework for assessing parking usage across the entire state to provide better truck parking information and mitigate unauthorized parking. The framework leverages the topological structures of truck parking site distributions and historical parking data to predict occupancy rates across a state. To achieve this, we introduce a Regional Decomposition approach, which effectively captures the geographical characteristics. We also introduce the spatial module working efficiently with the temporal module. Evaluation results demonstrate that the proposed model surpasses other baseline models, improving the performance by more than $20\%$ compared with the original model. The proposed model allows truck parking sites' percipience of the topological structures and provides higher performance.
[ { "version": "v1", "created": "Tue, 23 Jan 2024 17:14:01 GMT" } ]
1,706,054,400,000
[ [ "Tamaru", "Rei", "" ], [ "Cheng", "Yang", "" ], [ "Parker", "Steven", "" ], [ "Perry", "Ernie", "" ], [ "Ran", "Bin", "" ], [ "Ahn", "Soyoung", "" ] ]
2401.13752
Hana Chockler
Hana Chockler and Joseph Y. Halpern
Explaining Image Classifiers
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We focus on explaining image classifiers, taking the work of Mothilal et al. [2021] (MMTS) as our point of departure. We observe that, although MMTS claim to be using the definition of explanation proposed by Halpern [2016], they do not quite do so. Roughly speaking, Halpern's definition has a necessity clause and a sufficiency clause. MMTS replace the necessity clause by a requirement that, as we show, implies it. Halpern's definition also allows agents to restrict the set of options considered. While these differences may seem minor, as we show, they can have a nontrivial impact on explanations. We also show that, essentially without change, Halpern's definition can handle two issues that have proved difficult for other approaches: explanations of absence (when, for example, an image classifier for tumors outputs "no tumor") and explanations of rare events (such as tumors).
[ { "version": "v1", "created": "Wed, 24 Jan 2024 19:12:38 GMT" } ]
1,706,227,200,000
[ [ "Chockler", "Hana", "" ], [ "Halpern", "Joseph Y.", "" ] ]
2401.13883
Ryo Kuroiwa
Ryo Kuroiwa, J. Christopher Beck
Domain-Independent Dynamic Programming
Manuscript submitted to Artificial Intelligence
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
For combinatorial optimization problems, model-based paradigms such as mixed-integer programming (MIP) and constraint programming (CP) aim to decouple modeling and solving a problem: the `holy grail' of declarative problem solving. We propose domain-independent dynamic programming (DIDP), a new model-based paradigm based on dynamic programming (DP). While DP is not new, it has typically been implemented as a problem-specific method. We introduce Dynamic Programming Description Language (DyPDL), a formalism to define DP models based on a state transition system, inspired by AI planning. We show that heuristic search algorithms can be used to solve DyPDL models and propose seven DIDP solvers. We experimentally compare our DIDP solvers with commercial MIP and CP solvers (solving MIP and CP models, respectively) on common benchmark instances of eleven combinatorial optimization problem classes. We show that DIDP outperforms MIP in nine problem classes, CP also in nine problem classes, and both MIP and CP in seven.
[ { "version": "v1", "created": "Thu, 25 Jan 2024 01:48:09 GMT" }, { "version": "v2", "created": "Fri, 31 May 2024 21:05:34 GMT" } ]
1,717,459,200,000
[ [ "Kuroiwa", "Ryo", "" ], [ "Beck", "J. Christopher", "" ] ]
2401.14153
Javier Carbo
J. Carbo, N. Sanchez, J. M. Molina
Agent-based Simulation with Netlogo to Evaluate AmI Scenarios
null
null
10.1057/jos.2016.10
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper an agent-based simulation is developed in order to evaluate an AmI scenario based on agents. Many AmI applications are implemented through agents but they are not compared to any other existing alternative in order to evaluate the relative benefits of using them. The proposed simulation environment, developed in NetLogo, analyses such benefits using two evaluation criteria: first, measuring agent satisfaction of different types of desires along the execution; second, measuring time savings obtained through a correct use of context information. So, here, a previously suggested agent architecture, an ontology and a 12-step protocol to provide AmI services in airports are evaluated using a NetLogo simulation environment. The present work uses a NetLogo model that considers the scalability problems of this application domain while using FIPA and BDI extensions to be coherent with our previous works and our previous JADE implementation of them. The NetLogo model presented simulates an airport with agent users passing through several zones located in a specific order on a map: passport controls, check-in counters of airline companies, boarding gates, and different types of shopping. Although initial data in simulations are generated randomly, and the model is just an approximation of real-world airports, the definition of this use case of Ambient Intelligence through NetLogo agents opens an interesting way to evaluate the benefits of using Ambient Intelligence, which is a significant contribution to its final development.
[ { "version": "v1", "created": "Thu, 25 Jan 2024 13:05:06 GMT" } ]
1,706,227,200,000
[ [ "Carbo", "J.", "" ], [ "Sanchez", "N.", "" ], [ "Molina", "J. M.", "" ] ]
2401.14511
Sascha Ossowski
Joaqu\'in Arias, Mar Moreno-Rebato, Jos\'e A. Rodr\'iguez-Garc\'ia, Sascha Ossowski
Automated legal reasoning with discretion to act using s(LAW)
null
Artificial Intelligence and Law (2023)
10.1007/s10506-023-09376-5
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Automated legal reasoning and its application in smart contracts and automated decisions are increasingly attracting interest. In this context, ethical and legal concerns make it necessary for automated reasoners to justify in human-understandable terms the advice given. Logic Programming, especially Answer Set Programming, has a rich semantics and has been used to very concisely express complex knowledge. However, modelling discretionality to act and other vague concepts such as ambiguity cannot be expressed in top-down execution models based on Prolog, and in bottom-up execution models based on ASP the justifications are incomplete and/or not scalable. We propose to use s(CASP), a top-down execution model for predicate ASP, to model vague concepts following a set of patterns. We have implemented a framework, called s(LAW), to model, reason, and justify the applicable legislation and validate it by translating (and benchmarking) a representative use case, the criteria for the admission of students in the "Comunidad de Madrid".
[ { "version": "v1", "created": "Thu, 25 Jan 2024 21:11:08 GMT" } ]
1,706,486,400,000
[ [ "Arias", "Joaquín", "" ], [ "Moreno-Rebato", "Mar", "" ], [ "Rodríguez-García", "José A.", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.14636
Felipe Trevizan
Johannes Schmalz, Felipe Trevizan
Efficient Constraint Generation for Stochastic Shortest Path Problems
Extended version of AAAI 2024 paper
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Current methods for solving Stochastic Shortest Path Problems (SSPs) find states' costs-to-go by applying Bellman backups, where state-of-the-art methods employ heuristics to select states to back up and prune. A fundamental limitation of these algorithms is their need to compute the cost-to-go for every applicable action during each state backup, leading to unnecessary computation for actions identified as sub-optimal. We present new connections between planning and operations research and, using this framework, we address this issue of unnecessary computation by introducing an efficient version of constraint generation for SSPs. This technique allows algorithms to ignore sub-optimal actions and avoid computing their costs-to-go. We also apply our novel technique to iLAO* resulting in a new algorithm, CG-iLAO*. Our experiments show that CG-iLAO* ignores up to 57% of iLAO*'s actions and it solves problems up to 8x and 3x faster than LRTDP and iLAO*.
[ { "version": "v1", "created": "Fri, 26 Jan 2024 04:00:07 GMT" } ]
1,706,486,400,000
[ [ "Schmalz", "Johannes", "" ], [ "Trevizan", "Felipe", "" ] ]
2401.14743
Takanori Ugai
Takanori Ugai, Shusaku Egami, Swe Nwe Nwe Htun, Kouji Kozaki, Takahiro Kawamura, Ken Fukuda
Synthetic Multimodal Dataset for Empowering Safety and Well-being in Home Environments
7 pages, 2 figures,4 tables
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents a synthetic multimodal dataset of daily activities that fuses video data from a 3D virtual space simulator with knowledge graphs depicting the spatiotemporal context of the activities. The dataset is developed for the Knowledge Graph Reasoning Challenge for Social Issues (KGRC4SI), which focuses on identifying and addressing hazardous situations in the home environment. The dataset is available to the public as a valuable resource for researchers and practitioners developing innovative solutions recognizing human behaviors to enhance safety and well-being in home environments.
[ { "version": "v1", "created": "Fri, 26 Jan 2024 10:05:41 GMT" } ]
1,706,486,400,000
[ [ "Ugai", "Takanori", "" ], [ "Egami", "Shusaku", "" ], [ "Htun", "Swe Nwe Nwe", "" ], [ "Kozaki", "Kouji", "" ], [ "Kawamura", "Takahiro", "" ], [ "Fukuda", "Ken", "" ] ]
2401.14933
Idoia Berges
Idoia Berges, Jes\'us Berm\'udez, Arantza Illarramendi
SSDOnt: an Ontology for representing Single-Subject Design Studies
This document is the Accepted Manuscript version of a Published Work that appeared in final form in Methods of Information in Medicine 57(01/02) : 55-61 (2018), copyright 2018 Schattauer. To access the final edited and published work see https://doi.org/10.3414/ME17-01-0109
Methods of Information in Medicine 57(01/02) : 55-61 (2018)
10.3414/ME17-01-0109
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Single-Subject Design is used in several areas such as education and biomedicine. However, no suitable formal vocabulary exists for annotating the detailed configuration and the results of this type of research studies with the appropriate granularity for looking for information about them. Therefore, the search for those study designs relies heavily on a syntactical search on the abstract, keywords or full text of the publications about the study, which entails some limitations. Objective: To present SSDOnt, a specific purpose ontology for describing and annotating single-subject design studies, so that complex questions can be asked about them afterwards. Methods: The ontology was developed following the NeOn methodology. Once the requirements of the ontology were defined, a formal model was described in a Description Logic and later implemented in the ontology language OWL 2 DL. Results: We show how the ontology provides a reference model with a suitable terminology for the annotation and searching of single-subject design studies and their main components, such as the phases, the intervention types, the outcomes and the results. Some mappings with terms of related ontologies have been established. We show as proof-of-concept that classes in the ontology can be easily extended to annotate more precise information about specific interventions and outcomes such as those related to autism. Moreover, we provide examples of some types of queries that can be posed to the ontology. Conclusions: SSDOnt has achieved the purpose of covering the descriptions of the domain of single-subject research studies.
[ { "version": "v1", "created": "Fri, 26 Jan 2024 15:11:31 GMT" } ]
1,706,486,400,000
[ [ "Berges", "Idoia", "" ], [ "Bermúdez", "Jesús", "" ], [ "Illarramendi", "Arantza", "" ] ]
2401.15188
Yixue Zhao
Sheng Yu, Narjes Nourzad, Randye J. Semple, Yixue Zhao, Emily Zhou, Bhaskar Krishnamachari
CAREForMe: Contextual Multi-Armed Bandit Recommendation Framework for Mental Health
MOBILESoft 2024
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic has intensified the urgency for effective and accessible mental health interventions in people's daily lives. Mobile Health (mHealth) solutions, such as AI Chatbots and Mindfulness Apps, have gained traction as they expand beyond traditional clinical settings to support daily life. However, the effectiveness of current mHealth solutions is impeded by the lack of context-awareness, personalization, and modularity to foster their reusability. This paper introduces CAREForMe, a contextual multi-armed bandit (CMAB) recommendation framework for mental health. Designed with context-awareness, personalization, and modularity at its core, CAREForMe harnesses mobile sensing and integrates online learning algorithms with user clustering capability to deliver timely, personalized recommendations. With its modular design, CAREForMe serves as both a customizable recommendation framework to guide future research, and a collaborative platform to facilitate interdisciplinary contributions in mHealth research. We showcase CAREForMe's versatility through its implementation across various platforms (e.g., Discord, Telegram) and its customization to diverse recommendation features.
[ { "version": "v1", "created": "Fri, 26 Jan 2024 20:18:25 GMT" } ]
1,706,572,800,000
[ [ "Yu", "Sheng", "" ], [ "Nourzad", "Narjes", "" ], [ "Semple", "Randye J.", "" ], [ "Zhao", "Yixue", "" ], [ "Zhou", "Emily", "" ], [ "Krishnamachari", "Bhaskar", "" ] ]
2401.15196
Jiachen Xi
Jiachen Xi, Alfredo Garcia, Petar Momcilovic
Regularized Q-Learning with Linear Function Approximation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several successful reinforcement learning algorithms make use of regularization to promote multi-modal policies that exhibit enhanced exploration and robustness. With function approximation, the convergence properties of some of these algorithms (e.g. soft Q-learning) are not well understood. In this paper, we consider a single-loop algorithm for minimizing the projected Bellman error with finite time convergence guarantees in the case of linear function approximation. The algorithm operates on two scales: a slower scale for updating the target network of the state-action values, and a faster scale for approximating the Bellman backups in the subspace of the span of basis vectors. We show that, under certain assumptions, the proposed algorithm converges to a stationary point in the presence of Markovian noise. In addition, we provide a performance guarantee for the policies derived from the proposed algorithm.
[ { "version": "v1", "created": "Fri, 26 Jan 2024 20:45:40 GMT" } ]
1,706,572,800,000
[ [ "Xi", "Jiachen", "" ], [ "Garcia", "Alfredo", "" ], [ "Momcilovic", "Petar", "" ] ]
2401.15443
Zibin Dong
Zibin Dong, Jianye Hao, Yifu Yuan, Fei Ni, Yitian Wang, Pengyi Li and Yan Zheng
DiffuserLite: Towards Real-time Diffusion Planning
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Diffusion planning has been recognized as an effective decision-making paradigm in various domains. The capability of conditionally generating high-quality long-horizon trajectories makes it a promising research direction. However, existing diffusion planning methods suffer from low decision-making frequencies due to the expensive iterative sampling cost. To address this issue, we introduce DiffuserLite, a super fast and lightweight diffusion planning framework. DiffuserLite employs a planning refinement process (PRP) to generate coarse-to-fine-grained trajectories, significantly reducing the modeling of redundant information and leading to notable increases in decision-making frequency. Our experimental results demonstrate that DiffuserLite achieves a decision-making frequency of $122$Hz ($112.7$x faster than previous mainstream frameworks) and reaches state-of-the-art performance on D4RL benchmarks. In addition, our neat DiffuserLite framework can serve as a flexible plugin to enhance decision frequency in other diffusion planning algorithms, providing a structural design reference for future works. More details and visualizations are available at https://diffuserlite.github.io/.
[ { "version": "v1", "created": "Sat, 27 Jan 2024 15:30:49 GMT" }, { "version": "v2", "created": "Tue, 30 Jan 2024 04:43:27 GMT" }, { "version": "v3", "created": "Wed, 31 Jan 2024 02:50:41 GMT" }, { "version": "v4", "created": "Fri, 2 Feb 2024 08:57:16 GMT" } ]
1,707,091,200,000
[ [ "Dong", "Zibin", "" ], [ "Hao", "Jianye", "" ], [ "Yuan", "Yifu", "" ], [ "Ni", "Fei", "" ], [ "Wang", "Yitian", "" ], [ "Li", "Pengyi", "" ], [ "Zheng", "Yan", "" ] ]
2401.15621
Sergey Zeltyn Dr.
Alon Oved, Segev Shlomov, Sergey Zeltyn, Nir Mashkif and Avi Yaeli
SNAP: Semantic Stories for Next Activity Prediction
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the next activity in an ongoing process is one of the most common classification tasks in the business process management (BPM) domain. It allows businesses to optimize resource allocation, enhance operational efficiency, and aids in risk mitigation and strategic decision-making. This provides a competitive edge in the rapidly evolving confluence of BPM and AI. Existing state-of-the-art AI models for business process prediction do not fully capitalize on available semantic information within process event logs. As current advanced AI-BPM systems provide semantically-richer textual data, the need for novel adequate models grows. To address this gap, we propose the novel SNAP method that leverages language foundation models by constructing semantic contextual stories from the process historical event logs and using them for the next activity prediction. We compared the SNAP algorithm with nine state-of-the-art models on six benchmark datasets and show that SNAP significantly outperforms them, especially for datasets with high levels of semantic content.
[ { "version": "v1", "created": "Sun, 28 Jan 2024 10:20:15 GMT" }, { "version": "v2", "created": "Thu, 14 Mar 2024 17:22:37 GMT" } ]
1,710,460,800,000
[ [ "Oved", "Alon", "" ], [ "Shlomov", "Segev", "" ], [ "Zeltyn", "Sergey", "" ], [ "Mashkif", "Nir", "" ], [ "Yaeli", "Avi", "" ] ]
2401.16045
Lingning Song
Lingning Song and Yi Zu and Shan Lu and Jieyue He
Type-based Neural Link Prediction Adapter for Complex Query Answering
11 pages, 3 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Answering complex logical queries on incomplete knowledge graphs (KGs) is a fundamental and challenging task in multi-hop reasoning. Recent work defines this task as an end-to-end optimization problem, which significantly reduces the training cost and enhances the generalization of the model by using pretrained link predictors for query answering. However, most existing proposals ignore the critical semantic knowledge inherently available in KGs, such as type information, which could help answer complex logical queries. To this end, we propose TypE-based Neural Link Prediction Adapter (TENLPA), a novel model that constructs type-based entity-relation graphs to discover the latent relationships between entities and relations by leveraging type information in KGs. Meanwhile, in order to effectively combine type information with complex logical queries, an adaptive learning mechanism is introduced, which is trained by back-propagating during the complex query answering process to achieve adaptive adjustment of neural link predictors. Experiments on 3 standard datasets show that the TENLPA model achieves state-of-the-art performance on complex query answering with good generalization and robustness.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 10:54:28 GMT" } ]
1,706,572,800,000
[ [ "Song", "Lingning", "" ], [ "Zu", "Yi", "" ], [ "Lu", "Shan", "" ], [ "He", "Jieyue", "" ] ]
2401.16119
Xuefeng Liang
Ying Zhou, Xuefeng Liang, Han Chen, Yin Zhao, Xin Chen, Lida Yu
Triple Disentangled Representation Learning for Multimodal Affective Analysis
14 pages, 6 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal learning has exhibited a significant advantage in affective analysis tasks owing to the comprehensive information of various modalities, particularly the complementary information. Thus, many emerging studies focus on disentangling the modality-invariant and modality-specific representations from input data and then fusing them for prediction. However, our study shows that modality-specific representations may contain information that is irrelevant or conflicting with the tasks, which downgrades the effectiveness of learned multimodal representations. We revisit the disentanglement issue, and propose a novel triple disentanglement approach, TriDiRA, which disentangles the modality-invariant, effective modality-specific and ineffective modality-specific representations from input data. By fusing only the modality-invariant and effective modality-specific representations, TriDiRA can significantly alleviate the impact of irrelevant and conflicting information across modalities during model training. Extensive experiments conducted on four benchmark datasets demonstrate the effectiveness and generalization of our triple disentanglement, which outperforms SOTA methods.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 12:45:27 GMT" }, { "version": "v2", "created": "Mon, 8 Apr 2024 08:19:19 GMT" } ]
1,712,620,800,000
[ [ "Zhou", "Ying", "" ], [ "Liang", "Xuefeng", "" ], [ "Chen", "Han", "" ], [ "Zhao", "Yin", "" ], [ "Chen", "Xin", "" ], [ "Yu", "Lida", "" ] ]
2401.16124
Klaus Strauch
Javier Romero, Torsten Schaub, Klaus Strauch
On the generalization of learned constraints for ASP solving in temporal domains
28 pages, 3 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The representation of a dynamic problem in ASP usually boils down to using copies of variables and constraints, one for each time stamp, no matter whether it is directly encoded or via an action or temporal language. The multiplication of variables and constraints is commonly done during grounding and the solver is completely ignorant about the temporal relationship among the different instances. On the other hand, a key factor in the performance of today's ASP solvers is conflict-driven constraint learning. Our question is now whether a constraint learned for particular time steps can be generalized and reused at other time stamps, and ultimately whether this enhances the overall solver performance on temporal problems. Knowing full well the domain of time, we study conditions under which learned dynamic constraints can be generalized. We propose a simple translation of the original logic program such that, for the translated programs, the learned constraints can be generalized to other time points. Additionally, we identify a property of temporal problems that allows us to generalize all learned constraints to all time steps. It turns out that this property is satisfied by many planning problems. Finally, we empirically evaluate the impact of adding the generalized constraints to an ASP solver.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 12:49:09 GMT" } ]
1,706,572,800,000
[ [ "Romero", "Javier", "" ], [ "Schaub", "Torsten", "" ], [ "Strauch", "Klaus", "" ] ]
2401.16270
Steven Schockaert
Victor Charpenay, Steven Schockaert
Capturing Knowledge Graphs and Rules with Octagon Embeddings
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Region based knowledge graph embeddings represent relations as geometric regions. This has the advantage that the rules which are captured by the model are made explicit, making it straightforward to incorporate prior knowledge and to inspect learned models. Unfortunately, existing approaches are severely restricted in their ability to model relational composition, and hence also their ability to model rules, thus failing to deliver on the main promise of region based models. With the aim of addressing these limitations, we investigate regions which are composed of axis-aligned octagons. Such octagons are particularly easy to work with, as intersections and compositions can be straightforwardly computed, while they are still sufficiently expressive to model arbitrary knowledge graphs. Among others, we also show that our octagon embeddings can properly capture a non-trivial class of rule bases. Finally, we show that our model achieves competitive experimental results.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 16:18:54 GMT" } ]
1,706,572,800,000
[ [ "Charpenay", "Victor", "" ], [ "Schockaert", "Steven", "" ] ]
2401.16398
Federico Malato
Federico Malato, Florian Leopold, Andrew Melnik, Ville Hautamaki
Zero-shot Imitation Policy via Search in Demonstration Dataset
null
null
10.1109/ICASSP48485.2024.10447339
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Behavioral cloning uses a dataset of demonstrations to learn a policy. To overcome computationally expensive training procedures and address the policy adaptation problem, we propose to use latent spaces of pre-trained foundation models to index a demonstration dataset, instantly access similar relevant experiences, and copy behavior from these situations. Actions from a selected similar situation can be performed by the agent until representations of the agent's current situation and the selected experience diverge in the latent space. Thus, we formulate our control problem as a dynamic search problem over a dataset of experts' demonstrations. We test our approach on the BASALT MineRL dataset in the latent representation of a Video Pre-Training model. We compare our model to state-of-the-art, Imitation Learning-based Minecraft agents. Our approach can effectively recover meaningful demonstrations and show human-like behavior of an agent in the Minecraft environment in a wide variety of scenarios. Experimental results reveal that our search-based approach clearly outperforms learning-based models in terms of accuracy and perceptual evaluation.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 18:38:29 GMT" } ]
1,712,620,800,000
[ [ "Malato", "Federco", "" ], [ "Leopold", "Florian", "" ], [ "Melnik", "Andrew", "" ], [ "Hautamaki", "Ville", "" ] ]
2401.16580
Jaejin Lee
Jaejin Lee, Seho Kee, Mani Janakiram and George Runger
Attention-based Reinforcement Learning for Combinatorial Optimization: Application to Job Shop Scheduling Problem
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Job shop scheduling problems represent a significant and complex facet of combinatorial optimization problems, which have traditionally been addressed through either exact or approximate solution methodologies. However, the practical application of these solutions is often challenged due to the complexity of real-world problems. Even when utilizing an approximate solution approach, the time required to identify a near-optimal solution can be prohibitively extensive, and the solutions derived are generally not applicable to new problems. This study proposes an innovative attention-based reinforcement learning method specifically designed for the category of job shop scheduling problems. This method integrates a policy gradient reinforcement learning approach with a modified transformer architecture. A key finding of this research is the ability of our trained learners within the proposed method to be repurposed for larger-scale problems that were not part of the initial training set. Furthermore, empirical evidence demonstrates that our approach surpasses the results of recent studies and outperforms commonly implemented heuristic rules. This suggests that our method offers a promising avenue for future research and practical application in the field of job shop scheduling problems.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 21:31:54 GMT" }, { "version": "v2", "created": "Mon, 18 Mar 2024 17:57:22 GMT" } ]
1,710,806,400,000
[ [ "Lee", "Jaejin", "" ], [ "Kee", "Seho", "" ], [ "Janakiram", "Mani", "" ], [ "Runger", "George", "" ] ]
2401.17436
Paolo Burelli
Jeppe Theiss Kristensen, Paolo Burelli
Difficulty Modelling in Mobile Puzzle Games: An Empirical Study on Different Methods to Combine Player Analytics and Simulated Data
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Difficulty is one of the key drivers of player engagement and it is often one of the aspects that designers tweak most to optimise the player experience; operationalising it is, therefore, a crucial task for game development studios. A common practice consists of creating metrics out of data collected by player interactions with the content; however, this allows for estimation only after the content is released and does not consider the characteristics of potential future players. In this article, we present a number of potential solutions for the estimation of difficulty under such conditions, and we showcase the results of a comparative study intended to understand which method and which types of data perform better in different scenarios. The results reveal that models trained on a combination of cohort statistics and simulated data produce the most accurate estimations of difficulty in all scenarios. Furthermore, among these models, artificial neural networks show the most consistent results.
[ { "version": "v1", "created": "Tue, 30 Jan 2024 20:51:42 GMT" } ]
1,706,745,600,000
[ [ "Kristensen", "Jeppe Theiss", "" ], [ "Burelli", "Paolo", "" ] ]
2401.17527
Haotian Ling
Haotian Ling, Zhihai Wang, Jie Wang
Learning to Stop Cut Generation for Efficient Mixed-Integer Linear Programming
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cutting planes (cuts) play an important role in solving mixed-integer linear programs (MILPs), as they significantly tighten the dual bounds and improve the solving performance. A key problem for cuts is when to stop cuts generation, which is important for the efficiency of solving MILPs. However, many modern MILP solvers employ hard-coded heuristics to tackle this problem, which tends to neglect underlying patterns among MILPs from certain applications. To address this challenge, we formulate the cuts generation stopping problem as a reinforcement learning problem and propose a novel hybrid graph representation model (HYGRO) to learn effective stopping strategies. An appealing feature of HYGRO is that it can effectively capture both the dynamic and static features of MILPs, enabling dynamic decision-making for the stopping strategies. To the best of our knowledge, HYGRO is the first data-driven method to tackle the cuts generation stopping problem. By integrating our approach with modern solvers, experiments demonstrate that HYGRO significantly improves the efficiency of solving MILPs compared to competitive baselines, achieving up to 31% improvement.
[ { "version": "v1", "created": "Wed, 31 Jan 2024 01:09:40 GMT" }, { "version": "v2", "created": "Fri, 2 Feb 2024 05:54:58 GMT" } ]
1,707,091,200,000
[ [ "Ling", "Haotian", "" ], [ "Wang", "Zhihai", "" ], [ "Wang", "Jie", "" ] ]
2401.17710
Pakizar Shamoi Dr
Ayana Adilova and Pakizar Shamoi
Aesthetic Preference Prediction in Interior Design: Fuzzy Approach
Submitted to IEEE conference for consideration
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interior design is all about creating spaces that look and feel good. However, the subjective nature of aesthetic preferences presents a significant challenge in defining and quantifying what makes an interior design visually appealing. The current paper addresses this gap by introducing a novel methodology for quantifying and predicting aesthetic preferences in interior design. Our study combines fuzzy logic with image processing techniques. We collected a dataset of interior design images from social media platforms, focusing on essential visual attributes such as color harmony, lightness, and complexity. We integrate these features using a weighted average to compute a general aesthetic score. Our approach considers individual color preferences in calculating the overall aesthetic preference. We initially gather user ratings for primary colors like red, brown, and others to understand their preferences. Then, we use the pixel count of the top five dominant colors in the image to get the color scheme preference. The color scheme preference and the aesthetic score are then passed as inputs to the fuzzy inference system to calculate an overall preference score. This score represents a comprehensive measure of the user's preference for a particular interior design, considering their color choices and general aesthetic appeal. We used the 2AFC (Two-Alternative Forced Choice) method to validate our methodology, achieving a notable hit rate of 0.7. This study can help designers and professionals better understand and meet people's interior design preferences, especially in a world that relies heavily on digital media.
[ { "version": "v1", "created": "Wed, 31 Jan 2024 09:59:59 GMT" } ]
1,706,745,600,000
[ [ "Adilova", "Ayana", "" ], [ "Shamoi", "Pakizar", "" ] ]
2401.17749
Xiao Shao
Xiao Shao, Weifu Jiang, Fei Zuo, Mengqing Liu
SwarmBrain: Embodied agent for real-time strategy game StarCraft II via large language models
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have recently garnered significant accomplishments in various exploratory tasks, even surpassing the performance of traditional reinforcement learning-based methods that have historically dominated the agent-based field. The purpose of this paper is to investigate the efficacy of LLMs in executing real-time strategy war tasks within the StarCraft II gaming environment. In this paper, we introduce SwarmBrain, an embodied agent leveraging LLM for real-time strategy implementation in the StarCraft II game environment. The SwarmBrain comprises two key components: 1) an Overmind Intelligence Matrix, powered by state-of-the-art LLMs, designed to orchestrate macro-level strategies from a high-level perspective. This matrix emulates the overarching consciousness of the Zerg intelligence brain, synthesizing strategic foresight with the aim of allocating resources, directing expansion, and coordinating multi-pronged assaults. 2) a Swarm ReflexNet, which is an agile counterpart to the calculated deliberation of the Overmind Intelligence Matrix. Due to the inherent latency in LLM reasoning, the Swarm ReflexNet employs a condition-response state machine framework, enabling expedited tactical responses for fundamental Zerg unit maneuvers. In the experimental setup, SwarmBrain is in control of the Zerg race in confrontation with a computer-controlled Terran adversary. Experimental results show the capacity of SwarmBrain to conduct economic augmentation, territorial expansion, and tactical formulation, and show that SwarmBrain is capable of achieving victory against computer players set at different difficulty levels.
[ { "version": "v1", "created": "Wed, 31 Jan 2024 11:14:29 GMT" } ]
1,706,745,600,000
[ [ "Shao", "Xiao", "" ], [ "Jiang", "Weifu", "" ], [ "Zuo", "Fei", "" ], [ "Liu", "Mengqing", "" ] ]
2401.17783
Mar\'ia Asunci\'on Padilla Rasc\'on
M.A. Padilla-Rascon, P. Gonzalez, C.J. Carmona
SDRDPy: An application to graphically visualize the knowledge obtained with supervised descriptive rule algorithms
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
SDRDPy is a desktop application that provides experts with an intuitive graphical and tabular representation of the knowledge extracted by any supervised descriptive rule discovery algorithm. The application is able to provide an analysis of the data, showing the relevant information of the data set and the relationship between the rules, the data and the quality measures associated with each rule, regardless of the tool in which the algorithm has been executed. All of the information is presented in a user-friendly application in order to facilitate expert analysis and the export of reports in different formats.
[ { "version": "v1", "created": "Wed, 31 Jan 2024 12:26:59 GMT" } ]
1,706,745,600,000
[ [ "Padilla-Rascon", "M. A.", "" ], [ "Gonzalez", "P.", "" ], [ "Carmona", "C. J.", "" ] ]
2402.00048
Bruno Sartini
Bruno Sartini
IICONGRAPH: improved Iconographic and Iconological Statements in Knowledge Graphs
18 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Iconography and iconology are fundamental domains when it comes to understanding artifacts of cultural heritage. Iconography deals with the study and interpretation of visual elements depicted in artifacts and their symbolism, while iconology delves deeper, exploring the underlying cultural and historical meanings. Despite the advances in representing cultural heritage with Linked Open Data (LOD), recent studies show persistent gaps in the representation of iconographic and iconological statements in current knowledge graphs (KGs). To address them, this paper presents IICONGRAPH, a KG that was created by refining and extending the iconographic and iconological statements of ArCo (the Italian KG of cultural heritage) and Wikidata. The development of IICONGRAPH was also driven by a series of requirements emerging from research case studies that were unattainable in the non-reengineered versions of the KGs. The evaluation results demonstrate that IICONGRAPH not only outperforms ArCo and Wikidata through domain-specific assessments from the literature but also serves as a robust platform for addressing the formulated research questions. IICONGRAPH is released and documented in accordance with the FAIR principles to guarantee the resource's reusability. The algorithms used to create it and assess the research questions have also been made available to ensure transparency and reproducibility. While future work focuses on ingesting more data into the KG, and on implementing it as a backbone of LLM-based question answering systems, the current version of IICONGRAPH still emerges as a valuable asset, contributing to the evolving landscape of cultural heritage representation within Knowledge Graphs, the Semantic Web, and beyond.
[ { "version": "v1", "created": "Wed, 24 Jan 2024 15:44:16 GMT" } ]
1,706,832,000,000
[ [ "Sartini", "Bruno", "" ] ]
2402.00064
Javier Carbo
Javier Carbo, Jose M Molina, Miguel A Patricio
Merging plans with incomplete knowledge about actions and goals through an agent-based reputation system
null
null
10.1016/j.eswa.2018.07.062
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Managing transition plans is one of the major problems of people with cognitive disabilities. Therefore, finding an automated way to generate such plans would be a helpful tool for this community. In this paper we have specifically proposed and compared different alternative ways to merge plans formed by sequences of actions, with unknown similarities between goals and actions, executed by several operator agents that cooperate by applying such actions over some passive elements (node agents) that require additional executions of another plan after some time of use. Such ignorance of the similarities between plan actions and goals would justify the use of a distributed recommendation system that would provide a useful plan to be applied for a certain goal to a given operator agent, generated from the known results of previous executions of different plans by other operator agents. Here we provide the general framework of execution (agent system), and the different merging algorithms applied to this problem. The proposed agent system would act as a useful cognitive assistant for people with intellectual disabilities such as autism.
[ { "version": "v1", "created": "Mon, 29 Jan 2024 11:34:59 GMT" } ]
1,706,832,000,000
[ [ "Carbo", "Javier", "" ], [ "Molina", "Jose M", "" ], [ "Patricio", "Miguel A", "" ] ]
2402.00076
Daniel Karapetyan Dr
Sahil Patel and Daniel Karapetyan
Exploitation Strategies in Conditional Markov Chain Search: A case study on the three-index assignment problem
14 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Conditional Markov Chain Search (CMCS) is a framework for automated design of metaheuristics for discrete combinatorial optimisation problems. Given a set of algorithmic components such as hill climbers and mutations, CMCS decides in which order to apply those components. The decisions are dictated by the CMCS configuration that can be learnt offline. CMCS does not have an acceptance criterion; any moves are accepted by the framework. As a result, it is particularly good at exploration but not as good at exploitation. In this study, we explore several extensions of the framework to improve its exploitation abilities. To perform a computational study, we applied the framework to the three-index assignment problem. The results of our experiments showed that a two-stage CMCS is indeed superior to a single-stage CMCS.
[ { "version": "v1", "created": "Tue, 30 Jan 2024 22:13:46 GMT" } ]
1,706,832,000,000
[ [ "Patel", "Sahil", "" ], [ "Karapetyan", "Daniel", "" ] ]
2402.00083
Kenya Andrews
Kenya Andrews and Mesrob Ohannessian and Tanya Berger-Wolf
Modeling Access Differences to Reduce Disparity in Resource Allocation
Association for Computing Machinery (2022)
null
10.1145/3551624.3555302
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Motivated by COVID-19 vaccine allocation, where vulnerable subpopulations are simultaneously more impacted in terms of health and more disadvantaged in terms of access to the vaccine, we formalize and study the problem of resource allocation when there are inherent access differences that correlate with advantage and disadvantage. We identify reducing resource disparity as a key goal in this context and show its role as a proxy to more nuanced downstream impacts. We develop a concrete access model that helps quantify how a given allocation translates to resource flow for the advantaged vs. the disadvantaged, based on the access gap between them. We then provide a methodology for access-aware allocation. Intuitively, the resulting allocation leverages more vaccines in locations with higher vulnerable populations to mitigate the access gap and reduce overall disparity. Surprisingly, knowledge of the access gap is often not needed to perform access-aware allocation. To support this formalism, we provide empirical evidence for our access model and show that access-aware allocation can significantly reduce resource disparity and thus improve downstream outcomes. We demonstrate this at various scales, including at county, state, national, and global levels.
[ { "version": "v1", "created": "Wed, 31 Jan 2024 05:25:12 GMT" } ]
1,706,832,000,000
[ [ "Andrews", "Kenya", "" ], [ "Ohannessian", "Mesrob", "" ], [ "Berger-Wolf", "Tanya", "" ] ]
2402.00262
Qun Ma
Qun Ma, Xiao Xue, Deyu Zhou, Xiangning Yu, Donghua Liu, Xuwen Zhang, Zihan Zhao, Yifan Shen, Peilin Ji, Juanjuan Li, Gang Wang, Wanpeng Ma
Computational Experiments Meet Large Language Model Based Agents: A Survey and Perspective
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational experiments have emerged as a valuable method for studying complex systems, involving the algorithmization of counterfactuals. However, accurately representing real social systems in Agent-based Modeling (ABM) is challenging due to the diverse and intricate characteristics of humans, including bounded rationality and heterogeneity. To address this limitation, the integration of Large Language Models (LLMs) has been proposed, enabling agents to possess anthropomorphic abilities such as complex reasoning and autonomous learning. These agents, known as LLM-based Agents, offer the potential to enhance the anthropomorphism lacking in ABM. Nonetheless, the absence of explicit explainability in LLMs significantly hinders their application in the social sciences. Conversely, computational experiments excel in providing causal analysis of individual behaviors and complex phenomena. Thus, combining computational experiments with LLM-based Agents holds substantial research potential. This paper presents a comprehensive exploration of this fusion. First, it outlines the historical development of agent structures and their evolution into artificial societies, emphasizing their importance in computational experiments. It then elucidates the advantages that computational experiments and LLM-based Agents offer each other, considering the perspectives of LLM-based Agents for computational experiments and vice versa. Finally, the paper addresses the challenges and future trends in this research domain, offering guidance for subsequent related studies.
[ { "version": "v1", "created": "Thu, 1 Feb 2024 01:17:46 GMT" } ]
1,706,832,000,000
[ [ "Ma", "Qun", "" ], [ "Xue", "Xiao", "" ], [ "Zhou", "Deyu", "" ], [ "Yu", "Xiangning", "" ], [ "Liu", "Donghua", "" ], [ "Zhang", "Xuwen", "" ], [ "Zhao", "Zihan", "" ], [ "Shen", "Yifan", "" ], [ "Ji", "Peilin", "" ], [ "Li", "Juanjuan", "" ], [ "Wang", "Gang", "" ], [ "Ma", "Wanpeng", "" ] ]
2402.00468
Biswajit Sadhu
Biswajit Sadhu, Trijit Sadhu, S. Anand
RadDQN: a Deep Q Learning-based Architecture for Finding Time-efficient Minimum Radiation Exposure Pathway
12 pages, 7 main figures, code link (GitHub)
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent advancements in deep reinforcement learning (DRL) techniques have sparked multifaceted applications in the automation sector. The ability of DRL to manage complex decision-making problems encourages its use in the nuclear industry for tasks such as optimizing radiation exposure to personnel during normal operating conditions and potential accidental scenarios. However, the lack of an efficient reward function and an effective exploration strategy has thwarted its implementation in the development of radiation-aware autonomous unmanned aerial vehicles (UAVs) for achieving maximum radiation protection. In this article, we address these issues and introduce a deep Q-learning based architecture (RadDQN) that operates on a radiation-aware reward function to provide a time-efficient, minimum-radiation-exposure pathway in a radiation zone. We propose a set of unique exploration strategies that fine-tune the extent of exploration and exploitation based on the state-wise variation in radiation exposure during training. Further, we benchmark the predicted path against a grid-based deterministic method. We demonstrate that the formulated reward function, in conjunction with an adequate exploration strategy, is effective in handling several scenarios with drastically different radiation field distributions. Compared to vanilla DQN, our model achieves a superior convergence rate and higher training stability.
[ { "version": "v1", "created": "Thu, 1 Feb 2024 10:15:39 GMT" } ]
1,706,832,000,000
[ [ "Sadhu", "Biswajit", "" ], [ "Sadhu", "Trijit", "" ], [ "Anand", "S.", "" ] ]
2402.00591
Nicolas Lazzari
Nicolas Lazzari, Stefano De Giorgis, Aldo Gangemi, Valentina Presutti
Sandra -- A Neuro-Symbolic Reasoner Based On Descriptions And Situations
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents sandra, a neuro-symbolic reasoner combining vectorial representations with deductive reasoning. Sandra builds a vector space constrained by an ontology and performs reasoning over it. The geometric nature of the reasoner allows its combination with neural networks, bridging the gap with symbolic knowledge representations. Sandra is based on the Description and Situation (DnS) ontology design pattern, a formalization of frame semantics. Given a set of facts (a situation), it can infer all possible perspectives (descriptions) that provide a plausible interpretation for it, even in the presence of incomplete information. We prove that our method is correct with respect to the DnS model. We experiment with two different tasks and their standard benchmarks, demonstrating that, without increasing complexity, sandra (i) outperforms all the baselines, (ii) provides interpretability in the classification process, and (iii) allows control over the vector space, which is designed a priori.
[ { "version": "v1", "created": "Thu, 1 Feb 2024 13:37:53 GMT" }, { "version": "v2", "created": "Fri, 2 Feb 2024 08:58:41 GMT" }, { "version": "v3", "created": "Mon, 25 Mar 2024 10:52:20 GMT" } ]
1,711,411,200,000
[ [ "Lazzari", "Nicolas", "" ], [ "De Giorgis", "Stefano", "" ], [ "Gangemi", "Aldo", "" ], [ "Presutti", "Valentina", "" ] ]
2402.00738
Guangzheng Hu
Guangzheng Hu, Yuanheng Zhu, Haoran Li, Dongbin Zhao
FM3Q: Factorized Multi-Agent MiniMax Q-Learning for Two-Team Zero-Sum Markov Game
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world applications involve agents that fall into two teams, with payoffs that are equal within a team but of opposite sign across the opposing team. In recent years, such two-team zero-sum Markov games (2t0sMGs) have been addressed with reinforcement learning. However, existing methods remain inefficient owing to insufficient consideration of intra-team credit assignment, poor data utilization and computational intractability. In this paper, we propose the individual-global-minimax (IGMM) principle to ensure coherence between two-team minimax behaviors and individual greedy behaviors through Q functions in 2t0sMGs. Based on it, we present a novel multi-agent reinforcement learning framework, Factorized Multi-Agent MiniMax Q-Learning (FM3Q), which factorizes the joint minimax Q function into individual ones and iteratively solves for the IGMM-satisfying minimax Q functions for 2t0sMGs. Moreover, an online learning algorithm with neural networks is proposed to implement FM3Q and obtain deterministic and decentralized minimax policies for two-team players. A theoretical analysis is provided to prove the convergence of FM3Q. Empirically, we use three environments to evaluate the learning efficiency and final performance of FM3Q and show its superiority on 2t0sMGs.
[ { "version": "v1", "created": "Thu, 1 Feb 2024 16:37:21 GMT" } ]
1,706,832,000,000
[ [ "Hu", "Guangzheng", "" ], [ "Zhu", "Yuanheng", "" ], [ "Li", "Haoran", "" ], [ "Zhao", "Dongbin", "" ] ]
2402.00901
Alex Grzankowski
Alex Grzankowski
Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, "Black-box Interpretability", is wrongheaded. But there is a better way. There is an exciting and emerging discipline of "Inner Interpretability" (and specifically Mechanistic Interpretability) that aims to uncover the internal activations and weights of models in order to understand what they represent and the algorithms they implement. In my view, a crucial mistake in Black-box Interpretability is the failure to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I can't pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. So the conclusion is modest, but the important point in my view is seeing how to get the research on the right track. Towards the end of the paper, I will show how some of the philosophical concepts can be used to further refine how Inner Interpretability is approached, so the paper helps draw out a profitable, future two-way exchange between Philosophers and Computer Scientists.
[ { "version": "v1", "created": "Wed, 31 Jan 2024 23:22:13 GMT" } ]
1,707,091,200,000
[ [ "Grzankowski", "Alex", "" ] ]
2402.01276
Jiaqi Shao
Jiaqi Shao, Tao Lin, Xuanyu Cao, Bing Luo
Federated Unlearning: a Perspective of Stability and Fairness
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the multifaceted consequences of federated unlearning (FU) with data heterogeneity. We introduce key metrics for FU assessment, concentrating on verification, global stability, and local fairness, and investigate the inherent trade-offs. Furthermore, we formulate the unlearning process with data heterogeneity through an optimization framework. Our key contribution is a comprehensive theoretical analysis of the trade-offs in FU, which provides insights into data heterogeneity's impacts on FU. Leveraging these insights, we propose FU mechanisms to manage the trade-offs, guiding further development of FU mechanisms. We empirically validate that our FU mechanisms effectively balance trade-offs, confirming insights derived from our theoretical analysis.
[ { "version": "v1", "created": "Fri, 2 Feb 2024 10:05:25 GMT" }, { "version": "v2", "created": "Mon, 5 Feb 2024 16:11:29 GMT" }, { "version": "v3", "created": "Mon, 12 Feb 2024 05:00:44 GMT" }, { "version": "v4", "created": "Sat, 1 Jun 2024 15:18:50 GMT" } ]
1,717,459,200,000
[ [ "Shao", "Jiaqi", "" ], [ "Lin", "Tao", "" ], [ "Cao", "Xuanyu", "" ], [ "Luo", "Bing", "" ] ]
2402.01499
Willem van der Maden
Willem van der Maden, Derek Lomas, Paul Hekkert
Developing and Evaluating a Design Method for Positive Artificial Intelligence
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
As artificial intelligence (AI) continues advancing, ensuring positive societal impacts becomes critical, especially as AI systems become increasingly ubiquitous in various aspects of life. However, developing "AI for good" poses substantial challenges around aligning systems with complex human values. Presently, we lack mature methods for addressing these challenges. This article presents and evaluates the Positive AI design method aimed at addressing this gap. The method provides a human-centered process to translate wellbeing aspirations into concrete practices. First, we explain the method's four key steps: contextualizing, operationalizing, optimizing, and implementing wellbeing supported by continuous measurement for feedback cycles. We then present a multiple case study where novice designers applied the method, revealing strengths and weaknesses related to efficacy and usability. Next, an expert evaluation study assessed the quality of the resulting concepts, rating them moderately high for feasibility, desirability, and plausibility of achieving intended wellbeing benefits. Together, these studies provide preliminary validation of the method's ability to improve AI design, while surfacing areas needing refinement like developing support for complex steps. Proposed adaptations such as examples and evaluation heuristics could address weaknesses. Further research should examine sustained application over multiple projects. This human-centered approach shows promise for realizing the vision of 'AI for Wellbeing' that does not just avoid harm, but actively benefits humanity.
[ { "version": "v1", "created": "Fri, 2 Feb 2024 15:31:08 GMT" }, { "version": "v2", "created": "Mon, 4 Mar 2024 12:52:13 GMT" } ]
1,709,596,800,000
[ [ "van der Maden", "Willem", "" ], [ "Lomas", "Derek", "" ], [ "Hekkert", "Paul", "" ] ]
2402.01602
Debarun Bhattacharjya
Debarun Bhattacharjya, Junkyu Lee, Don Joven Agravante, Balaji Ganesan, Radu Marinescu
Foundation Model Sherpas: Guiding Foundation Models through Knowledge and Reasoning
9 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Foundation models (FMs) such as large language models have revolutionized the field of AI by showing remarkable performance in various tasks. However, they exhibit numerous limitations that prevent their broader adoption in many real-world systems, which often require a higher bar for trustworthiness and usability. Since FMs are trained using loss functions aimed at reconstructing the training corpus in a self-supervised manner, there is no guarantee that the model's output aligns with users' preferences for a specific task at hand. In this survey paper, we propose a conceptual framework that encapsulates different modes by which agents could interact with FMs and guide them suitably for a set of tasks, particularly through knowledge augmentation and reasoning. Our framework elucidates agent role categories such as updating the underlying FM, assisting with prompting the FM, and evaluating the FM output. We also categorize several state-of-the-art approaches into agent interaction protocols, highlighting the nature and extent of involvement of the various agent roles. The proposed framework provides guidance for future directions to further realize the power of FMs in practical AI systems.
[ { "version": "v1", "created": "Fri, 2 Feb 2024 18:00:35 GMT" } ]
1,707,091,200,000
[ [ "Bhattacharjya", "Debarun", "" ], [ "Lee", "Junkyu", "" ], [ "Agravante", "Don Joven", "" ], [ "Ganesan", "Balaji", "" ], [ "Marinescu", "Radu", "" ] ]
2402.03640
Abdelrahman Hosny
Abdelrahman Hosny, Sherief Reda
torchmSAT: A GPU-Accelerated Approximation To The Maximum Satisfiability Problem
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The remarkable achievements of machine learning techniques in analyzing discrete structures have drawn significant attention towards their integration into combinatorial optimization algorithms. Typically, these methodologies improve existing solvers by injecting learned models within the solving loop to enhance the efficiency of the search process. In this work, we derive a single differentiable function capable of approximating solutions for the Maximum Satisfiability Problem (MaxSAT). We then present a novel neural network architecture to model our differentiable function, and progressively solve MaxSAT using backpropagation. This approach eliminates the need for labeled data or a neural network training phase, as the training process functions as the solving algorithm. Additionally, we leverage the computational power of GPUs to accelerate these computations. Experimental results on challenging MaxSAT instances show that our proposed methodology outperforms two existing MaxSAT solvers, and is on par with another in terms of solution cost, without necessitating any training or access to an underlying SAT solver. Given that numerous NP-hard problems can be reduced to MaxSAT, our novel technique paves the way for a new generation of solvers poised to benefit from neural network GPU acceleration.
[ { "version": "v1", "created": "Tue, 6 Feb 2024 02:33:00 GMT" } ]
1,707,264,000,000
[ [ "Hosny", "Abdelrahman", "" ], [ "Reda", "Sherief", "" ] ]
2402.03824
Giuseppe Paolo Dr
Giuseppe Paolo, Jonas Gonzalez-Billandon, Bal\'azs K\'egl
A call for embodied AI
Published in ICML 2024 Position paper track
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Embodied AI (EAI) as the next fundamental step in the pursuit of Artificial General Intelligence, juxtaposing it against current AI advancements, particularly Large Language Models. We traverse the evolution of the embodiment concept across diverse fields - philosophy, psychology, neuroscience, and robotics - to highlight how EAI distinguishes itself from the classical paradigm of static learning. By broadening the scope of Embodied AI, we introduce a theoretical framework based on cognitive architectures, emphasizing perception, action, memory, and learning as essential components of an embodied agent. This framework is aligned with Friston's active inference principle, offering a comprehensive approach to EAI development. Despite the progress made in the field of AI, substantial challenges, such as the formulation of a novel AI learning theory and the innovation of advanced hardware, persist. Our discussion lays down a foundational guideline for future Embodied AI research. Highlighting the importance of creating Embodied AI agents capable of seamless communication, collaboration, and coexistence with humans and other intelligent entities within real-world environments, we aim to steer the AI community towards addressing the multifaceted challenges and seizing the opportunities that lie ahead in the quest for AGI.
[ { "version": "v1", "created": "Tue, 6 Feb 2024 09:11:20 GMT" }, { "version": "v2", "created": "Tue, 28 May 2024 15:07:37 GMT" } ]
1,716,940,800,000
[ [ "Paolo", "Giuseppe", "" ], [ "Gonzalez-Billandon", "Jonas", "" ], [ "Kégl", "Balázs", "" ] ]
2402.04338
Islambek Saymanov
Islambek Saymanov
Logical recognition method for solving the problem of identification in the Internet of Things
I will rework and improve it and post it again
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recently emerged area of application for methods of the algebra of logic and k-valued logic is the recognition of a variety of objects and phenomena, medical or technical diagnostics, the construction of modern machines, the checking of test problems, and similar tasks, all of which can be reduced to constructing an optimal extension of a logical function to the entire feature space. For example, logical recognition systems build their recognition algorithms using logical methods based on discrete analysis and the propositional calculus built on it. In the general case, a logical recognition method assumes the presence of logical connections, expressed by the optimal continuation of a k-valued function over the entire feature space, in which the variables are the logical features of the objects or phenomena being recognized. The goal of this work is to develop a logical method for object recognition based on a reference table with logical features and classes of non-intersecting objects, which are specified as vectors from a given feature space. The method treats the reference table as a logical function that is not defined everywhere and constructs an optimal continuation of this function to the entire feature space, which determines the extension of the classes to the entire space.
[ { "version": "v1", "created": "Tue, 6 Feb 2024 19:20:58 GMT" }, { "version": "v2", "created": "Tue, 13 Feb 2024 16:05:50 GMT" } ]
1,707,868,800,000
[ [ "Saymanov", "Islambek", "" ] ]
2402.04370
Yueyang Wang
Yueyang Wang, Aravinda Ramakrishnan Srinivasan, Jussi P.P. Jokinen, Antti Oulasvirta, Gustav Markkula
Pedestrian crossing decisions can be explained by bounded optimal decision-making under noisy visual perception
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents a model of pedestrian crossing decisions, based on the theory of computational rationality. It is assumed that crossing decisions are boundedly optimal, with bounds on optimality arising from human cognitive limitations. While previous models of pedestrian behaviour have been either 'black-box' machine learning models or mechanistic models with explicit assumptions about cognitive factors, we combine both approaches. Specifically, we model mechanistically noisy human visual perception and assumed rewards in crossing, but we use reinforcement learning to learn bounded optimal behaviour policy. The model reproduces a larger number of known empirical phenomena than previous models, in particular: (1) the effect of the time to arrival of an approaching vehicle on whether the pedestrian accepts the gap, the effect of the vehicle's speed on both (2) gap acceptance and (3) pedestrian timing of crossing in front of yielding vehicles, and (4) the effect on this crossing timing of the stopping distance of the yielding vehicle. Notably, our findings suggest that behaviours previously framed as 'biases' in decision-making, such as speed-dependent gap acceptance, might instead be a product of rational adaptation to the constraints of visual perception. Our approach also permits fitting the parameters of cognitive constraints and rewards per individual, to better account for individual differences. To conclude, by leveraging both RL and mechanistic modelling, our model offers novel insights about pedestrian behaviour, and may provide a useful foundation for more accurate and scalable pedestrian models.
[ { "version": "v1", "created": "Tue, 6 Feb 2024 20:13:34 GMT" } ]
1,707,350,400,000
[ [ "Wang", "Yueyang", "" ], [ "Srinivasan", "Aravinda Ramakrishnan", "" ], [ "Jokinen", "Jussi P. P.", "" ], [ "Oulasvirta", "Antti", "" ], [ "Markkula", "Gustav", "" ] ]
2402.04382
Sopam Dasgupta
Sopam Dasgupta, Farhad Shakerin, Joaqu\'in Arias, Elmer Salazar, Gopal Gupta
Counterfactual Generation with Answer Set Programming
16 Pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine learning models that automate decision-making are increasingly being used in consequential areas such as loan approvals, pretrial bail approval, hiring, and many more. Unfortunately, most of these models are black-boxes, i.e., they are unable to reveal how they reach these prediction decisions. A need for transparency demands justification for such predictions. An affected individual might also desire explanations to understand why a decision was made. Ethical and legal considerations may further require informing the individual of changes in the input attributes that could be made to produce a desirable outcome. This paper focuses on the latter problem of automatically generating counterfactual explanations. We propose a framework, Counterfactual Generation with s(CASP) (CFGS), that utilizes answer set programming (ASP) and the s(CASP) goal-directed ASP system to automatically generate counterfactual explanations from rules generated by rule-based machine learning (RBML) algorithms. In our framework, we show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered. More importantly, we show how we can navigate between these worlds, namely, go from our original world/scenario, where we obtain an undesired outcome, to the imagined world/scenario, where we obtain a desired/favourable outcome.
[ { "version": "v1", "created": "Tue, 6 Feb 2024 20:39:49 GMT" } ]
1,707,350,400,000
[ [ "Dasgupta", "Sopam", "" ], [ "Shakerin", "Farhad", "" ], [ "Arias", "Joaquín", "" ], [ "Salazar", "Elmer", "" ], [ "Gupta", "Gopal", "" ] ]
2402.04938
Luis Costero
Jennifer Hern\'andez-B\'ecares, Luis Costero, Pedro Pablo G\'omez-Mart\'in
An approach to automated videogame beta testing
null
Entertainment Computing, Elsevier. 18. pp 79 to 92. (2017)
10.1016/j.entcom.2016.08.002
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Videogames developed in the 1970s and 1980s were modest programs created in a couple of months by a single person, who played the roles of designer, artist and programmer. Since then, videogames have evolved to become a multi-million dollar industry. Today, AAA game development involves hundreds of people working together over several years. Management and engineering requirements have changed at the same pace. Although many of the processes have been adapted over time, this is not quite true for quality assurance tasks, which are still done mainly manually by human beta testers due to the specific peculiarities of videogames. This paper presents an approach to automate this beta testing.
[ { "version": "v1", "created": "Wed, 7 Feb 2024 15:16:21 GMT" } ]
1,707,350,400,000
[ [ "Hernández-Bécares", "Jennifer", "" ], [ "Costero", "Luis", "" ], [ "Gómez-Martín", "Pedro Pablo", "" ] ]
2402.05048
Leonardo Bezerra
Leonardo C. T. Bezerra, Alexander E. I. Brownlee, Luana Ferraz Alvarenga, Renan Cipriano Moioli, Thais Vasconcelos Batista
How VADER is your AI? Towards a definition of artificial intelligence systems appropriate for regulation
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Artificial intelligence (AI) has driven many information and communication technology (ICT) breakthroughs. Nonetheless, the scope of ICT systems has expanded far beyond AI since the Turing test proposal. Critically, recent AI regulation proposals adopt AI definitions affecting ICT techniques, approaches, and systems that are not AI. In some cases, even works from mathematics, statistics, and engineering would be affected. Worryingly, AI misdefinitions are observed from Western societies to the Global South. In this paper, we propose a framework to score how validated as appropriately-defined for regulation (VADER) an AI definition is. Our online, publicly-available VADER framework scores the coverage of premises that should underlie AI definitions for regulation, which aim to (i) reproduce principles observed in other successful technology regulations, and (ii) include all AI techniques and approaches while excluding non-AI works. Regarding the latter, our score is based on a dataset of representative AI, non-AI ICT, and non-ICT examples. We demonstrate our contribution by reviewing the AI regulation proposals of key players, namely the United States, United Kingdom, European Union, and Brazil. Importantly, none of the proposals assessed achieve the appropriateness score, ranging from a revision need to a concrete risk to ICT systems and works from other fields.
[ { "version": "v1", "created": "Wed, 7 Feb 2024 17:41:15 GMT" }, { "version": "v2", "created": "Wed, 14 Feb 2024 12:02:45 GMT" } ]
1,707,955,200,000
[ [ "Bezerra", "Leonardo C. T.", "" ], [ "Brownlee", "Alexander E. I.", "" ], [ "Alvarenga", "Luana Ferraz", "" ], [ "Moioli", "Renan Cipriano", "" ], [ "Batista", "Thais Vasconcelos", "" ] ]
2402.05829
Jacek Karwowski
Raymond Douglas, Jacek Karwowski, Chan Bae, Andis Draguns, Victoria Krakovna
Limitations of Agents Simulated by Predictive Models
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
There is increasing focus on adapting predictive models into agent-like systems, most notably AI assistants based on language models. We outline two structural reasons for why these models can fail when turned into agents. First, we discuss auto-suggestive delusions. Prior work has shown theoretically that models fail to imitate agents that generated the training data if the agents relied on hidden observations: the hidden observations act as confounding variables, and the models treat actions they generate as evidence for nonexistent observations. Second, we introduce and formally study a related, novel limitation: predictor-policy incoherence. When a model generates a sequence of actions, the model's implicit prediction of the policy that generated those actions can serve as a confounding variable. The result is that models choose actions as if they expect future actions to be suboptimal, causing them to be overly conservative. We show that both of those failures are fixed by including a feedback loop from the environment, that is, re-training the models on their own actions. We give simple demonstrations of both limitations using Decision Transformers and confirm that empirical results agree with our conceptual and formal analysis. Our treatment provides a unifying view of those failure modes, and informs the question of why fine-tuning offline learned policies with online learning makes them more effective.
[ { "version": "v1", "created": "Thu, 8 Feb 2024 17:08:08 GMT" } ]
1,707,436,800,000
[ [ "Douglas", "Raymond", "" ], [ "Karwowski", "Jacek", "" ], [ "Bae", "Chan", "" ], [ "Draguns", "Andis", "" ], [ "Krakovna", "Victoria", "" ] ]
2402.06500
Charles Assaad
Lei Zan, Charles K. Assaad, Emilie Devijver, Eric Gaussier
On the Fly Detection of Root Causes from Observed Data with Application to IT Systems
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper introduces a new structural causal model tailored for representing threshold-based IT systems and presents a new algorithm designed to rapidly detect root causes of anomalies in such systems. When root causes are not causally related, the method is proven to be correct, and an extension based on the intervention of an agent is proposed to relax this assumption. Our algorithm and its agent-based extension leverage causal discovery from offline data and engage in subgraph traversal when encountering new anomalies in online data. Our extensive experiments demonstrate the superior performance of our methods, even when applied to data generated from alternative structural causal models or to real IT monitoring data.
[ { "version": "v1", "created": "Fri, 9 Feb 2024 16:10:19 GMT" } ]
1,707,696,000,000
[ [ "Zan", "Lei", "" ], [ "Assaad", "Charles K.", "" ], [ "Devijver", "Emilie", "" ], [ "Gaussier", "Eric", "" ] ]
2402.06673
Yongchen Zhou
Yongchen Zhou, Richard Jiang
Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes. This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches, and delves into their applications in diverse domains, including healthcare and finance. The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed. The paper further investigates the potential convergence of XAI with cognitive sciences, the development of emotionally intelligent AI, and the quest for Human-Like Intelligence (HLI) in AI systems. As AI progresses towards Artificial General Intelligence (AGI), considerations of consciousness, ethics, and societal impact become paramount. The ongoing pursuit of deciphering the mysteries of the brain with AI and the quest for HLI represent transformative endeavors, bridging technical advancements with multidisciplinary explorations of human cognition.
[ { "version": "v1", "created": "Wed, 7 Feb 2024 14:09:11 GMT" } ]
1,707,782,400,000
[ [ "Zhou", "Yongchen", "" ], [ "Jiang", "Richard", "" ] ]
2402.06764
Stefan Dernbach
Stefan Dernbach, Khushbu Agarwal, Alejandro Zuniga, Michael Henry, Sutanay Choudhury
GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding
Published in AAAI Spring Symposium: AAAI-MAKE 2024
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Integrating large language models (LLMs) with knowledge graphs derived from domain-specific data represents an important advancement towards more powerful and factual reasoning. As these models grow more capable, it is crucial to enable them to perform multi-step inferences over real-world knowledge graphs while minimizing hallucination. While large language models excel at conversation and text generation, their ability to reason over domain-specialized graphs of interconnected entities remains limited. For example, can we query an LLM to identify the optimal contact in a professional network for a specific goal, based on relationships and attributes in a private database? The answer is no--such capabilities lie beyond current methods. However, this question underscores a critical technical gap that must be addressed. Many high-value applications in areas such as science, security, and e-commerce rely on proprietary knowledge graphs encoding unique structures, relationships, and logical constraints. We introduce a fine-tuning framework for developing Graph-aligned LAnguage Models (GLaM) that transforms a knowledge graph into an alternate text representation with labeled question-answer pairs. We demonstrate that grounding the models in specific graph-based knowledge expands the models' capacity for structure-based reasoning. Our methodology leverages the large language model's generative capabilities to create the dataset and offers an efficient alternative to retrieval-augmented generation style methods.
[ { "version": "v1", "created": "Fri, 9 Feb 2024 19:53:29 GMT" }, { "version": "v2", "created": "Fri, 16 Feb 2024 17:23:56 GMT" }, { "version": "v3", "created": "Wed, 17 Apr 2024 19:55:37 GMT" } ]
1,713,484,800,000
[ [ "Dernbach", "Stefan", "" ], [ "Agarwal", "Khushbu", "" ], [ "Zuniga", "Alejandro", "" ], [ "Henry", "Michael", "" ], [ "Choudhury", "Sutanay", "" ] ]
2402.06811
Andrew Smart
Andrew Smart, Ding Wang, Ellis Monk, Mark D\'iaz, Atoosa Kasirzadeh, Erin Van Liemt, Sonja Schmer-Galunder
Discipline and Label: A WEIRD Genealogy and Social Theory of Data Annotation
18 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Data annotation remains the sine qua non of machine learning and AI. Recent empirical work on data annotation has begun to highlight the importance of rater diversity for fairness and model performance, and new lines of research have begun to examine the working conditions of data annotation workers, the impacts and role of annotator subjectivity on labels, and the potential psychological harms from aspects of annotation work. This paper outlines a critical genealogy of data annotation, starting with its psychological and perceptual aspects. We draw on similarities with critiques of the rise of computerized lab-based psychological experiments in the 1970s, which question whether these experiments permit the generalization of results beyond the laboratory settings within which these results are typically obtained. Do data annotations permit the generalization of results beyond the settings, or locations, in which they were obtained? Psychology is overly reliant on participants from Western, Educated, Industrialized, Rich, and Democratic societies (WEIRD). Many of the people who work as data annotation platform workers, however, are not from WEIRD countries; most data annotation workers are based in Global South countries. Social categorizations and classifications from WEIRD countries are imposed on non-WEIRD annotators through instructions and tasks, and through them, on data, which is then used to train or evaluate AI models in WEIRD countries. We synthesize evidence from several recent lines of research and argue that data annotation is a form of automated social categorization that risks entrenching outdated and static social categories that are in reality dynamic and changing. We propose a framework for understanding the interplay of the global social conditions of data annotation with the subjective phenomenological experience of data annotation work.
[ { "version": "v1", "created": "Fri, 9 Feb 2024 22:21:55 GMT" } ]
1,707,782,400,000
[ [ "Smart", "Andrew", "" ], [ "Wang", "Ding", "" ], [ "Monk", "Ellis", "" ], [ "Díaz", "Mark", "" ], [ "Kasirzadeh", "Atoosa", "" ], [ "Van Liemt", "Erin", "" ], [ "Schmer-Galunder", "Sonja", "" ] ]
2402.06861
Yansong Ning
Yansong Ning, Hao Liu
UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction
Under review
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Urban knowledge graphs have recently emerged as a building block for distilling critical knowledge from multi-sourced urban data for diverse urban application scenarios. Despite their promising benefits, urban knowledge graph construction (UrbanKGC) still heavily relies on manual effort, hindering its potential advancement. This paper presents UrbanKGent, a unified large language model agent framework for urban knowledge graph construction. Specifically, we first construct the knowledgeable instruction set for UrbanKGC tasks (such as relational triplet extraction and knowledge graph completion) via heterogeneity-aware and geospatial-infused instruction generation. Moreover, we propose a tool-augmented iterative trajectory refinement module to enhance and refine the trajectories distilled from GPT-4. Through hybrid instruction fine-tuning with augmented trajectories on Llama-2-13B, we obtain the UrbanKGC agent, UrbanKGent-13B. We perform a comprehensive evaluation on two real-world datasets using both human and GPT-4 self-evaluation. The experimental results demonstrate that UrbanKGent-13B not only significantly outperforms 21 baselines in UrbanKGC tasks, but also surpasses the state-of-the-art LLM, GPT-4, by more than 10\% at approximately 20 times lower cost. We deploy UrbanKGent-13B to provide online services, which can construct an UrbanKG with thousands of times richer relationships using only one-fifth of the data compared with the existing benchmark. Our data, code, and open-source UrbanKGC agent are available at https://github.com/usail-hkust/UrbanKGent.
[ { "version": "v1", "created": "Sat, 10 Feb 2024 01:50:19 GMT" } ]
1,707,782,400,000
[ [ "Ning", "Yansong", "" ], [ "Liu", "Hao", "" ] ]
2402.06929
Jae Young Suh
Jae Young Suh, Minsoo Kwak, Soo Yong Kim, Hyoungseo Cho
Making a prototype of Seoul historical sites chatbot using Langchain
4 pages, 4 figures, draft
null
10.33140/JEEE.03.01.14
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we share a draft of the development of a conversational agent created to disseminate information about historical sites located in Seoul. The primary objective of the agent is to increase awareness among visitors who are not familiar with Seoul about the presence and precise locations of valuable cultural heritage sites. It aims to promote a basic understanding of Korea's rich and diverse cultural history. The agent is thoughtfully designed for accessibility in English and utilizes data generously provided by the Seoul Metropolitan Government. Despite the limited data volume, it consistently delivers reliable and accurate responses, seamlessly aligning with the available information. We have detailed the methodologies employed in creating this agent and provided a comprehensive overview of its underlying structure within the paper. Additionally, we delve into potential improvements to enhance this initial version of the system, with a primary emphasis on expanding the available data through our prompting. In conclusion, we provide an in-depth discussion of our expectations regarding the future impact of this agent in promoting and facilitating the sharing of historical sites.
[ { "version": "v1", "created": "Sat, 10 Feb 2024 11:38:09 GMT" } ]
1,709,683,200,000
[ [ "Suh", "Jae Young", "" ], [ "Kwak", "Minsoo", "" ], [ "Kim", "Soo Yong", "" ], [ "Cho", "Hyoungseo", "" ] ]
2402.07016
Yinghao Zhu
Yinghao Zhu, Changyu Ren, Shiyun Xie, Shukai Liu, Hangyuan Ji, Zixiang Wang, Tao Sun, Long He, Zhoujun Li, Xi Zhu, Chengwei Pan
REALM: RAG-Driven Enhancement of Multimodal Electronic Health Records Analysis via Large Language Models
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of multimodal Electronic Health Records (EHR) data has significantly improved clinical predictive capabilities. Existing models that leverage clinical notes and multivariate time-series EHR often lack the medical context relevant to clinical tasks, prompting the incorporation of external knowledge, particularly from knowledge graphs (KG). Previous approaches with KG knowledge have primarily focused on structured knowledge extraction, neglecting unstructured data modalities and semantic, high-dimensional medical knowledge. In response, we propose REALM, a Retrieval-Augmented Generation (RAG) driven framework that enhances multimodal EHR representations and addresses these limitations. Firstly, we apply a Large Language Model (LLM) to encode long-context clinical notes and a GRU model to encode time-series EHR data. Secondly, we prompt the LLM to extract task-relevant medical entities and match these entities against a professionally labeled external knowledge graph (PrimeKG) to retrieve the corresponding medical knowledge. By matching and aligning with clinical standards, our framework eliminates hallucinations and ensures consistency. Lastly, we propose an adaptive multimodal fusion network to integrate the extracted knowledge with multimodal EHR data. Our extensive experiments on MIMIC-III mortality and readmission tasks showcase the superior performance of our REALM framework over baselines, emphasizing the effectiveness of each module. The REALM framework contributes to refining the use of multimodal EHR data in healthcare and bridging the gap with the nuanced medical context essential for informed clinical predictions.
[ { "version": "v1", "created": "Sat, 10 Feb 2024 18:27:28 GMT" } ]
1,707,782,400,000
[ [ "Zhu", "Yinghao", "" ], [ "Ren", "Changyu", "" ], [ "Xie", "Shiyun", "" ], [ "Liu", "Shukai", "" ], [ "Ji", "Hangyuan", "" ], [ "Wang", "Zixiang", "" ], [ "Sun", "Tao", "" ], [ "He", "Long", "" ], [ "Li", "Zhoujun", "" ], [ "Zhu", "Xi", "" ], [ "Pan", "Chengwei", "" ] ]
2402.07049
Behzad Akbari
Behzad Akbari, Mingfeng Yuan, Hao Wang, Haibin Zhu, Jinjun Shan
A Factor Graph Model of Trust for a Collaborative Multi-Agent System
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the field of Multi-Agent Systems (MAS), known for their openness, dynamism, and cooperative nature, the ability to trust the resources and services of other agents is crucial. Trust, in this setting, is the reliance and confidence an agent has in the information, behaviors, intentions, truthfulness, and capabilities of others within the system. Our paper introduces a new graphical approach that utilizes factor graphs to represent the interdependent behaviors and trustworthiness among agents. This includes modeling the behavior of robots as a trajectory of actions using a Gaussian process factor graph, which accounts for smoothness, obstacle avoidance, and trust-related factors. Our method for evaluating trust is decentralized and considers key interdependent sub-factors such as proximity safety, consistency, and cooperation. The overall system comprises a network of factor graphs that interact through trust-related factors and employs a Bayesian inference method to dynamically assess trust-based decisions with informed consent. The effectiveness of this method is validated via simulations and empirical tests with autonomous robots navigating unsignalized intersections.
[ { "version": "v1", "created": "Sat, 10 Feb 2024 21:44:28 GMT" } ]
1,707,782,400,000
[ [ "Akbari", "Behzad", "" ], [ "Yuan", "Mingfeng", "" ], [ "Wang", "Hao", "" ], [ "Zhu", "Haibin", "" ], [ "Shan", "Jinjun", "" ] ]
2402.07140
Yuyao Ge
Yuyao Ge, Shenghua Liu, Wenjie Feng, Lingrui Mei, Lizhe Chen, Xueqi Cheng
Graph Descriptive Order Improves Reasoning with Large Language Model
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, large language models have achieved state-of-the-art performance across multiple domains. However, progress in graph reasoning with LLMs remains limited. Our work delves into this gap by thoroughly investigating graph reasoning with LLMs. We reveal that the order in which a graph is described significantly affects LLMs' graph reasoning performance; by altering this order, we improve the performance of LLMs from 42.22\% to 70\%. Furthermore, we introduce the Scaled Graph Reasoning benchmark for assessing LLMs' performance across various graph sizes and evaluate the relationship between LLMs' graph reasoning abilities and graph size. We discover that the graph reasoning performance of LLMs does not monotonically decrease with increasing graph size. The experiments span several mainstream models, including GPT-3.5, LLaMA-2-7B, and LLaMA-2-13B, to offer a comprehensive evaluation.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 09:46:24 GMT" }, { "version": "v2", "created": "Tue, 20 Feb 2024 03:13:55 GMT" }, { "version": "v3", "created": "Sat, 24 Feb 2024 07:05:37 GMT" } ]
1,708,992,000,000
[ [ "Ge", "Yuyao", "" ], [ "Liu", "Shenghua", "" ], [ "Feng", "Wenjie", "" ], [ "Mei", "Lingrui", "" ], [ "Chen", "Lizhe", "" ], [ "Cheng", "Xueqi", "" ] ]
2402.07166
Arifa Khan
Arifa Khan, P. Saravanan and S.K Venkatesan
Social Evolution of Published Text and The Emergence of Artificial Intelligence Through Large Language Models and The Problem of Toxicity and Bias
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
We provide a bird's-eye view of the rapid developments in AI and deep learning that have led to the path-breaking emergence of AI in large language models. The aim of this study is to place these developments in a pragmatic, broader historical and social perspective, without exaggeration but also without the pessimism that created the AI winters from the 1970s to the 1990s. At the same time, as a warning to the overly optimistic, we point out the toxicity, bias, memorization, sycophancy, logical inconsistencies, and hallucinations that persist. We note that just as this emergence of AI seems to occur at a threshold in the number of neural connections or weights, it has also been observed that the human brain, and especially the cortex, is nothing special or extraordinary but simply a scaled-up version of the primate brain, and that even human intelligence appears to be an emergent phenomenon of scale.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 11:23:28 GMT" }, { "version": "v2", "created": "Fri, 17 May 2024 07:12:12 GMT" } ]
1,716,163,200,000
[ [ "Khan", "Arifa", "" ], [ "Saravanan", "P.", "" ], [ "Venkatesan", "S. K", "" ] ]
2402.07167
Zehao Dong
Zehao Dong, Yixin Chen, Hiram Gay, Yao Hao, Geoffrey D. Hugo, Pamela Samson, Tianyu Zhao
Large-Language-Model Empowered Dose Volume Histogram Prediction for Intensity Modulated Radiotherapy
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Treatment planning is currently a patient specific, time-consuming, and resource demanding task in radiotherapy. Dose-volume histogram (DVH) prediction plays a critical role in automating this process. The geometric relationship between DVHs in radiotherapy plans and organs-at-risk (OAR) and planning target volume (PTV) has been well established. This study explores the potential of deep learning models for predicting DVHs using images and subsequent human intervention facilitated by a large-language model (LLM) to enhance the planning quality. We propose a pipeline to convert unstructured images to a structured graph consisting of image-patch nodes and dose nodes. A novel Dose Graph Neural Network (DoseGNN) model is developed for predicting DVHs from the structured graph. The proposed DoseGNN is enhanced with the LLM to encode massive knowledge from prescriptions and interactive instructions from clinicians. In this study, we introduced an online human-AI collaboration (OHAC) system as a practical implementation of the concept proposed for the automation of intensity-modulated radiotherapy (IMRT) planning. In comparison to the widely-employed DL models used in radiotherapy, DoseGNN achieved mean square errors that were 80$\%$, 76$\%$ and 41.0$\%$ of those predicted by Swin U-Net Transformer, 3D U-Net CNN and vanilla MLP, respectively. Moreover, the LLM-empowered DoseGNN model facilitates seamless adjustment to treatment plans through interaction with clinicians using natural language.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 11:24:09 GMT" } ]
1,707,782,400,000
[ [ "Dong", "Zehao", "" ], [ "Chen", "Yixin", "" ], [ "Gay", "Hiram", "" ], [ "Hao", "Yao", "" ], [ "Hugo", "Geoffrey D.", "" ], [ "Samson", "Pamela", "" ], [ "Zhao", "Tianyu", "" ] ]
2402.07183
Ryota Iijima
Ryota Iijima, Sayaka Shiota, Hitoshi Kiya
A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense
9 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In previous studies, the use of models encrypted with a secret key was demonstrated to be robust against white-box attacks, but not against black-box ones. In this paper, we propose a novel method using the vision transformer (ViT) that is a random ensemble of encrypted models for enhancing robustness against both white-box and black-box attacks. In addition, a benchmark attack method, called AutoAttack, is applied to models to test adversarial robustness objectively. In experiments, the method was demonstrated to be robust against not only white-box attacks but also black-box ones in an image classification task on the CIFAR-10 and ImageNet datasets. The method was also compared with the state-of-the-art in a standardized benchmark for adversarial robustness, RobustBench, and it was verified to outperform conventional defenses in terms of clean accuracy and robust accuracy.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 12:35:28 GMT" } ]
1,707,782,400,000
[ [ "Iijima", "Ryota", "" ], [ "Shiota", "Sayaka", "" ], [ "Kiya", "Hitoshi", "" ] ]
2402.07197
Mengmei Zhang
Mengmei Zhang, Mingwei Sun, Peng Wang, Shen Fan, Yanhu Mo, Xiaoxiao Xu, Hong Liu, Cheng Yang, Chuan Shi
GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) like ChatGPT, which exhibit powerful zero-shot and instruction-following capabilities, have catalyzed a revolutionary transformation across diverse fields, especially for open-ended tasks. This idea is less explored in the graph domain: despite the availability of numerous powerful graph models (GMs), they are restricted to tasks in a pre-defined form. Although several methods applying LLMs to graphs have been proposed, they fail to simultaneously handle pre-defined and open-ended tasks, using the LLM either as a node feature enhancer or as a standalone predictor. To break this dilemma, we propose to bridge the pretrained GM and the LLM by a Translator, named GraphTranslator, aiming to leverage the GM to handle the pre-defined tasks effectively and utilize the extended interface of LLMs to offer various open-ended tasks for the GM. To train such a Translator, we propose a Producer capable of constructing the graph-text alignment data along node information, neighbor information and model information. By translating node representations into tokens, GraphTranslator empowers an LLM to make predictions based on language instructions, providing a unified perspective for both pre-defined and open-ended tasks. Extensive results demonstrate the effectiveness of our proposed GraphTranslator on zero-shot node classification. The graph question answering experiments reveal GraphTranslator's potential across a broad spectrum of open-ended tasks through language instructions. Our code is available at: https://github.com/alibaba/GraphTranslator.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 13:24:13 GMT" }, { "version": "v2", "created": "Tue, 13 Feb 2024 09:25:37 GMT" }, { "version": "v3", "created": "Tue, 20 Feb 2024 08:34:15 GMT" }, { "version": "v4", "created": "Wed, 28 Feb 2024 02:42:35 GMT" } ]
1,709,164,800,000
[ [ "Zhang", "Mengmei", "" ], [ "Sun", "Mingwei", "" ], [ "Wang", "Peng", "" ], [ "Fan", "Shen", "" ], [ "Mo", "Yanhu", "" ], [ "Xu", "Xiaoxiao", "" ], [ "Liu", "Hong", "" ], [ "Yang", "Cheng", "" ], [ "Shi", "Chuan", "" ] ]
2402.07199
Bingqing Liu
Bingqing Liu, Xikun Huang
Link-aware link prediction over temporal graph by pattern recognition
12 pages, one column
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A temporal graph can be considered as a stream of links, each of which represents an interaction between two nodes at a certain time. On temporal graphs, link prediction is a common task, which aims to answer whether the query link is true or not. To perform this task, previous methods usually focus on learning representations of the two nodes in the query link. We point out that the representations learned by these models may encode too much information, with side effects for link prediction, because they do not utilize the information of the query link, i.e., they are link-unaware. Based on this observation, we propose a link-aware model: historical links and the query link are input together into the subsequent model layers to distinguish whether this input implies a reasonable pattern that ends with the query link. During this process, we focus on modeling link evolution patterns rather than node representations. Experiments on six datasets show that our model achieves strong performance compared with state-of-the-art baselines, and the results of link prediction are interpretable. The code and datasets are available on the project website: https://github.com/lbq8942/TGACN.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 13:26:06 GMT" } ]
1,707,782,400,000
[ [ "Liu", "Bingqing", "" ], [ "Huang", "Xikun", "" ] ]
2402.07221
Francis Rhys Ward
Francis Rhys Ward and Matt MacDermott and Francesco Belardinelli and Francesca Toni and Tom Everitt
The Reasons that Agents Act: Intention and Instrumental Goals
AAMAS24
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Intention is an important and challenging concept in AI. It is important because it underlies many other concepts we care about, such as agency, manipulation, legal responsibility, and blame. However, ascribing intent to AI systems is contentious, and there is no universally accepted theory of intention applicable to AI agents. We operationalise the intention with which an agent acts, relating it to the reasons for which it chooses its decision. We introduce a formal definition of intention in structural causal influence models, grounded in the philosophy literature on intent and applicable to real-world machine learning systems. Through a number of examples and results, we show that our definition captures the intuitive notion of intent and satisfies desiderata set out by past work. In addition, we show how our definition relates to past concepts, including actual causality, and the notion of instrumental goals, which is a core idea in the literature on safe AI agents. Finally, we demonstrate how our definition can be used to infer the intentions of reinforcement learning agents and language models from their behaviour.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 14:39:40 GMT" }, { "version": "v2", "created": "Thu, 15 Feb 2024 11:45:37 GMT" } ]
1,708,041,600,000
[ [ "Ward", "Francis Rhys", "" ], [ "MacDermott", "Matt", "" ], [ "Belardinelli", "Francesco", "" ], [ "Toni", "Francesca", "" ], [ "Everitt", "Tom", "" ] ]
2402.07226
Sungyoon Kim
Sungyoon Kim, Yunseon Choi, Daiki E. Matsunaga, and Kee-Eung Kim
Stitching Sub-Trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Offline Goal-Conditioned Reinforcement Learning (Offline GCRL) is an important problem in RL that focuses on acquiring diverse goal-oriented skills solely from pre-collected behavior datasets. In this setting, the reward feedback is typically absent except when the goal is achieved, which makes it difficult to learn policies especially from a finite dataset of suboptimal behaviors. In addition, realistic scenarios involve long-horizon planning, which necessitates the extraction of useful skills within sub-trajectories. Recently, the conditional diffusion model has been shown to be a promising approach to generate high-quality long-horizon plans for RL. However, their practicality for the goal-conditioned setting is still limited due to a number of technical assumptions made by the methods. In this paper, we propose SSD (Sub-trajectory Stitching with Diffusion), a model-based offline GCRL method that leverages the conditional diffusion model to address these limitations. In summary, we use the diffusion model that generates future plans conditioned on the target goal and value, with the target value estimated from the goal-relabeled offline dataset. We report state-of-the-art performance in the standard benchmark set of GCRL tasks, and demonstrate the capability to successfully stitch the segments of suboptimal trajectories in the offline data to generate high-quality plans.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 15:23:13 GMT" } ]
1,707,782,400,000
[ [ "Kim", "Sungyoon", "" ], [ "Choi", "Yunseon", "" ], [ "Matsunaga", "Daiki E.", "" ], [ "Kim", "Kee-Eung", "" ] ]
2402.07234
Xin Tong
Xin Tong, Bo Jin, Zhi Lin, Binjun Wang, Ting Yu and Qiang Cheng
CPSDBench: A Large Language Model Evaluation Benchmark and Baseline for Chinese Public Security Domain
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have demonstrated significant potential and effectiveness across multiple application domains. To assess the performance of mainstream LLMs in public security tasks, this study aims to construct a specialized evaluation benchmark tailored to the Chinese public security domain--CPSDBench. CPSDBench integrates datasets related to public security collected from real-world scenarios, supporting a comprehensive assessment of LLMs across four key dimensions: text classification, information extraction, question answering, and text generation. Furthermore, this study introduces a set of innovative evaluation metrics designed to more precisely quantify the efficacy of LLMs in executing tasks related to public security. Through the in-depth analysis and evaluation conducted in this research, we not only enhance our understanding of the performance strengths and limitations of existing models in addressing public security issues but also provide references for the future development of more accurate and customized LLMs targeted at applications in this field.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 15:56:03 GMT" }, { "version": "v2", "created": "Sun, 3 Mar 2024 01:26:01 GMT" }, { "version": "v3", "created": "Thu, 21 Mar 2024 12:39:09 GMT" } ]
1,711,065,600,000
[ [ "Tong", "Xin", "" ], [ "Jin", "Bo", "" ], [ "Lin", "Zhi", "" ], [ "Wang", "Binjun", "" ], [ "Yu", "Ting", "" ], [ "Cheng", "Qiang", "" ] ]
2402.07327
Minoo Shayaninasab
Minoo Shayaninasab, Bagher Babaali
Multi-Modal Emotion Recognition by Text, Speech and Video Using Pretrained Transformers
null
null
null
null
cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Due to the complex nature of human emotions and the diversity of emotion representation methods in humans, emotion recognition is a challenging field. In this research, three input modalities, namely text, audio (speech), and video, are employed to generate multimodal feature vectors. For generating features for each of these modalities, pre-trained Transformer models with fine-tuning are utilized. In each modality, a Transformer model is used with transfer learning to extract features and emotional structure. These features are then fused together, and emotion recognition is performed using a classifier. To select an appropriate fusion method and classifier, various feature-level and decision-level fusion techniques have been experimented with, and ultimately, the best model, which combines feature-level fusion by concatenating feature vectors and classification using a Support Vector Machine on the IEMOCAP multimodal dataset, achieves an accuracy of 75.42%. Keywords: Multimodal Emotion Recognition, IEMOCAP, Self-Supervised Learning, Transfer Learning, Transformer.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 23:27:24 GMT" } ]
1,707,782,400,000
[ [ "Shayaninasab", "Minoo", "" ], [ "Babaali", "Bagher", "" ] ]
2402.07398
Dongsheng Zhu
Dongsheng Zhu, Xunzhu Tang, Weidong Han, Jinghui Lu, Yukun Zhao, Guoliang Xing, Junfeng Wang, Dawei Yin
VisLingInstruct: Elevating Zero-Shot Learning in Multi-Modal Language Models with Autonomous Instruction Optimization
Accepted to NAACL2024 main conference
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents VisLingInstruct, a novel approach to advancing Multi-Modal Language Models (MMLMs) in zero-shot learning. Current MMLMs show impressive zero-shot abilities in multi-modal tasks, but their performance depends heavily on the quality of instructions. VisLingInstruct tackles this by autonomously evaluating and optimizing instructional texts through In-Context Learning, improving the synergy between visual perception and linguistic expression in MMLMs. Alongside this instructional advancement, we have also optimized the visual feature extraction modules in MMLMs, further augmenting their responsiveness to textual cues. Our comprehensive experiments on MMLMs, based on FlanT5 and Vicuna, show that VisLingInstruct significantly improves zero-shot performance in visual multi-modal tasks. Notably, it achieves a 13.1% and 9% increase in accuracy over the prior state-of-the-art on the TextVQA and HatefulMemes datasets.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 04:13:16 GMT" }, { "version": "v2", "created": "Thu, 14 Mar 2024 14:30:14 GMT" } ]
1,710,460,800,000
[ [ "Zhu", "Dongsheng", "" ], [ "Tang", "Xunzhu", "" ], [ "Han", "Weidong", "" ], [ "Lu", "Jinghui", "" ], [ "Zhao", "Yukun", "" ], [ "Xing", "Guoliang", "" ], [ "Wang", "Junfeng", "" ], [ "Yin", "Dawei", "" ] ]
2402.07418
Sangwoo Shin
Sangwoo Shin, Minjong Yoo, Jeongwoo Lee, Honguk Woo
SemTra: A Semantic Skill Translator for Cross-Domain Zero-Shot Policy Adaptation
AAAI 2024 Camera-ready version
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This work explores the zero-shot adaptation capability of semantic skills, semantically interpretable experts' behavior patterns, in cross-domain settings, where a user input in interleaved multi-modal snippets can prompt a new long-horizon task for different domains. In these cross-domain settings, we present a semantic skill translator framework SemTra which utilizes a set of multi-modal models to extract skills from the snippets, and leverages the reasoning capabilities of a pretrained language model to adapt these extracted skills to the target domain. The framework employs a two-level hierarchy for adaptation: task adaptation and skill adaptation. During task adaptation, seq-to-seq translation by the language model transforms the extracted skills into a semantic skill sequence, which is tailored to fit the cross-domain contexts. Skill adaptation focuses on optimizing each semantic skill for the target domain context, through parametric instantiations that are facilitated by language prompting and contrastive learning-based context inferences. This hierarchical adaptation empowers the framework to not only infer a complex task specification in one-shot from the interleaved multi-modal snippets, but also adapt it to new domains with zero-shot learning abilities. We evaluate our framework with Meta-World, Franka Kitchen, RLBench, and CARLA environments. The results clarify the framework's superiority in performing long-horizon tasks and adapting to different domains, showing its broad applicability in practical use cases, such as cognitive robots interpreting abstract instructions and autonomous vehicles operating under varied configurations.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 05:46:10 GMT" } ]
1,707,782,400,000
[ [ "Shin", "Sangwoo", "" ], [ "Yoo", "Minjong", "" ], [ "Lee", "Jeongwoo", "" ], [ "Woo", "Honguk", "" ] ]
2402.07420
Hideaki Takahashi
Hideaki Takahashi and Alex Fukunaga
On the Transit Obfuscation Problem
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Concealing an intermediate point on a route, or one visible from a route, is an important goal in some transportation and surveillance scenarios. This paper studies the Transit Obfuscation Problem, the problem of traveling from some start location to an end location while "covering" a specific transit point that needs to be concealed from adversaries. We propose the notion of transit anonymity, a quantitative guarantee of the anonymity of a specific transit point, even with a powerful adversary with full knowledge of the path planning algorithm. We propose and evaluate planning/search algorithms that satisfy this anonymity criterion.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 05:48:52 GMT" }, { "version": "v2", "created": "Tue, 13 Feb 2024 07:02:05 GMT" } ]
1,707,868,800,000
[ [ "Takahashi", "Hideaki", "" ], [ "Fukunaga", "Alex", "" ] ]
2402.07422
Chufeng Jiang
Tianrui Liu, Changxin Xu, Yuxin Qiao, Chufeng Jiang, Weisheng Chen
News Recommendation with Attention Mechanism
7 pages, Journal of Industrial Engineering and Applied Science
Journal of Industrial Engineering and Applied Science 2024
10.5281/zenodo.10635481
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the area of news recommendation, a key component of online information sharing. Initially, we provide a clear introduction to news recommendation, defining the core problem and summarizing current methods and notable recent algorithms. We then present our work on implementing the NRAM (News Recommendation with Attention Mechanism), an attention-based approach for news recommendation, and assess its effectiveness. Our evaluation shows that NRAM has the potential to significantly improve how news content is personalized for users on digital news platforms.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 05:56:12 GMT" }, { "version": "v2", "created": "Tue, 20 Feb 2024 02:46:17 GMT" } ]
1,708,473,600,000
[ [ "Liu", "Tianrui", "" ], [ "Xu", "Changxin", "" ], [ "Qiao", "Yuxin", "" ], [ "Jiang", "Chufeng", "" ], [ "Chen", "Weisheng", "" ] ]
2402.07429
Chufeng Jiang
Tianrui Liu, Changxin Xu, Yuxin Qiao, Chufeng Jiang, Jiqiang Yu
Particle Filter SLAM for Vehicle Localization
6 pages, Journal of Industrial Engineering and Applied Science
Journal of Industrial Engineering and Applied Science 2024
10.5281/zenodo.10635489
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneous Localization and Mapping (SLAM) presents a formidable challenge in robotics, involving the dynamic construction of a map while concurrently determining the precise location of the robotic agent within an unfamiliar environment. This intricate task is further compounded by the inherent "chicken-and-egg" dilemma, where accurate mapping relies on a dependable estimation of the robot's location, and vice versa. Moreover, the computational intensity of SLAM adds an additional layer of complexity, making it a crucial yet demanding topic in the field. In our research, we address the challenges of SLAM by adopting the Particle Filter SLAM method. Our approach leverages encoded data and fiber optic gyro (FOG) information to enable precise estimation of vehicle motion, while lidar technology contributes to environmental perception by providing detailed insights into surrounding obstacles. The integration of these data streams culminates in the establishment of a Particle Filter SLAM framework, representing a key endeavor in this paper to effectively navigate and overcome the complexities associated with simultaneous localization and mapping in robotic systems.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 06:06:09 GMT" }, { "version": "v2", "created": "Tue, 20 Feb 2024 02:42:33 GMT" } ]
1,708,473,600,000
[ [ "Liu", "Tianrui", "" ], [ "Xu", "Changxin", "" ], [ "Qiao", "Yuxin", "" ], [ "Jiang", "Chufeng", "" ], [ "Yu", "Jiqiang", "" ] ]
2402.07442
Ray Ito
Ray Ito, Junichiro Takahashi
Game Agent Driven by Free-Form Text Command: Using LLM-based Code Generation and Behavior Branch
This paper is posted at JSAI 2024 Conference
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Several attempts have been made to implement text command control for game agents. However, current technologies are limited to processing predefined format commands. This paper proposes a pioneering text command control system for a game agent that can understand natural language commands expressed in free form. The proposed system uses a large language model (LLM) for code generation to interpret and transform natural language commands into a behavior branch, a proposed knowledge expression based on behavior trees, which facilitates execution by the game agent. This study conducted empirical validation within a game environment that simulates a Pok\'emon game and involved multiple participants. The results confirmed the system's ability to understand and carry out natural language commands, representing a noteworthy advancement in the realm of real-time language-interactive game agents. Notice for the use of this material. The copyright of this material is retained by the Japanese Society for Artificial Intelligence (JSAI). This material is published here with the agreement of JSAI. Please comply with the Copyright Law of Japan if any users wish to reproduce, make derivative work, distribute or make available to the public any part or whole thereof. All Rights Reserved, Copyright (C) The Japanese Society for Artificial Intelligence.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 06:49:48 GMT" } ]
1,707,782,400,000
[ [ "Ito", "Ray", "" ], [ "Takahashi", "Junichiro", "" ] ]
2402.07456
Chengcheng Han
Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu and Lingpeng Kong
OS-Copilot: Towards Generalist Computer Agents with Self-Improvement
Project page: https://os-copilot.github.io
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents. However, most of these agents are designed to interact with a narrow domain, such as a specific software or website. This narrow focus constrains their applicability for general computer tasks. To this end, we introduce OS-Copilot, a framework to build generalist agents capable of interfacing with comprehensive elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. We use OS-Copilot to create FRIDAY, a self-improving embodied agent for automating general computer tasks. On GAIA, a general AI assistants benchmark, FRIDAY outperforms previous methods by 35%, showcasing strong generalization to unseen applications via accumulated skills from previous tasks. We also present numerical and quantitative evidence that FRIDAY learns to control and self-improve on Excel and Powerpoint with minimal supervision. Our OS-Copilot framework and empirical findings provide infrastructure and insights for future research toward more capable and general-purpose computer agents.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 07:29:22 GMT" }, { "version": "v2", "created": "Thu, 15 Feb 2024 09:30:48 GMT" } ]
1,708,041,600,000
[ [ "Wu", "Zhiyong", "" ], [ "Han", "Chengcheng", "" ], [ "Ding", "Zichen", "" ], [ "Weng", "Zhenmin", "" ], [ "Liu", "Zhoumianze", "" ], [ "Yao", "Shunyu", "" ], [ "Yu", "Tao", "" ], [ "Kong", "Lingpeng", "" ] ]
2402.07477
Ali Rostami
Ali Rostami, Ramesh Jain, Amir M. Rahmani
Food Recommendation as Language Processing (F-RLP): A Personalized and Contextual Paradigm
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art rule-based and classification-based food recommendation systems face significant challenges in becoming practical and useful. This difficulty arises primarily because most machine learning models struggle with problems characterized by an almost infinite number of classes and a limited number of samples within an unbalanced dataset. Conversely, the emergence of Large Language Models (LLMs) as recommendation engines offers a promising avenue. However, a general-purpose Recommendation as Language Processing (RLP) approach lacks the critical components necessary for effective food recommendations. To address this gap, we introduce Food Recommendation as Language Processing (F-RLP), a novel framework that offers a food-specific, tailored infrastructure. F-RLP leverages the capabilities of LLMs to maximize their potential, thereby paving the way for more accurate, personalized food recommendations.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 08:32:29 GMT" }, { "version": "v2", "created": "Wed, 14 Feb 2024 12:11:44 GMT" } ]
1,707,955,200,000
[ [ "Rostami", "Ali", "" ], [ "Jain", "Ramesh", "" ], [ "Rahmani", "Amir M.", "" ] ]
2402.07507
Sarah Almeida Carneiro
Sarah Almeida Carneiro (LIGM), Giovanni Chierchia (LIGM), Aurelie Pirayre (IFPEN), Laurent Najman (LIGM)
Clustering Dynamics for Improved Speed Prediction Deriving from Topographical GPS Registrations
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A persistent challenge in the field of Intelligent Transportation Systems is to extract accurate traffic insights from geographic regions with scarce or no data coverage. To this end, we propose solutions for speed prediction using sparse GPS data points and their associated topographical and road design features. Our goal is to investigate whether we can use similarities in the terrain and infrastructure to train a machine learning model that can predict speed in regions where we lack transportation data. For this we create a Temporally Orientated Speed Dictionary Centered on Topographically Clustered Roads, which helps us to provide speed correlations to selected feature configurations. Our results show qualitative and quantitative improvement over new and standard regression methods. The presented framework provides a fresh perspective on devising strategies for missing data traffic analysis.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 09:28:16 GMT" } ]
1,707,782,400,000
[ [ "Carneiro", "Sarah Almeida", "", "LIGM" ], [ "Chierchia", "Giovanni", "", "LIGM" ], [ "Pirayre", "Aurelie", "", "IFPEN" ], [ "Najman", "Laurent", "", "LIGM" ] ]
2402.07772
My H Dinh
My H Dinh and James Kotary and Ferdinando Fioretto
End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many decision processes in artificial intelligence and operations research are modeled by parametric optimization problems whose defining parameters are unknown and must be inferred from observable data. The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality by training the parametric inference model end-to-end with the subsequent constrained optimization. This requires backpropagation through the optimization problem using approximation techniques specific to the problem's form, especially for nondifferentiable linear and mixed-integer programs. This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives, known for their ability to ensure properties of fairness and robustness in decision models. Through a collection of training techniques and proposed application settings, it shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 16:33:35 GMT" } ]
1,707,782,400,000
[ [ "Dinh", "My H", "" ], [ "Kotary", "James", "" ], [ "Fioretto", "Ferdinando", "" ] ]
2402.07799
Alberto Pozanco
Alberto Pozanco, Ramon Fraga Pereira, Daniel Borrajo
Generalising Planning Environment Redesign
Paper accepted at AAAI'24
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Environment Design, one interested party seeks to affect another agent's decisions by applying changes to the environment. Most research on planning environment (re)design assumes the interested party's objective is to facilitate the recognition of goals and plans, and searches over the space of environment modifications to find the minimal set of changes that simplify those tasks and optimise a particular metric. This search space is usually intractable, so existing approaches devise metric-dependent pruning techniques for performing search more efficiently. This results in approaches that are not able to generalise across different objectives and/or metrics. In this paper, we argue that the interested party could have objectives and metrics that are not necessarily related to recognising agents' goals or plans. Thus, to generalise the task of Planning Environment Redesign, we develop a general environment redesign approach that is metric-agnostic and leverages recent research on top-quality planning to efficiently redesign planning environments according to any interested party's objective and metric. Experiments over a set of environment redesign benchmarks show that our general approach outperforms existing approaches when using well-known metrics, such as facilitating the recognition of goals, and demonstrate its effectiveness when solving environment redesign tasks that optimise a novel set of different metrics.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 17:03:58 GMT" }, { "version": "v2", "created": "Wed, 14 Feb 2024 14:01:55 GMT" } ]
1,707,955,200,000
[ [ "Pozanco", "Alberto", "" ], [ "Pereira", "Ramon Fraga", "" ], [ "Borrajo", "Daniel", "" ] ]
2402.07822
Sarah L. Thomson
Sarah L. Thomson, L\'eni K. Le Goff, Emma Hart, Edgar Buchanan
Understanding fitness landscapes in morpho-evolution via local optima networks
Submitted to GECCO-2024
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Morpho-evolution (ME) refers to the simultaneous optimisation of a robot's design and controller to maximise performance given a task and environment. Many genetic encodings have been proposed which are capable of representing design and control. Previous research has provided empirical comparisons between encodings in terms of their performance with respect to an objective function and the diversity of designs that are evaluated; however, there has been no attempt to explain the observed findings. We address this by applying Local Optima Network (LON) analysis to investigate the structure of the fitness landscapes induced by three different encodings when evolving a robot for a locomotion task, shedding new light on the ease with which different fitness landscapes can be traversed by a search process. This is the first time LON analysis has been applied in the field of ME despite its popularity in combinatorial optimisation domains; the findings will facilitate the design of new algorithms or operators that are customised to ME landscapes in the future.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 17:26:35 GMT" } ]
1,707,782,400,000
[ [ "Thomson", "Sarah L.", "" ], [ "Goff", "Léni K. Le", "" ], [ "Hart", "Emma", "" ], [ "Buchanan", "Edgar", "" ] ]
2402.07877
Yangxinyu Xie
Yangxinyu Xie, Tanwi Mallick, Joshua David Bergerson, John K. Hutchison, Duane R. Verner, Jordan Branham, M. Ross Alexander, Robert B. Ross, Yan Feng, Leslie-Anne Levy, Weijie Su
WildfireGPT: Tailored Large Language Model for Wildfire Analysis
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The recent advancement of large language models (LLMs) represents a transformational capability at the frontier of artificial intelligence (AI) and machine learning (ML). However, LLMs are generalized models, trained on extensive text corpora, and often struggle to provide context-specific information, particularly in areas requiring specialized knowledge such as wildfire details within the broader context of climate change. For decision-makers and policymakers focused on wildfire resilience and adaptation, it is crucial to obtain responses that are not only precise but also domain-specific, rather than generic. To that end, we developed WildfireGPT, a prototype LLM agent designed to transform user queries into actionable insights on wildfire risks. We enrich WildfireGPT by providing additional context such as climate projections and scientific literature to ensure its information is current, relevant, and scientifically accurate. This enables WildfireGPT to be an effective tool for delivering detailed, user-specific insights on wildfire risks to support a diverse set of end users, including researchers, engineers, urban planners, emergency managers, and infrastructure operators.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 18:41:55 GMT" } ]
1,707,782,400,000
[ [ "Xie", "Yangxinyu", "" ], [ "Mallick", "Tanwi", "" ], [ "Bergerson", "Joshua David", "" ], [ "Hutchison", "John K.", "" ], [ "Verner", "Duane R.", "" ], [ "Branham", "Jordan", "" ], [ "Alexander", "M. Ross", "" ], [ "Ross", "Robert B.", "" ], [ "Feng", "Yan", "" ], [ "Levy", "Leslie-Anne", "" ], [ "Su", "Weijie", "" ] ]
2402.08115
Karthik Valmeekam
Kaya Stechly, Karthik Valmeekam, Subbarao Kambhampati
On the Self-Verification Limitations of Large Language Models on Reasoning and Planning Tasks
arXiv admin note: text overlap with arXiv:2310.12397
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been considerable divergence of opinion on the reasoning abilities of Large Language Models (LLMs). While the initial optimism that reasoning might emerge automatically with scale has been tempered thanks to a slew of counterexamples--ranging from multiplication to simple planning--there persists a widespread belief that LLMs can self-critique and improve their own solutions in an iterative fashion. This belief seemingly rests on the assumption that verification of correctness should be easier than generation--a rather classical argument from computational complexity--which should be irrelevant to LLMs to the extent that what they are doing is approximate retrieval. In this paper, we set out to systematically investigate the effectiveness of iterative prompting in the context of reasoning and planning. We present a principled empirical study of the performance of GPT-4 in three domains: Game of 24, Graph Coloring, and STRIPS planning. We experiment both with the model critiquing its own answers and with an external correct reasoner verifying proposed solutions. In each case, we analyze whether the content of criticisms actually affects bottom line performance, and whether we can ablate elements of the augmented system without losing performance. We observe significant performance collapse with self-critique, significant performance gains with sound external verification, but that the content of critique doesn't matter to the performance of the system. In fact, merely re-prompting with a sound verifier maintains most of the benefits of more involved setups.
[ { "version": "v1", "created": "Mon, 12 Feb 2024 23:11:01 GMT" } ]
1,707,868,800,000
[ [ "Stechly", "Kaya", "" ], [ "Valmeekam", "Karthik", "" ], [ "Kambhampati", "Subbarao", "" ] ]
2402.08145
Rushang Karia
Rushang Karia, Pulkit Verma, Alberto Speranzon, Siddharth Srivastava
Epistemic Exploration for Generalizable Planning and Learning in Non-Stationary Settings
To appear at ICAPS-24
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a new approach for continual planning and model learning in non-stationary stochastic environments expressed using relational representations. Such capabilities are essential for the deployment of sequential decision-making systems in the uncertain, constantly evolving real world. Working in such practical settings with unknown (and non-stationary) transition systems and changing tasks, the proposed framework models gaps in the agent's current state of knowledge and uses them to conduct focused, investigative explorations. Data collected using these explorations is used for learning generalizable probabilistic models for solving the current task despite continual changes in the environment dynamics. Empirical evaluations on several benchmark domains show that this approach significantly outperforms planning and RL baselines in terms of sample complexity in non-stationary settings. Theoretical results show that the system reverts to exhibit desirable convergence properties when stationarity holds.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 00:50:06 GMT" } ]
1,707,868,800,000
[ [ "Karia", "Rushang", "" ], [ "Verma", "Pulkit", "" ], [ "Speranzon", "Alberto", "" ], [ "Srivastava", "Siddharth", "" ] ]
2402.08178
Jae-Woo Choi
Jae-Woo Choi and Youngwoo Yoon and Hyobin Ong and Jaehong Kim and Minsu Jang
LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents
ICLR 2024. Code: https://github.com/lbaa2022/LLMTaskPlanning
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have recently received considerable attention as alternative solutions for task planning. However, comparing the performance of language-oriented task planners becomes difficult, and there exists a dearth of detailed exploration regarding the effects of various factors such as pre-trained model selection and prompt construction. To address this, we propose a benchmark system for automatically quantifying performance of task planning for home-service embodied agents. Task planners are tested on two pairs of datasets and simulators: 1) ALFRED and AI2-THOR, 2) an extension of Watch-And-Help and VirtualHome. Using the proposed benchmark system, we perform extensive experiments with LLMs and prompts, and explore several enhancements of the baseline planner. We expect that the proposed benchmark tool would accelerate the development of language-oriented task planners.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 02:28:57 GMT" } ]
1,707,868,800,000
[ [ "Choi", "Jae-Woo", "" ], [ "Yoon", "Youngwoo", "" ], [ "Ong", "Hyobin", "" ], [ "Kim", "Jaehong", "" ], [ "Jang", "Minsu", "" ] ]
2402.08208
Mandar Manohar Pitale
Mandar Pitale, Alireza Abbaspour, Devesh Upadhyay
Inherent Diverse Redundant Safety Mechanisms for AI-based Software Elements in Automotive Applications
This article is accepted for the SAE WCX 2024 conference proceedings
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper explores the role and challenges of Artificial Intelligence (AI) algorithms, specifically AI-based software elements, in autonomous driving systems. These AI systems are fundamental in executing real-time critical functions in complex and high-dimensional environments. They handle vital tasks like multi-modal perception, cognition, and decision-making tasks such as motion planning, lane keeping, and emergency braking. A primary concern relates to the ability (and necessity) of AI models to generalize beyond their initial training data. This generalization issue becomes evident in real-time scenarios, where models frequently encounter inputs not represented in their training or validation data. In such cases, AI systems must still function effectively despite facing distributional or domain shifts. This paper investigates the risk associated with overconfident AI models in safety-critical applications like autonomous driving. To mitigate these risks, methods for training AI models that help maintain performance without overconfidence are proposed. This involves implementing certainty reporting architectures and ensuring diverse training data. While various distribution-based methods exist to provide safety mechanisms for AI models, there is a noted lack of systematic assessment of these methods, especially in the context of safety-critical automotive applications. Many methods in the literature do not adapt well to the quick response times required in safety-critical edge applications. This paper reviews these methods, discusses their suitability for safety-critical applications, and highlights their strengths and limitations. The paper also proposes potential improvements to enhance the safety and reliability of AI algorithms in autonomous vehicles in the context of rapid and accurate decision-making processes.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 04:15:26 GMT" }, { "version": "v2", "created": "Thu, 29 Feb 2024 18:18:04 GMT" } ]
1,709,251,200,000
[ [ "Pitale", "Mandar", "" ], [ "Abbaspour", "Alireza", "" ], [ "Upadhyay", "Devesh", "" ] ]
2402.08211
Aaron Traylor
Aaron Traylor, Jack Merullo, Michael J. Frank, Ellie Pavlick
Transformer Mechanisms Mimic Frontostriatal Gating Operations When Trained on Human Working Memory Tasks
8 pages, 4 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Models based on the Transformer neural network architecture have seen success on a wide variety of tasks that appear to require complex "cognitive branching" -- or the ability to maintain pursuit of one goal while accomplishing others. In cognitive neuroscience, success on such tasks is thought to rely on sophisticated frontostriatal mechanisms for selective \textit{gating}, which enable role-addressable updating -- and later readout -- of information to and from distinct "addresses" of memory, in the form of clusters of neurons. However, Transformer models have no such mechanisms intentionally built-in. It is thus an open question how Transformers solve such tasks, and whether the mechanisms that emerge to help them to do so bear any resemblance to the gating mechanisms in the human brain. In this work, we analyze the mechanisms that emerge within a vanilla attention-only Transformer trained on a simple sequence modeling task inspired by a task explicitly designed to study working memory gating in computational cognitive neuroscience. We find that, as a result of training, the self-attention mechanism within the Transformer specializes in a way that mirrors the input and output gating mechanisms which were explicitly incorporated into earlier, more biologically-inspired architectures. These results suggest opportunities for future research on computational similarities between modern AI architectures and models of the human brain.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 04:28:43 GMT" } ]
1,707,868,800,000
[ [ "Traylor", "Aaron", "" ], [ "Merullo", "Jack", "" ], [ "Frank", "Michael J.", "" ], [ "Pavlick", "Ellie", "" ] ]
2402.08236
Hongyuan Yang
Siqi Peng, Hongyuan Yang, Akihiro Yamamoto
BERT4FCA: A Method for Bipartite Link Prediction using Formal Concept Analysis and BERT
23 pages, 5 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose BERT4FCA, a novel method for link prediction in bipartite networks, using formal concept analysis (FCA) and BERT. Link prediction in bipartite networks is an important task that can solve various practical problems like friend recommendation in social networks and co-authorship prediction in author-paper networks. Recent research has found that in bipartite networks, maximal bi-cliques provide important information for link prediction, and they can be extracted by FCA. Some FCA-based bipartite link prediction methods have achieved good performance. However, we found that their performance could be further improved because these methods did not fully capture the rich information of the extracted maximal bi-cliques. To address this limitation, we propose an approach using BERT, which can learn more information from the maximal bi-cliques extracted by FCA and use them for link prediction. We conduct experiments on three real-world bipartite networks and demonstrate that our method outperforms previous FCA-based methods, and some classic methods such as matrix factorization and node2vec.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 06:02:05 GMT" } ]
1,707,868,800,000
[ [ "Peng", "Siqi", "" ], [ "Yang", "Hongyuan", "" ], [ "Yamamoto", "Akihiro", "" ] ]
2402.08250
Yifan Yang
Yifan Yang, Mingquan Lin, Han Zhao, Yifan Peng, Furong Huang, Zhiyong Lu
A survey of recent methods for addressing AI fairness and bias in biomedicine
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) systems have the potential to revolutionize clinical practices, including improving diagnostic accuracy and surgical decision-making, while also reducing costs and manpower. However, it is important to recognize that these systems may perpetuate social inequities or demonstrate biases, such as those based on race or gender. Such biases can occur before, during, or after the development of AI models, making it critical to understand and address potential biases to enable the accurate and reliable application of AI models in clinical settings. To mitigate bias concerns during model development, we surveyed recent publications on different debiasing methods in the fields of biomedical natural language processing (NLP) or computer vision (CV). Then we discussed the methods that have been applied in the biomedical domain to address bias. We performed our literature search on PubMed, ACM digital library, and IEEE Xplore for relevant articles published between January 2018 and December 2023 using multiple combinations of keywords. We then filtered the resulting 10,041 articles automatically with loose constraints, and manually inspected the abstracts of the remaining 890 articles to identify the 55 articles included in this review. Additional articles in the references are also included in this review. We discuss each method and compare its strengths and weaknesses. Finally, we review other potential methods from the general domain that could be applied to biomedicine to address bias and improve fairness. The bias of AIs in biomedicine can originate from multiple sources. Existing debiasing methods that focus on algorithms can be categorized into distributional or algorithmic.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 06:38:46 GMT" } ]
1,707,868,800,000
[ [ "Yang", "Yifan", "" ], [ "Lin", "Mingquan", "" ], [ "Zhao", "Han", "" ], [ "Peng", "Yifan", "" ], [ "Huang", "Furong", "" ], [ "Lu", "Zhiyong", "" ] ]
2402.08284
Takanori Ugai
Takanori Ugai, Yusuke Koyanagi, Fumihito Nishino
A Logical Approach to Criminal Case Investigation
11 pages, 11 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
XAI (eXplainable AI) techniques that have the property of explaining the reasons for their conclusions, i.e., explainability or interpretability, are attracting attention. XAI is expected to be used in the development of forensic science and the justice system. In today's forensic and criminal investigation environment, experts face many challenges due to large amounts of data, small pieces of evidence in a chaotic and complex environment, traditional laboratory structures and sometimes inadequate knowledge. All these can lead to failed investigations and miscarriages of justice. In this paper, we describe the application of one logical approach to crime scene investigation. The subject of the application is ``The Adventure of the Speckled Band'' from the Sherlock Holmes short stories. The applied data is the knowledge graph created for the Knowledge Graph Reasoning Challenge. We tried to find the murderer by inferring, for each person, their motive, opportunity, and method. We created an ontology of motives and methods of murder from dictionaries, added it to the knowledge graph of ``The Adventure of the Speckled Band'', and applied scripts to determine motives, opportunities, and methods.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 08:24:32 GMT" } ]
1,707,868,800,000
[ [ "Ugai", "Takanori", "" ], [ "Koyanagi", "Yusuke", "" ], [ "Nishino", "Fumihito", "" ] ]
2402.08298
Josu Ceberio
Josu Ceberio, Borja Calvo
Time to Stop and Think: What kind of research do we want to do?
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Experimentation is an intrinsic part of research in artificial intelligence since it allows for collecting quantitative observations, validating hypotheses, and providing evidence for their reformulation. For that reason, experimentation must be coherent with the purposes of the research, properly addressing the relevant questions in each case. Unfortunately, the literature is full of works whose experimentation is neither rigorous nor convincing, oftentimes designed to support prior beliefs rather than answering the relevant research questions. In this paper, we focus on the field of metaheuristic optimization, since it is our main field of work, and it is where we have observed the misconduct that has motivated this letter. Even if we limit the focus of this manuscript to the experimental part of the research, our main goal is to sow the seed of sincere critical assessment of our work, sparking a reflection process both at the individual and the community level. Such a reflection process is too complex and extensive to be tackled as a whole. Therefore, to bring our feet to the ground, we will include in this document our reflections about the role of experimentation in our work, discussing topics such as the use of benchmark instances vs instance generators, or the statistical assessment of empirical results. That is, all the statements included in this document are personal views and opinions, which can be shared by others or not. Certainly, having different points of view is the basis to establish a good discussion process.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 08:53:57 GMT" } ]
1,707,868,800,000
[ [ "Ceberio", "Josu", "" ], [ "Calvo", "Borja", "" ] ]
2402.08369
Sangwoo Shin
Sangwoo Shin, Daehee Lee, Minjong Yoo, Woo Kyung Kim, Honguk Woo
One-shot Imitation in a Non-Stationary Environment via Multi-Modal Skill
ICML-2023 Camera Ready Version
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
One-shot imitation is to learn a new task from a single demonstration, yet it is a challenging problem to adopt it for complex tasks with the high domain diversity inherent in a non-stationary environment. To tackle the problem, we explore the compositionality of complex tasks, and present a novel skill-based imitation learning framework enabling one-shot imitation and zero-shot adaptation; from a single demonstration for a complex unseen task, a semantic skill sequence is inferred and then each skill in the sequence is converted into an action sequence optimized for environmental hidden dynamics that can vary over time. Specifically, we leverage a vision-language model to learn a semantic skill set from offline video datasets, where each skill is represented on the vision-language embedding space, and adapt meta-learning with dynamics inference to enable zero-shot skill adaptation. We evaluate our framework with various one-shot imitation scenarios for extended multi-stage Meta-world tasks, showing its superiority in learning complex tasks, generalizing to dynamics changes, and extending to different demonstration conditions and modalities, compared to other baselines.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 11:01:52 GMT" } ]
1,707,868,800,000
[ [ "Shin", "Sangwoo", "" ], [ "Lee", "Daehee", "" ], [ "Yoo", "Minjong", "" ], [ "Kim", "Woo Kyung", "" ], [ "Woo", "Honguk", "" ] ]
2402.08423
Jianwu Fang
Peining Shen, Jianwu Fang, Hongkai Yu, and Jianru Xue
Vehicle Behavior Prediction by Episodic-Memory Implanted NDT
Accepted by ICRA2024
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In autonomous driving, predicting the behavior (turning left, stopping, etc.) of target vehicles is crucial for the self-driving vehicle to make safe decisions and avoid accidents. Existing deep learning-based methods have shown excellent and accurate performance, but their black-box nature makes them hard to trust in practical use. In this work, we explore the interpretability of behavior prediction of target vehicles by an Episodic Memory implanted Neural Decision Tree (abbrev. eMem-NDT). The structure of eMem-NDT is constructed by hierarchically clustering the text embedding of vehicle behavior descriptions. eMem-NDT is attached as a neural-backed part of a pre-trained deep learning model by replacing the soft-max layer of the deep model with eMem-NDT, grouping and aligning the memory prototypes of the historical vehicle behavior features in the training data on a neural decision tree. Each leaf node of eMem-NDT is modeled by a neural network for aligning the behavior memory prototypes. With eMem-NDT, we infer each vehicle behavior prediction instance by bottom-up Memory Prototype Matching (MPM) (searching the appropriate leaf node and the links to the root node) and top-down Leaf Link Aggregation (LLA) (obtaining the probability of future behaviors of vehicles for certain instances). We validate eMem-NDT on BLVD and LOKI datasets, and the results show that our model obtains superior performance to other methods, with clear explainability. The code is available at https://github.com/JWFangit/eMem-NDT.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 12:50:04 GMT" } ]
1,707,868,800,000
[ [ "Shen", "Peining", "" ], [ "Fang", "Jianwu", "" ], [ "Yu", "Hongkai", "" ], [ "Xue", "Jianru", "" ] ]
2402.08472
Christian Blum
Camilo Chac\'on Sartori and Christian Blum and Gabriela Ochoa
Large Language Models for the Automated Analysis of Optimization Algorithms
Submitted to the GECCO 2024 conference
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability of Large Language Models (LLMs) to generate high-quality text and code has fuelled their rise in popularity. In this paper, we aim to demonstrate the potential of LLMs within the realm of optimization algorithms by integrating them into STNWeb. This is a web-based tool for the generation of Search Trajectory Networks (STNs), which are visualizations of optimization algorithm behavior. Although visualizations produced by STNWeb can be very informative for algorithm designers, they often require a certain level of prior knowledge to be interpreted. In an attempt to bridge this knowledge gap, we have incorporated LLMs, specifically GPT-4, into STNWeb to produce extensive written reports, complemented by automatically generated plots, thereby enhancing the user experience and reducing the barriers to the adoption of this tool by the research community. Moreover, our approach can be expanded to other tools from the optimization community, showcasing the versatility and potential of LLMs in this field.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 14:05:02 GMT" } ]
1,707,868,800,000
[ [ "Sartori", "Camilo Chacón", "" ], [ "Blum", "Christian", "" ], [ "Ochoa", "Gabriela", "" ] ]
2402.08492
Xiaobo Liu
Xiaoqiang Liu, Yubin Wang, Zicheng Huang, Boming Xu, Yilin Zeng, Xinqi Chen, Zilong Wang, Enning Yang, Xiaoxuan Lei, Yisen Huang, Xiaobo Liu
The Application of ChatGPT in Responding to Questions Related to the Boston Bowel Preparation Scale
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Background: Colonoscopy, a crucial diagnostic tool in gastroenterology, depends heavily on superior bowel preparation. ChatGPT is a large language model with emergent intelligence that also exhibits potential in medical applications. This study aims to assess the accuracy and consistency of ChatGPT in using the Boston Bowel Preparation Scale (BBPS) for colonoscopy assessment. Methods: We retrospectively collected 233 colonoscopy images from 2020 to 2023. These images were evaluated using the BBPS by 3 senior endoscopists and 3 novice endoscopists. Additionally, ChatGPT also assessed these images, having been divided into three groups and undergone specific Fine-tuning. Consistency was evaluated through two rounds of testing. Results: In the initial round, ChatGPT's accuracy varied between 48.93% and 62.66%, trailing the endoscopists' accuracy of 76.68% to 77.83%. Kappa values for ChatGPT were between 0.52 and 0.53, compared to 0.75 to 0.87 for the endoscopists. Conclusion: While ChatGPT shows promise in bowel preparation scoring, it currently does not match the accuracy and consistency of experienced endoscopists. Future research should focus on in-depth Fine-tuning.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 14:38:12 GMT" } ]
1,707,868,800,000
[ [ "Liu", "Xiaoqiang", "" ], [ "Wang", "Yubin", "" ], [ "Huang", "Zicheng", "" ], [ "Xu", "Boming", "" ], [ "Zeng", "Yilin", "" ], [ "Chen", "Xinqi", "" ], [ "Wang", "Zilong", "" ], [ "Yang", "Enning", "" ], [ "Lei", "Xiaoxuan", "" ], [ "Huang", "Yisen", "" ], [ "Liu", "Xiaobo", "" ] ]
2402.08511
Cedric Derstroff
Cedric Derstroff, Jannis Brugger, Jannis Bl\"uml, Mira Mezini, Stefan Kramer, Kristian Kersting
Amplifying Exploration in Monte-Carlo Tree Search by Focusing on the Unknown
10 pages, 7 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monte-Carlo tree search (MCTS) is an effective anytime algorithm with a vast number of applications. It strategically allocates computational resources to focus on promising segments of the search tree, making it a very attractive search algorithm in large search spaces. However, it often expends its limited resources on reevaluating previously explored regions when they remain the most promising path. Our proposed methodology, denoted as AmEx-MCTS, solves this problem by introducing a novel MCTS formulation. Central to AmEx-MCTS is the decoupling of value updates, visit count updates, and the selected path during the tree search, thereby enabling the exclusion of already explored subtrees or leaves. This segregation preserves the utility of visit counts for both exploration-exploitation balancing and quality metrics within MCTS. The resultant augmentation facilitates a considerably broader search using identical computational resources, preserving the essential characteristics of MCTS. The expanded coverage not only yields more precise estimations but also proves instrumental in larger and more complex problems. Our empirical evaluation demonstrates the superior performance of AmEx-MCTS, surpassing classical MCTS and related approaches by a substantial margin.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 15:05:54 GMT" } ]
1,707,868,800,000
[ [ "Derstroff", "Cedric", "" ], [ "Brugger", "Jannis", "" ], [ "Blüml", "Jannis", "" ], [ "Mezini", "Mira", "" ], [ "Kramer", "Stefan", "" ], [ "Kersting", "Kristian", "" ] ]
2402.08514
Milad Kazemi
Milad Kazemi, Jessica Lally, Ekaterina Tishchenko, Hana Chockler and Nicola Paoletti
Counterfactual Influence in Markov Decision Processes
12 pages, 6 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs). Given an MDP path $\tau$, this kind of inference allows us to derive counterfactual paths $\tau'$ describing what-if versions of $\tau$ obtained under different action sequences than those observed in $\tau$. However, as the counterfactual states and actions deviate from the observed ones over time, the observation $\tau$ may no longer influence the counterfactual world, meaning that the analysis is no longer tailored to the individual observation, resulting in interventional outcomes rather than counterfactual ones. Even though this issue specifically affects the popular Gumbel-max structural causal model used for MDP counterfactuals, it has remained overlooked until now. In this work, we introduce a formal characterisation of influence based on comparing counterfactual and interventional distributions. We devise an algorithm to construct counterfactual models that automatically satisfy influence constraints. Leveraging such models, we derive counterfactual policies that are not just optimal for a given reward structure but also remain tailored to the observed path. Even though there is an unavoidable trade-off between policy optimality and strength of influence constraints, our experiments demonstrate that it is possible to derive (near-)optimal policies while remaining under the influence of the observation.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 15:10:30 GMT" } ]
1,707,868,800,000
[ [ "Kazemi", "Milad", "" ], [ "Lally", "Jessica", "" ], [ "Tishchenko", "Ekaterina", "" ], [ "Chockler", "Hana", "" ], [ "Paoletti", "Nicola", "" ] ]
2402.08646
Hiroyuki Kido
Hiroyuki Kido
Inference of Abstraction for a Unified Account of Symbolic Reasoning from Data
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by empirical work in neuroscience for Bayesian approaches to brain function, we give a unified probabilistic account of various types of symbolic reasoning from data. We characterise them in terms of formal logic using the classical consequence relation, an empirical consequence relation, maximal consistent sets, maximal possible sets and maximum likelihood estimation. The theory gives new insights into reasoning towards human-like machine intelligence.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 18:24:23 GMT" } ]
1,707,868,800,000
[ [ "Kido", "Hiroyuki", "" ] ]
2402.08670
Yu Wang
Yuqing Liu, Yu Wang, Lichao Sun, Philip S. Yu
Rec-GPT4V: Multimodal Recommendation with Large Vision-Language Models
under review
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of large vision-language models (LVLMs) offers the potential to address challenges faced by traditional multimodal recommendations thanks to their proficient understanding of static images and textual dynamics. However, the application of LVLMs in this field is still limited due to the following complexities: First, LVLMs lack user preference knowledge as they are trained from vast general datasets. Second, LVLMs suffer setbacks in addressing multiple image dynamics in scenarios involving discrete, noisy, and redundant image sequences. To overcome these issues, we propose a novel reasoning scheme named Rec-GPT4V: Visual-Summary Thought (VST), which leverages large vision-language models for multimodal recommendation. We utilize user history as in-context user preferences to address the first challenge. Next, we prompt LVLMs to generate item image summaries and utilize image comprehension in natural language space combined with item titles to query the user preferences over candidate items. We conduct comprehensive experiments across four datasets with three LVLMs: GPT4-V, LLaVa-7b, and LLaVa-13b. The numerical results indicate the efficacy of VST.
[ { "version": "v1", "created": "Tue, 13 Feb 2024 18:51:18 GMT" } ]
1,707,868,800,000
[ [ "Liu", "Yuqing", "" ], [ "Wang", "Yu", "" ], [ "Sun", "Lichao", "" ], [ "Yu", "Philip S.", "" ] ]