Dataset schema (field: type, observed range across records):
- id: string, length 9-10
- submitter: string, length 5-47
- authors: string, length 5-1.72k
- title: string, length 11-234
- comments: string, length 1-491
- journal-ref: string, length 4-396
- doi: string, length 13-97
- report-no: string, length 4-138
- categories: string, 1 distinct value
- license: string, 9 distinct values
- abstract: string, length 29-3.66k
- versions: list, length 1-21
- update_date: int64, range 1,180B-1,718B (Unix epoch, milliseconds)
- authors_parsed: list, length 1-98

Each record below lists these fields in order, one per line; null marks an empty field.
2402.08780
Sagar Pathak
Sagar Pathak, Bidhya Shrestha and Kritish Pahi
Enhanced Deep Q-Learning for 2D Self-Driving Cars: Implementation and Evaluation on a Custom Track Environment
8 pages, 8 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This research project presents the implementation of a Deep Q-Learning Network (DQN) for a self-driving car on a 2-dimensional (2D) custom track, with the objective of enhancing the DQN's performance. It encompasses the development of a custom driving environment using Pygame on a track surrounding the University of Memphis map, as well as the design and implementation of the DQN model. The algorithm utilizes data from 7 sensors installed in the car, which measure the distance between the car and the track. These sensors are positioned in front of the vehicle, spaced 20 degrees apart, enabling them to sense a wide area ahead. We successfully implemented the DQN as well as a modified version with a priority-based action selection mechanism, which we refer to as the modified DQN. The model was trained over 1000 episodes, and the average reward received by the agent was found to be around 40, approximately 60% higher than that of the original DQN and around 50% higher than that of the vanilla neural network.
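As a rough illustration of the setup this abstract describes (a state of 7 ray-distance readings fed to a Q-network), here is a minimal PyTorch sketch; the network size, action set, and normalization are assumptions, not details from the paper.

```python
import random
import torch
import torch.nn as nn

N_SENSORS = 7   # rays spaced 20 degrees apart, per the abstract
N_ACTIONS = 3   # hypothetical action set: steer left, steer right, go straight

class DQN(nn.Module):
    """Small MLP mapping the 7 sensor distances to Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def select_action(model, state, eps=0.1):
    """Epsilon-greedy action selection over predicted Q-values."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(model(state).argmax().item())

model = DQN()
state = torch.rand(N_SENSORS)  # distances from the 7 forward rays, normalized
print(select_action(model, state))
```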
[ { "version": "v1", "created": "Tue, 13 Feb 2024 20:29:36 GMT" } ]
1,707,955,200,000
[ [ "Pathak", "Sagar", "" ], [ "Shrestha", "Bidhya", "" ], [ "Pahi", "Kritish", "" ] ]
2402.08806
Gioele Barabucci
Gioele Barabucci, Victor Shia, Eugene Chu, Benjamin Harack, Nathan Fu
Combining Insights From Multiple Large Language Models Improves Diagnostic Accuracy
5 pages, 2 figures, 1 table
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Large language models (LLMs) such as OpenAI's GPT-4 or Google's PaLM 2 are proposed as viable diagnostic support tools or even spoken of as replacements for "curbside consults". However, even LLMs specifically trained on medical topics may lack sufficient diagnostic accuracy for real-life applications. Methods: Using collective intelligence methods and a dataset of 200 clinical vignettes of real-life cases, we assessed and compared the accuracy of differential diagnoses obtained by asking individual commercial LLMs (OpenAI GPT-4, Google PaLM 2, Cohere Command, Meta Llama 2) against the accuracy of differential diagnoses synthesized by aggregating responses from combinations of the same LLMs. Results: We find that aggregating responses from multiple, various LLMs leads to more accurate differential diagnoses (average accuracy for 3 LLMs: $75.3\%\pm 1.6pp$) compared to the differential diagnoses produced by single LLMs (average accuracy for single LLMs: $59.0\%\pm 6.1pp$). Discussion: The use of collective intelligence methods to synthesize differential diagnoses combining the responses of different LLMs achieves two of the necessary steps towards advancing acceptance of LLMs as a diagnostic support tool: (1) demonstrate high diagnostic accuracy and (2) eliminate dependence on a single commercial vendor.
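The paper's "collective intelligence" aggregation method is not spelled out in this abstract; one plausible instantiation is a Borda-style rank aggregation of the ranked differential-diagnosis lists returned by each model, sketched below (the function and example data are illustrative).

```python
from collections import defaultdict

def aggregate_differentials(rankings, top_k=5):
    """Borda-style aggregation: a diagnosis ranked higher by a model
    receives more points; ties are broken alphabetically."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for pos, dx in enumerate(ranking):
            scores[dx.lower()] += n - pos
    return sorted(scores, key=lambda dx: (-scores[dx], dx))[:top_k]

# Hypothetical outputs from three models for one clinical vignette:
gpt4 = ["pulmonary embolism", "pneumonia", "pericarditis"]
palm2 = ["pneumonia", "pulmonary embolism", "pleuritis"]
command = ["pulmonary embolism", "pleuritis", "pneumonia"]
print(aggregate_differentials([gpt4, palm2, command]))
```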
[ { "version": "v1", "created": "Tue, 13 Feb 2024 21:24:21 GMT" } ]
1,707,955,200,000
[ [ "Barabucci", "Gioele", "" ], [ "Shia", "Victor", "" ], [ "Chu", "Eugene", "" ], [ "Harack", "Benjamin", "" ], [ "Fu", "Nathan", "" ] ]
2402.08859
Yingpeng Du
Yingpeng Du, Ziyan Wang, Zhu Sun, Haoyan Chua, Hongzhi Liu, Zhonghai Wu, Yining Ma, Jie Zhang, Youchen Sun
Large Language Model with Graph Convolution for Recommendation
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, efforts have been made to use text information for better user profiling and item characterization in recommendations. However, text information can sometimes be of low quality, hindering its effectiveness for real-world applications. With knowledge and reasoning capabilities encapsulated in Large Language Models (LLMs), utilizing LLMs emerges as a promising way to improve descriptions. However, existing ways of prompting LLMs with raw texts ignore the structured knowledge of user-item interactions, which may lead to hallucination problems like inconsistent description generation. To this end, we propose a Graph-aware Convolutional LLM method to elicit LLMs to capture high-order relations in the user-item graph. To adapt text-based LLMs to structured graphs, we use the LLM as an aggregator in graph processing, allowing it to understand graph-based information step by step. Specifically, the LLM is tasked with description enhancement by exploring multi-hop neighbors layer by layer, thereby propagating information progressively through the graph. To enable LLMs to capture large-scale graph information, we break down the description task into smaller parts, which drastically reduces the context length of the token input at each step. Extensive experiments on three real-world datasets show that our method consistently outperforms state-of-the-art methods.
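A minimal sketch of the layer-by-layer "LLM as aggregator" idea, assuming a generic chat-completion call (`call_llm` is a placeholder stub, not the paper's API): each layer rewrites every node's description using only its 1-hop neighbors, so multi-hop information propagates one hop per layer and each prompt stays short.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; here it just echoes a prefix."""
    return prompt[:200]

def enhance_descriptions(graph, descriptions, num_layers=2):
    """graph: node id -> neighbor ids; descriptions: node id -> text."""
    for _ in range(num_layers):
        updated = {}
        for node, neighbors in graph.items():
            neighbor_text = "\n".join(descriptions[n] for n in neighbors)
            prompt = (
                f"Description:\n{descriptions[node]}\n\n"
                f"Neighbor descriptions:\n{neighbor_text}\n\n"
                "Rewrite the first description, enriching it only with "
                "information supported by the neighbors."
            )
            updated[node] = call_llm(prompt)
        descriptions = updated  # one hop of propagation per layer
    return descriptions

graph = {"u1": ["i1"], "i1": ["u1"]}
desc = {"u1": "User who watches sci-fi films.", "i1": "A space opera movie."}
print(enhance_descriptions(graph, desc, num_layers=1)["i1"])
```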
[ { "version": "v1", "created": "Wed, 14 Feb 2024 00:04:33 GMT" } ]
1,707,955,200,000
[ [ "Du", "Yingpeng", "" ], [ "Wang", "Ziyan", "" ], [ "Sun", "Zhu", "" ], [ "Chua", "Haoyan", "" ], [ "Liu", "Hongzhi", "" ], [ "Wu", "Zhonghai", "" ], [ "Ma", "Yining", "" ], [ "Zhang", "Jie", "" ], [ "Sun", "Youchen", "" ] ]
2402.08869
Stefan Erben
Stefan Erben and Andreas Waldis
ScamSpot: Fighting Financial Fraud in Instagram Comments
EACL 2024 Demo Paper, 11 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The long-standing problem of spam and fraudulent messages in the comment sections of Instagram pages in the financial sector claims new victims every day. Instagram's current spam filter proves inadequate, and existing research approaches are primarily confined to theoretical concepts; practical implementations with evaluated results are missing. To solve this problem, we propose ScamSpot, a comprehensive system that includes a browser extension, a fine-tuned BERT model and a REST API. This approach ensures public accessibility of our results for Instagram users of the Chrome browser. Furthermore, we conduct a data annotation study, shedding light on the reasons and causes of the problem, and evaluate the system through user feedback and comparison with existing models. ScamSpot is an open-source project and is publicly available at https://scamspot.github.io/.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 00:30:18 GMT" } ]
1,707,955,200,000
[ [ "Erben", "Stefan", "" ], [ "Waldis", "Andreas", "" ] ]
2402.08961
Zhao Li
Zhao Li, Xin Wang, Jun Zhao, Wenbin Guo, Jianxin Li
HyCubE: Efficient Knowledge Hypergraph 3D Circular Convolutional Embedding
14 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge hypergraph embedding models are usually computationally expensive due to the inherently complex semantic information. However, existing works mainly focus on improving the effectiveness of knowledge hypergraph embedding, making the model architecture more complex and redundant. It is desirable and challenging for knowledge hypergraph embedding to reach a trade-off between model effectiveness and efficiency. In this paper, we propose an end-to-end efficient n-ary knowledge hypergraph embedding model, HyCubE, which designs a novel 3D circular convolutional neural network and an alternate mask stack strategy to comprehensively enhance the interaction and extraction of feature information. Furthermore, our proposed model achieves a better trade-off between effectiveness and efficiency by adaptively adjusting the 3D circular convolutional layer structure to handle knowledge hypergraphs of different arities with fewer parameters. In addition, we use 1-N multilinear scoring based on the entity mask mechanism to further accelerate model training. Finally, extensive experimental results on all datasets demonstrate that our proposed model consistently outperforms state-of-the-art baselines, with an average improvement of 7.30%-9.53% and a maximum improvement of 33.82% across all metrics. Meanwhile, HyCubE is 4.12x faster, uses 52.19% less GPU memory, and has 85.21% fewer parameters compared with the average of the latest state-of-the-art baselines.
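The 1-N scoring mentioned here is a standard knowledge-graph-embedding trick: score one masked-position query against every candidate entity in a single matrix multiply instead of N separate 1-1 evaluations. A generic sketch (HyCubE's actual feature extractor is not reproduced):

```python
import torch

n_entities, dim = 1000, 64
entity_emb = torch.nn.Embedding(n_entities, dim)

def score_1_to_n(query_vec):
    """One query vector (for a tuple with one entity masked) scored
    against all entities at once: (dim,) @ (dim, n_entities) -> (n_entities,)."""
    return query_vec @ entity_emb.weight.T

query = torch.randn(dim)  # placeholder for the model's extracted features
scores = score_1_to_n(query)
print(scores.shape)
```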
[ { "version": "v1", "created": "Wed, 14 Feb 2024 06:05:37 GMT" }, { "version": "v2", "created": "Mon, 3 Jun 2024 15:17:46 GMT" } ]
1,717,459,200,000
[ [ "Li", "Zhao", "" ], [ "Wang", "Xin", "" ], [ "Zhao", "Jun", "" ], [ "Guo", "Wenbin", "" ], [ "Li", "Jianxin", "" ] ]
2402.08968
Siwon Kim
Siwon Kim, Shuyang Dai, Mohammad Kachuee, Shayan Ray, Tara Taghavi, and Sungroh Yoon
GrounDial: Human-norm Grounded Safe Dialog Response Generation
Accepted to findings of EACL 2024
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Current conversational AI systems based on large language models (LLMs) are known to generate unsafe responses, agreeing with offensive user input or including toxic content. Previous research aimed to alleviate this toxicity by fine-tuning LLMs with manually annotated safe dialogue histories. However, the dependency on additional tuning requires substantial costs. To remove this dependency, we propose GrounDial, where response safety is achieved by grounding responses in commonsense social rules without requiring fine-tuning. GrounDial's hybrid approach of in-context learning and human-norm-guided decoding enables responses to be quantitatively and qualitatively safer even without additional data or tuning.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 06:25:50 GMT" } ]
1,707,955,200,000
[ [ "Kim", "Siwon", "" ], [ "Dai", "Shuyang", "" ], [ "Kachuee", "Mohammad", "" ], [ "Ray", "Shayan", "" ], [ "Taghavi", "Tara", "" ], [ "Yoon", "Sungroh", "" ] ]
2402.09047
Tuo Leng
Yiming He, Jia Zou, Xiaokai Zhang, Na Zhu, Tuo Leng
FGeo-TP: A Language Model-Enhanced Solver for Geometry Problems
16 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The application of contemporary artificial intelligence techniques to geometric problems and automated deductive proof has always been a grand challenge for the interdisciplinary field of mathematics and artificial intelligence. This is the fourth article in a series of our works. In our previous work, we established a geometric formal system known as FormalGeo. Moreover, we annotated approximately 7000 geometric problems, forming the FormalGeo7k dataset. Although FGPS (Formal Geometry Problem Solver) can achieve interpretable algebraic equation solving and human-like deductive reasoning, it often experiences timeouts due to the complexity of its search strategy. In this paper, we introduce FGeo-TP (Theorem Predictor), which utilizes a language model to predict theorem sequences for solving geometry problems. We compared the effectiveness of various Transformer architectures, such as BART and T5, for theorem prediction, implementing pruning in the search process of FGPS and thereby improving its performance in solving geometry problems. Our results demonstrate a significant increase in the problem-solving rate of the language model-enhanced FGeo-TP on the FormalGeo7k dataset, rising from 39.7% to 80.86%. Furthermore, FGeo-TP exhibits notable reductions in solving time and search steps across problems of varying difficulty levels.
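One simple way to realize the pruning described here is to let the predicted theorem sequence dictate the order in which the search expands candidates, falling back to the full theorem set when a prediction fails. A hedged sketch (interface names are illustrative, not FGPS's):

```python
def pruned_candidates(all_theorems, predicted_seq, step):
    """At search depth `step`, try the predictor's suggestion first and
    keep the remaining theorems only as a fallback."""
    if step < len(predicted_seq) and predicted_seq[step] in all_theorems:
        first = predicted_seq[step]
        return [first] + [t for t in all_theorems if t != first]
    return list(all_theorems)

print(pruned_candidates({"t1", "t2", "t3"}, ["t2", "t1"], step=0))
```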
[ { "version": "v1", "created": "Wed, 14 Feb 2024 09:44:28 GMT" } ]
1,707,955,200,000
[ [ "He", "Yiming", "" ], [ "Zou", "Jia", "" ], [ "Zhang", "Xiaokai", "" ], [ "Zhu", "Na", "" ], [ "Leng", "Tuo", "" ] ]
2402.09051
Tuo Leng
Jia Zou, Xiaokai Zhang, Yiming He, Na Zhu, Tuo Leng
FGeo-DRL: Deductive Reasoning for Geometric Problems through Deep Reinforcement Learning
15 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-like automatic deductive reasoning has always been one of the most challenging open problems in the interdisciplinary field of mathematics and artificial intelligence. This paper is the third in a series of our works. We built a neural-symbolic system, called FGeoDRL, to automatically perform human-like geometric deductive reasoning. The neural part is an AI agent based on reinforcement learning, capable of autonomously learning problem-solving methods from the feedback of a formalized environment, without the need for human supervision. It leverages a pre-trained natural language model to establish a policy network for theorem selection and employs Monte Carlo Tree Search for heuristic exploration. The symbolic part is a reinforcement learning environment based on geometry formalization theory and FormalGeo, which models geometry problem solving (GPS) as a Markov Decision Process. In this formal symbolic system, the known conditions and objectives of the problem form the state space, while the set of theorems forms the action space. Leveraging FGeoDRL, we have achieved readable and verifiable automated solutions to geometric problems. Experiments conducted on the FormalGeo7k dataset have achieved a problem-solving success rate of 86.40%. The project is available at https://github.com/PersonNoName/FGeoDRL.
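The MDP framing in this abstract (state = known conditions plus goal, action = theorem application) can be sketched directly; the toy `apply_theorem` rule table below is an illustrative stand-in for the FormalGeo environment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeometryState:
    conditions: frozenset  # known facts
    goal: str              # target fact

def step(state, theorem, apply_theorem):
    """One transition: applying a theorem adds derived conditions.
    Reward 1.0 when the goal is derived, else a small step cost."""
    new_conditions = state.conditions | apply_theorem(state.conditions, theorem)
    done = state.goal in new_conditions
    reward = 1.0 if done else -0.01
    return GeometryState(new_conditions, state.goal), reward, done

def apply_theorem(conditions, theorem):
    rules = {"t_angle_sum": {"angle_C=80"}}  # toy stand-in rule table
    return rules.get(theorem, set())

s = GeometryState(frozenset({"angle_A=60", "angle_B=40"}), goal="angle_C=80")
print(step(s, "t_angle_sum", apply_theorem))
```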
[ { "version": "v1", "created": "Wed, 14 Feb 2024 09:48:39 GMT" }, { "version": "v2", "created": "Thu, 15 Feb 2024 04:50:52 GMT" } ]
1,708,041,600,000
[ [ "Zou", "Jia", "" ], [ "Zhang", "Xiaokai", "" ], [ "He", "Yiming", "" ], [ "Zhu", "Na", "" ], [ "Leng", "Tuo", "" ] ]
2402.09052
Yutaro Yamada
Yutaro Yamada, Khyathi Chandu, Yuchen Lin, Jack Hessel, Ilker Yildirim, Yejin Choi
L3GO: Language Agents with Chain-of-3D-Thoughts for Generating Unconventional Objects
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion-based image generation models such as DALL-E 3 and Stable Diffusion-XL demonstrate remarkable capabilities in generating images with realistic and unique compositions. Yet, these models are not robust in precisely reasoning about physical and spatial configurations of objects, especially when instructed with unconventional, and thereby out-of-distribution, descriptions, such as "a chair with five legs". In this paper, we propose a language agent with chain-of-3D-thoughts (L3GO), an inference-time approach that can reason about part-based 3D mesh generation of unconventional objects that current data-driven diffusion models struggle with. More concretely, we use large language models as agents to compose a desired object via trial-and-error within a 3D simulation environment. To facilitate our investigation, we develop a new benchmark, Unconventionally Feasible Objects (UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender where language agents can build and compose atomic building blocks via API calls. Human and automatic GPT-4V evaluations show that our approach surpasses the standard GPT-4 and other language agents (e.g., ReAct and Reflexion) for 3D mesh generation on ShapeNet. Moreover, when tested on our UFO benchmark, our approach outperforms other state-of-the-art text-to-2D-image and text-to-3D models based on human evaluation.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 09:51:05 GMT" } ]
1,707,955,200,000
[ [ "Yamada", "Yutaro", "" ], [ "Chandu", "Khyathi", "" ], [ "Lin", "Yuchen", "" ], [ "Hessel", "Jack", "" ], [ "Yildirim", "Ilker", "" ], [ "Choi", "Yejin", "" ] ]
2402.09085
Oliver Broadrick
Oliver Broadrick, Honghua Zhang, Guy Van den Broeck
Polynomial Semantics of Tractable Probabilistic Circuits
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic circuits compute multilinear polynomials that represent multivariate probability distributions. They are tractable models that support efficient marginal inference. However, various polynomial semantics have been considered in the literature (e.g., network polynomials, likelihood polynomials, generating functions, and Fourier transforms). The relationships between circuit representations of these polynomial encodings of distributions are largely unknown. In this paper, we prove that for distributions over binary variables, each of these probabilistic circuit models is equivalent in the sense that any circuit for one of them can be transformed into a circuit for any of the others with only a polynomial increase in size. They are therefore all tractable for marginal inference on the same class of distributions. Finally, we explore the natural extension of one such polynomial semantics, called probabilistic generating circuits, to categorical random variables, and establish that inference becomes #P-hard.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 11:02:04 GMT" }, { "version": "v2", "created": "Sun, 28 Apr 2024 19:34:38 GMT" } ]
1,714,435,200,000
[ [ "Broadrick", "Oliver", "" ], [ "Zhang", "Honghua", "" ], [ "Broeck", "Guy Van den", "" ] ]
2402.09099
Xiongye Xiao
Xiongye Xiao, Chenyu Zhou, Heng Ping, Defu Cao, Yaxing Li, Yizhuo Zhou, Shixuan Li, Paul Bogdan
Exploring Neuron Interactions and Emergence in LLMs: From the Multifractal Analysis Perspective
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Prior studies on the emergence in large models have primarily focused on how the functional capabilities of large language models (LLMs) scale with model size. Our research, however, transcends this traditional paradigm, aiming to deepen our understanding of the emergence within LLMs by placing a special emphasis not just on the model size but more significantly on the complex behavior of neuron interactions during the training process. By introducing the concepts of "self-organization" and "multifractal analysis," we explore how neuron interactions dynamically evolve during training, leading to "emergence," mirroring the phenomenon in natural systems where simple micro-level interactions give rise to complex macro-level behaviors. To quantitatively analyze the continuously evolving interactions among neurons in large models during training, we propose the Neuron-based Multifractal Analysis (NeuroMFA). Utilizing NeuroMFA, we conduct a comprehensive examination of the emergent behavior in LLMs through the lens of both model size and training process, paving new avenues for research into the emergence in large models.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 11:20:09 GMT" }, { "version": "v2", "created": "Mon, 4 Mar 2024 11:22:38 GMT" }, { "version": "v3", "created": "Tue, 5 Mar 2024 10:44:36 GMT" }, { "version": "v4", "created": "Thu, 21 Mar 2024 05:33:23 GMT" } ]
1,711,065,600,000
[ [ "Xiao", "Xiongye", "" ], [ "Zhou", "Chenyu", "" ], [ "Ping", "Heng", "" ], [ "Cao", "Defu", "" ], [ "Li", "Yaxing", "" ], [ "Zhou", "Yizhuo", "" ], [ "Li", "Shixuan", "" ], [ "Bogdan", "Paul", "" ] ]
2402.09147
Teddy Ferdinan
Teddy Ferdinan, Jan Koco\'n, Przemys{\l}aw Kazienko
Into the Unknown: Self-Learning Large Language Models
16 pages, 12 figures, 4 tables, submitted to ACL SRW 2024
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the main problem of self-learning LLMs: the question of what to learn. We propose a self-learning LLM framework that enables an LLM to independently learn previously unknown knowledge through self-assessment of its own hallucinations. Using the hallucination score, we introduce a new concept of Points in the Unknown (PiUs), along with one extrinsic and three intrinsic methods for automatic PiUs identification. This facilitates the creation of a self-learning loop that focuses exclusively on the knowledge gap at Points in the Unknown, resulting in a reduced hallucination score. We also developed evaluation metrics for gauging an LLM's self-learning capability. Our experiments revealed that 7B-Mistral models that have been fine-tuned or aligned, as well as RWKV5-Eagle, are capable of self-learning considerably well. Our self-learning concept allows more efficient LLM updates and opens new perspectives for knowledge exchange. It may also increase public trust in AI.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 12:56:58 GMT" }, { "version": "v2", "created": "Tue, 4 Jun 2024 12:44:46 GMT" } ]
1,717,545,600,000
[ [ "Ferdinan", "Teddy", "" ], [ "Kocoń", "Jan", "" ], [ "Kazienko", "Przemysław", "" ] ]
2402.09266
Andres Molares-Ulloa
Andres Molares-Ulloa, Enrique Fernandez-Blanco, Alejandro Pazos and Daniel Rivero
Machine Learning in management of precautionary closures caused by lipophilic biotoxins
null
Computers and Electronics in Agriculture, 197, 106956. (2022)
10.1016/j.compag.2022.106956
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mussel farming is one of the most important aquaculture industries. The main risk to mussel farming is harmful algal blooms (HABs), which can make mussels unsafe for human consumption. In Galicia, the main Spanish producer of cultivated mussels, the opening and closing of the production areas is controlled by a monitoring program. In addition to the closures resulting from the presence of toxicity exceeding the legal threshold, precautionary closures may be applied in the absence of a confirmatory sampling when risk factors exist. These decisions are made by experts without the support or formalisation of the experience on which they are based. Therefore, this work proposes a predictive model capable of supporting the application of precautionary closures. Achieving sensitivity, accuracy and kappa index values of 97.34%, 91.83% and 0.75 respectively, the kNN algorithm provided the best results. This allows the creation of a system capable of helping in complex situations where forecast errors are more common.
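For readers who want to reproduce the modelling setup in spirit, a minimal scikit-learn sketch of a kNN classifier evaluated with the three reported metrics follows; the features and labels here are synthetic stand-ins, since the monitoring dataset is not included.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))      # stand-in monitoring features
y = rng.integers(0, 2, size=300)   # 1 = precautionary closure advised

pred = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
print("sensitivity:", recall_score(y, pred))
print("accuracy:", accuracy_score(y, pred))
print("kappa:", cohen_kappa_score(y, pred))
```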
[ { "version": "v1", "created": "Wed, 14 Feb 2024 15:51:58 GMT" } ]
1,707,955,200,000
[ [ "Molares-Ulloa", "Andres", "" ], [ "Fernandez-Blanco", "Enrique", "" ], [ "Pazos", "Alejandro", "" ], [ "Rivero", "Daniel", "" ] ]
2402.09334
Maryam Amirizaniani
Maryam Amirizaniani, Tanya Roosta, Aman Chadha, Chirag Shah
AuditLLM: A Tool for Auditing Large Language Models Using Multiprobe Approach
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
As Large Language Models (LLMs) gain wider adoption in various contexts, it becomes crucial to ensure they are reasonably safe, consistent, and reliable for the application at hand. This may require probing or auditing them. Probing LLMs with varied iterations of a single question could reveal potential inconsistencies in their knowledge or functionality. However, a tool for performing such audits with a simple workflow and a low technical threshold is lacking. In this demo, we introduce "AuditLLM," a novel tool designed to evaluate the performance of various LLMs in a methodical way. AuditLLM's core functionality lies in its ability to test a given LLM by auditing it using multiple probes generated from a single question, thereby identifying any inconsistencies in the model's understanding or operation. A reasonably robust, reliable, and consistent LLM should output semantically similar responses to a question asked differently or by different people. Based on this assumption, AuditLLM produces easily interpretable results regarding the LLM's consistency on a single question that the user enters. A certain level of inconsistency has been shown to be an indicator of potential bias, hallucinations, and other issues. One could then use the output of AuditLLM to further investigate issues with the aforementioned LLM. To facilitate demonstration and practical use, AuditLLM offers two key modes: (1) a Live mode, which allows instant auditing of LLMs by analyzing responses to real-time queries; and (2) a Batch mode, which facilitates comprehensive LLM auditing by processing multiple queries at once for in-depth analysis. This tool is beneficial for both researchers and general users, as it enhances our understanding of LLMs' capabilities in generating responses through a standardized auditing platform.
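The core audit signal, consistency across paraphrased probes, can be approximated cheaply; the sketch below uses TF-IDF cosine similarity as a lightweight stand-in for whatever semantic-similarity model such a tool would actually use.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def consistency_score(responses):
    """Mean pairwise similarity between an LLM's answers to paraphrased
    probes of one question; lower scores flag possible inconsistency."""
    vecs = TfidfVectorizer().fit_transform(responses)
    sims = [cosine_similarity(vecs[i], vecs[j])[0, 0]
            for i, j in combinations(range(len(responses)), 2)]
    return sum(sims) / len(sims)

answers = [
    "The capital of Australia is Canberra.",
    "Canberra is Australia's capital city.",
    "Sydney is the capital of Australia.",  # the inconsistent answer
]
print(round(consistency_score(answers), 3))
```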
[ { "version": "v1", "created": "Wed, 14 Feb 2024 17:31:04 GMT" } ]
1,707,955,200,000
[ [ "Amirizaniani", "Maryam", "" ], [ "Roosta", "Tanya", "" ], [ "Chadha", "Aman", "" ], [ "Shah", "Chirag", "" ] ]
2402.09346
Maryam Amirizaniani
Maryam Amirizaniani, Jihan Yao, Adrian Lavergne, Elizabeth Snell Okada, Aman Chadha, Tanya Roosta, Chirag Shah
LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
As Large Language Models (LLMs) become more pervasive across various users and scenarios, identifying potential issues when using these models becomes essential. Examples of such issues include bias, inconsistencies, and hallucination. Although auditing the LLM for these problems is often warranted, such a process is neither easy nor accessible for most. An effective method is to probe the LLM using different versions of the same question. This could expose inconsistencies in its knowledge or operation, indicating potential for bias or hallucination. However, to operationalize this auditing method at scale, we need an approach to create those probes reliably and automatically. In this paper we propose the LLMAuditor framework, an automatic and scalable solution in which one uses a different LLM along with a human-in-the-loop (HIL). This approach offers verifiability and transparency, while avoiding circular reliance on the same LLM and increasing scientific rigor and generalizability. Specifically, LLMAuditor includes two phases of human verification: standardized evaluation criteria to verify responses, and a structured prompt template to generate desired probes. A case study using questions from the TruthfulQA dataset demonstrates that we can generate a reliable set of probes from one LLM that can be used to audit inconsistencies in a different LLM. This process is enhanced by our structured prompt template with HIL, which not only boosts the reliability of our approach to auditing but also yields less hallucinated results. The novelty of our research stems from the development of a comprehensive, general-purpose framework that includes a HIL-verified prompt template for auditing responses generated by LLMs.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 17:49:31 GMT" }, { "version": "v2", "created": "Fri, 16 Feb 2024 16:58:20 GMT" }, { "version": "v3", "created": "Wed, 22 May 2024 17:17:03 GMT" } ]
1,716,508,800,000
[ [ "Amirizaniani", "Maryam", "" ], [ "Yao", "Jihan", "" ], [ "Lavergne", "Adrian", "" ], [ "Okada", "Elizabeth Snell", "" ], [ "Chadha", "Aman", "" ], [ "Roosta", "Tanya", "" ], [ "Shah", "Chirag", "" ] ]
2402.09388
Harrison Delecki
Harrison Delecki, Marcell Vazquez-Chanlatte, Esen Yel, Kyle Wray, Tomer Arnon, Stefan Witwicki, Mykel J. Kochenderfer
Entropy-regularized Point-based Value Iteration
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model-based planners for partially observable problems must accommodate both model uncertainty during planning and goal uncertainty during objective inference. However, model-based planners may be brittle under these types of uncertainty because they rely on an exact model and tend to commit to a single optimal behavior. Inspired by results in the model-free setting, we propose an entropy-regularized model-based planner for partially observable problems. Entropy regularization promotes policy robustness for planning and objective inference by encouraging policies to be no more committed to a single action than necessary. We evaluate the robustness and objective inference performance of entropy-regularized policies in three problem domains. Our results show that entropy-regularized policies outperform non-entropy-regularized baselines in terms of higher expected returns under modeling errors and higher accuracy during objective inference.
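For concreteness, the standard entropy-regularized ("soft") value backup from the model-free literature the abstract cites as inspiration looks as follows, with temperature $\tau$ controlling how committed the policy is; this is the generic form, not necessarily the paper's exact equations: $$Q(b,a) = R(b,a) + \gamma \sum_{o} P(o \mid b,a)\, V(b'_{a,o}), \qquad V(b) = \tau \log \sum_{a} \exp\!\big(Q(b,a)/\tau\big), \qquad \pi(a \mid b) \propto \exp\!\big(Q(b,a)/\tau\big).$$ As $\tau \to 0$, the log-sum-exp collapses to a max and the standard point-based value iteration backup is recovered.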
[ { "version": "v1", "created": "Wed, 14 Feb 2024 18:37:47 GMT" } ]
1,707,955,200,000
[ [ "Delecki", "Harrison", "" ], [ "Vazquez-Chanlatte", "Marcell", "" ], [ "Yel", "Esen", "" ], [ "Wray", "Kyle", "" ], [ "Arnon", "Tomer", "" ], [ "Witwicki", "Stefan", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
2402.09413
Joseph Y. Halpern
Joseph Y. Halpern
Mathematical Explanations
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A definition is given of what counts as an explanation of a mathematical statement, and of when one explanation is better than another. Since all mathematical facts must be true in all causal models, and hence known by an agent, mathematical facts cannot be part of an explanation (under the standard notion of explanation). This problem is solved using impossible possible worlds.
[ { "version": "v1", "created": "Sun, 31 Dec 2023 17:07:28 GMT" } ]
1,708,041,600,000
[ [ "Halpern", "Joseph Y.", "" ] ]
2402.09498
Jos\'e Alberto Ben\'itez-Andrades Ph.D.
Jos\'e Alberto Ben\'itez-Andrades, Mar\'ia Teresa Garc\'ia-Ord\'as, Mar\'ia \'Alvarez-Gonz\'alez, Raquel Leir\'os-Rodr\'iguez and Ana F L\'opez Rodr\'iguez
Detection of the most influential variables for preventing postpartum urinary incontinence using machine learning techniques
null
Digital Health, Volume 8, 2022, 20552076221111289
10.1177/20552076221111289
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Background: Postpartum urinary incontinence (PUI) is a common issue among postnatal women. Previous studies identified potential related variables, but lacked analysis on certain intrinsic and extrinsic patient variables during pregnancy. Objective: The study aims to evaluate the most influential variables in PUI using machine learning, focusing on intrinsic, extrinsic, and combined variable groups. Methods: Data from 93 pregnant women were analyzed using machine learning and oversampling techniques. Four key variables were predicted: occurrence, frequency, intensity of urinary incontinence, and stress urinary incontinence. Results: Models using extrinsic variables were most accurate, with 70% accuracy for urinary incontinence, 77% for frequency, 71% for intensity, and 93% for stress urinary incontinence. Conclusions: The study highlights extrinsic variables as significant predictors of PUI issues. This suggests that PUI prevention might be achievable through healthy habits during pregnancy, although further research is needed for confirmation.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 16:45:10 GMT" } ]
1,708,041,600,000
[ [ "Benítez-Andrades", "José Alberto", "" ], [ "García-Ordás", "María Teresa", "" ], [ "Álvarez-González", "María", "" ], [ "Leirós-Rodríguez", "Raquel", "" ], [ "Rodríguez", "Ana F López", "" ] ]
2402.09565
Linfeng Cao
Linfeng Cao, Haoran Deng, Yang Yang, Chunping Wang, Lei Chen
Graph-Skeleton: ~1% Nodes are Sufficient to Represent Billion-Scale Graph
21 pages, 11 figures, In Proceedings of the ACM Web Conference 2024 (WWW'24)
null
10.1145/3589334.3645452
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the ubiquity of graph data on the web, web graph mining has become a hot research topic. Nonetheless, the prevalence of large-scale web graphs in real applications poses significant challenges to storage, computational capacity and graph model design. Despite numerous studies aimed at enhancing the scalability of graph models, a noticeable gap remains between academic research and practical web graph mining applications. One major cause is that in most industrial scenarios, only a small part of the nodes in a web graph actually need to be analyzed; we term these nodes target nodes, and the others background nodes. In this paper, we argue that properly fetching and condensing the background nodes from massive web graph data might be a more economical shortcut to tackle the obstacles fundamentally. To this end, we make the first attempt to study the problem of massive background node compression for target node classification. Through extensive experiments, we reveal two critical roles played by the background nodes in target node classification: enhancing structural connectivity between target nodes, and feature correlation with target nodes. Following this, we propose a novel Graph-Skeleton model, which properly fetches the background nodes and further condenses the semantic and topological information of background nodes within similar target-background local structures. Extensive experiments on various web graph datasets demonstrate the effectiveness and efficiency of the proposed method. In particular, for the MAG240M dataset with 0.24 billion nodes, our generated skeleton graph achieves highly comparable performance while containing only 1.8% of the nodes of the original graph.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 20:33:11 GMT" }, { "version": "v2", "created": "Wed, 6 Mar 2024 22:22:33 GMT" } ]
1,709,856,000,000
[ [ "Cao", "Linfeng", "" ], [ "Deng", "Haoran", "" ], [ "Yang", "Yang", "" ], [ "Wang", "Chunping", "" ], [ "Chen", "Lei", "" ] ]
2402.09656
Wanli Yang
Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng
The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse
Accepted at Findings of ACL 2024
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although model editing has shown promise in revising knowledge in Large Language Models (LLMs), its impact on the inherent capabilities of LLMs is often overlooked. In this work, we reveal a critical phenomenon: even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks. However, benchmarking LLMs after each edit, while necessary to prevent such collapses, is impractically time-consuming and resource-intensive. To mitigate this, we propose using perplexity as a surrogate metric, validated by extensive experiments demonstrating that changes in an edited model's perplexity are strongly correlated with its downstream task performance. We further conduct an in-depth study on sequential editing, a practical setting for real-world scenarios, across various editing methods and LLMs, focusing on hard cases from our previous single-edit studies. The results indicate that nearly all examined editing methods result in model collapse after only a few edits. To facilitate further research, we have utilized GPT-3.5 to develop a new dataset, HardEdit, based on those hard cases. This dataset aims to establish the foundation for pioneering research into reliable model editing and the mechanisms underlying editing-induced model collapse. We hope this work can draw the community's attention to the potential risks inherent in model editing practices.
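The surrogate metric itself is cheap to compute; a minimal sketch with Hugging Face transformers, using gpt2 as a stand-in for the edited model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    """Perplexity of `text` under a causal LM: exp of the mean token NLL."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
# A collapsed edited model would show a sharp jump in this number.
print(perplexity(lm, tok, "The capital of France is Paris."))
```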
[ { "version": "v1", "created": "Thu, 15 Feb 2024 01:50:38 GMT" }, { "version": "v2", "created": "Sun, 18 Feb 2024 08:00:46 GMT" }, { "version": "v3", "created": "Thu, 14 Mar 2024 11:18:21 GMT" }, { "version": "v4", "created": "Wed, 5 Jun 2024 09:43:00 GMT" } ]
1,717,632,000,000
[ [ "Yang", "Wanli", "" ], [ "Sun", "Fei", "" ], [ "Ma", "Xinyu", "" ], [ "Liu", "Xun", "" ], [ "Yin", "Dawei", "" ], [ "Cheng", "Xueqi", "" ] ]
2402.09734
Paulo Garcia
Paulo Garcia
Agents Need Not Know Their Purpose
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Ensuring that artificial intelligence behaves in a way that is aligned with human values is commonly referred to as the alignment challenge. Prior work has shown that rational agents, behaving in such a way that maximizes a utility function, will inevitably behave in a way that is not aligned with human values, especially as their level of intelligence goes up. Prior work has also shown that there is no "one true utility function"; solutions must include a more holistic approach to alignment. This paper describes oblivious agents: agents that are architected in such a way that their effective utility function is an aggregation of known and hidden sub-functions. The hidden component, to be maximized, is internally implemented as a black box, preventing the agent from examining it. The known component, to be minimized, is knowledge of the hidden sub-function. Architectural constraints further influence how agent actions can evolve its internal environment model. We show that an oblivious agent, behaving rationally, constructs an internal approximation of its designers' intentions (i.e., infers alignment) and, as a consequence of its architecture and effective utility function, behaves in such a way that maximizes alignment, i.e., maximizing the approximated intention function. We show that, paradoxically, it does this for whatever utility function is used as the hidden component and that, in contrast with extant techniques, the chances of alignment actually improve as agent intelligence grows.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 06:15:46 GMT" } ]
1,708,041,600,000
[ [ "Garcia", "Paulo", "" ] ]
2402.09764
Dexun Li
Dexun Li, Cong Zhang, Kuicai Dong, Derrick Goh Xin Deik, Ruiming Tang, Yong Liu
Aligning Crowd Feedback via Distributional Preference Reward Modeling
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Reinforcement Learning is widely used for aligning Large Language Models (LLM) with human preference. However, the conventional reward modelling is predominantly dependent on human annotations provided by a select cohort of individuals. Such dependence may unintentionally result in skewed models that reflect the inclinations of these annotators, thereby failing to adequately represent the wider population's expectations. We propose the Distributional Preference Reward Model (DPRM), a simple yet effective framework to align large language models with diverse human preferences. To this end, we characterize multiple preferences by a categorical distribution and introduce a Bayesian updater to accommodate shifted or new preferences. On top of that, we design an optimal-transportation-based loss to calibrate DPRM to align with the preference distribution. Finally, the expected reward is utilized to fine-tune an LLM policy to generate responses favoured by the population. Our experiments show that DPRM significantly enhances the alignment of LLMs with population preference, yielding more accurate, unbiased, and contextually appropriate responses.
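A minimal sketch of the "categorical distribution plus Bayesian updater" idea, using a conjugate Dirichlet belief over preference labels; the paper's exact parameterization and the optimal-transport loss are not reproduced here.

```python
import numpy as np

class PreferenceBelief:
    """Dirichlet belief over a categorical distribution of preference labels."""
    def __init__(self, n_categories, prior=1.0):
        self.alpha = np.full(n_categories, prior)

    def update(self, counts):
        # Conjugate update: annotation counts add to the pseudo-counts,
        # so the belief can track shifted or newly observed preferences.
        self.alpha += np.asarray(counts, dtype=float)

    def mean(self):
        return self.alpha / self.alpha.sum()

belief = PreferenceBelief(n_categories=3)
belief.update([5, 1, 0])  # a new batch of crowd annotations
print(belief.mean())
```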
[ { "version": "v1", "created": "Thu, 15 Feb 2024 07:29:43 GMT" }, { "version": "v2", "created": "Wed, 21 Feb 2024 07:56:28 GMT" }, { "version": "v3", "created": "Thu, 30 May 2024 15:39:17 GMT" } ]
1,717,113,600,000
[ [ "Li", "Dexun", "" ], [ "Zhang", "Cong", "" ], [ "Dong", "Kuicai", "" ], [ "Deik", "Derrick Goh Xin", "" ], [ "Tang", "Ruiming", "" ], [ "Liu", "Yong", "" ] ]
2402.09765
Zangir Iklassov
Zangir Iklassov and Ikboljon Sobirov and Ruben Solozabal and Martin Takac
Reinforcement Learning for Solving Stochastic Vehicle Routing Problem with Time Windows
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper introduces a reinforcement learning approach to optimize the Stochastic Vehicle Routing Problem with Time Windows (SVRP), focusing on reducing travel costs in goods delivery. We develop a novel SVRP formulation that accounts for uncertain travel costs and demands, alongside specific customer time windows. An attention-based neural network trained through reinforcement learning is employed to minimize routing costs. Our approach addresses a gap in SVRP research, which traditionally relies on heuristic methods, by leveraging machine learning. The model outperforms the Ant-Colony Optimization algorithm, achieving a 1.73% reduction in travel costs. It uniquely integrates external information and demonstrates robustness in diverse environments, making it a valuable benchmark for future SVRP studies and industrial applications.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 07:35:29 GMT" } ]
1,708,041,600,000
[ [ "Iklassov", "Zangir", "" ], [ "Sobirov", "Ikboljon", "" ], [ "Solozabal", "Ruben", "" ], [ "Takac", "Martin", "" ] ]
2402.09769
Ayon Borthakur
Aditya Somasundaram, Pushkal Mishra, Ayon Borthakur
Representation Learning Using a Single Forward Pass
Under review
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a neuroscience-inspired Solo Pass Embedded Learning Algorithm (SPELA). SPELA is a prime candidate for training and inference applications in Edge AI devices. At the same time, SPELA can optimally cater to the need for a framework to study perceptual representation learning and formation. SPELA has distinctive features such as neural priors (in the form of embedded vectors), no weight transport, no update locking of weights, complete local Hebbian learning, single forward pass with no storage of activations, and single weight update per sample. Juxtaposed with traditional approaches, SPELA operates without the need for backpropagation. We show that our algorithm can perform nonlinear classification on a noisy boolean operation dataset. Additionally, we exhibit high performance using SPELA across MNIST, KMNIST, and Fashion MNIST. Lastly, we show the few-shot and 1-epoch learning capabilities of SPELA on MNIST, KMNIST, and Fashion MNIST, where it consistently outperforms backpropagation.
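To make the "single forward pass, fully local" idea concrete, here is one possible Hebbian-style layer update that uses only local activity and a fixed target embedding (the "neural prior"); this is an illustrative rule, not the paper's exact algorithm.

```python
import numpy as np

def local_update(W, x, target_embedding, lr=0.01):
    """One forward-only update: nudge the layer output toward a fixed
    class embedding using an outer-product (Hebbian-style) rule. No
    backprop, no weight transport, no stored activations."""
    y = np.tanh(W @ x)              # single forward pass through the layer
    err = target_embedding - y      # purely local error signal
    W += lr * np.outer(err, x)      # one weight update per sample
    return W

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 16))
x = rng.normal(size=16)
target = rng.normal(size=8)         # embedded vector acting as a neural prior
W = local_update(W, x, target)
```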
[ { "version": "v1", "created": "Thu, 15 Feb 2024 07:47:10 GMT" } ]
1,708,041,600,000
[ [ "Somasundaram", "Aditya", "" ], [ "Mishra", "Pushkal", "" ], [ "Borthakur", "Ayon", "" ] ]
2402.09836
Chenyang Shao
Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, Yong Li
Chain-of-Planned-Behaviour Workflow Elicits Few-Shot Mobility Generation in LLMs
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The powerful reasoning capabilities of large language models (LLMs) have brought revolutionary changes to many fields, but their performance in human behaviour generation has not yet been extensively explored. This gap likely emerges because the internal processes governing behavioural intentions cannot be solely explained by abstract reasoning. Instead, they are also influenced by a multitude of factors, including social norms and personal preference. Inspired by the Theory of Planned Behaviour (TPB), we develop an LLM workflow named Chain-of-Planned-Behaviour (CoPB) for mobility behaviour generation, which reflects the important spatio-temporal dynamics of human activities. By exploiting the cognitive structures of attitude, subjective norms, and perceived behaviour control in TPB, CoPB significantly enhances the ability of LLMs to reason about the intention of the next movement. Specifically, CoPB substantially reduces the error rate of mobility intention generation from 57.8% to 19.4%. To improve the scalability of the proposed CoPB workflow, we further explore the synergy between LLMs and mechanistic models. We find that mechanistic mobility models, such as the gravity model, can effectively map mobility intentions to physical mobility behaviours. Integrating CoPB with the gravity model can reduce the token cost by 97.7% while achieving better performance. Besides, the proposed CoPB workflow can facilitate GPT-4-turbo in automatically generating high-quality labels for mobility behaviour reasoning. We show that such labels can be leveraged to fine-tune the smaller-scale, open-source LLaMA 3-8B, which significantly reduces usage costs without sacrificing the quality of the generated behaviours.
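The mechanistic half of the pipeline is classical; a minimal sketch of the gravity model that maps intentions to destination flows (parameter values are illustrative):

```python
import numpy as np

def gravity_flows(populations, distances, alpha=1.0, beta=1.0, gamma=2.0):
    """Classic gravity model: flow between zones i and j grows with their
    populations and decays with distance raised to a power."""
    P = np.asarray(populations, dtype=float)
    D = np.asarray(distances, dtype=float)
    flows = (P[:, None] ** alpha) * (P[None, :] ** beta) / (D ** gamma)
    np.fill_diagonal(flows, 0.0)  # no self-flows
    return flows

pops = [1000, 500, 2000]
dist = [[1, 2, 5], [2, 1, 3], [5, 3, 1]]  # diagonal 1s avoid divide-by-zero
print(gravity_flows(pops, dist).round(1))
```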
[ { "version": "v1", "created": "Thu, 15 Feb 2024 09:58:23 GMT" }, { "version": "v2", "created": "Wed, 5 Jun 2024 09:27:42 GMT" } ]
1,717,632,000,000
[ [ "Shao", "Chenyang", "" ], [ "Xu", "Fengli", "" ], [ "Fan", "Bingbing", "" ], [ "Ding", "Jingtao", "" ], [ "Yuan", "Yuan", "" ], [ "Wang", "Meng", "" ], [ "Li", "Yong", "" ] ]
2402.09844
Quentin Gallou\'edec
Quentin Gallou\'edec and Edward Beeching and Cl\'ement Romac and Emmanuel Dellandr\'ea
Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent
Under review
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research. The prevailing methodology in Reinforcement Learning (RL) typically limits models to a single task within a unimodal framework, a limitation that contrasts with the broader vision of a versatile, multi-domain model. In this paper, we present Jack of All Trades (JAT), a transformer-based model with a unique design optimized for handling sequential decision-making tasks and multimodal data types. The JAT model demonstrates its robust capabilities and versatility by achieving strong performance on very different RL benchmarks, along with promising results on Computer Vision (CV) and Natural Language Processing (NLP) tasks, all using a single set of weights. The JAT model marks a significant step towards more general, cross-domain AI model design, and notably, it is the first model of its kind to be fully open-sourced (see https://huggingface.co/jat-project/jat), including a pioneering general-purpose dataset.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 10:01:55 GMT" }, { "version": "v2", "created": "Mon, 22 Apr 2024 09:47:31 GMT" } ]
1,713,830,400,000
[ [ "Gallouédec", "Quentin", "" ], [ "Beeching", "Edward", "" ], [ "Romac", "Clément", "" ], [ "Dellandréa", "Emmanuel", "" ] ]
2402.09877
Alberto Pozanco
Alberto Pozanco, Daniel Borrajo, Manuela Veloso
On Computing Plans with Uniform Action Costs
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many real-world planning applications, agents might be interested in finding plans whose actions have costs that are as uniform as possible. Such plans provide agents with a sense of stability and predictability, which are key features when humans are the agents executing plans suggested by planning tools. This paper adapts three uniformity metrics to automated planning and introduces planning-based compilations that allow lexicographic optimization of the sum of action costs and action-cost uniformity. Experimental results on both well-known and novel planning benchmarks show that the reformulated tasks can be effectively solved in practice to generate uniform plans.
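Lexicographic optimization of total cost and uniformity can be read as comparing tuples; the sketch below uses cost spread (max minus min) as one illustrative uniformity metric, whereas the paper adapts three such metrics.

```python
def plan_objectives(action_costs):
    """(total cost, cost spread): lower is better for both."""
    return (sum(action_costs), max(action_costs) - min(action_costs))

def lexicographically_better(plan_a, plan_b):
    """Prefer lower total cost; break ties by more uniform action costs."""
    return plan_objectives(plan_a) < plan_objectives(plan_b)

# Equal totals (6), but plan A's costs are perfectly uniform:
print(lexicographically_better([2, 2, 2], [1, 1, 4]))  # True
```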
[ { "version": "v1", "created": "Thu, 15 Feb 2024 11:00:28 GMT" }, { "version": "v2", "created": "Fri, 26 Apr 2024 13:20:49 GMT" }, { "version": "v3", "created": "Fri, 24 May 2024 09:19:23 GMT" } ]
1,716,768,000,000
[ [ "Pozanco", "Alberto", "" ], [ "Borrajo", "Daniel", "" ], [ "Veloso", "Manuela", "" ] ]
2402.09919
Katarzyna Micha{\l}owska
Katarzyna Micha{\l}owska, Helga Margrete Bodahl Holmestad, Signe Riemer-S{\o}rensen
Road Graph Generator: Mapping roads at construction sites from GPS data
18 pages, 4 figures, 3 tables
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose a new method for inferring roads from GPS trajectories to map construction sites. This task presents a unique challenge due to the erratic and non-standard movement patterns of construction machinery, which significantly diverge from typical vehicular traffic on established roads. Our proposed method first identifies intersections in the road network that serve as critical decision points, and then connects them with edges to produce a graph, which can subsequently be used for planning and task-allocation. We demonstrate the approach by mapping roads at a real-life construction site in Norway. The method is validated on four increasingly complex segments of the map. In our tests, the method achieved perfect accuracy in detecting intersections and inferring roads in data with no or low noise, while its performance was reduced in map areas with significant noise and consistently missing GPS updates.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 12:53:25 GMT" }, { "version": "v2", "created": "Tue, 9 Apr 2024 11:41:21 GMT" } ]
1,712,707,200,000
[ [ "Michałowska", "Katarzyna", "" ], [ "Holmestad", "Helga Margrete Bodahl", "" ], [ "Riemer-Sørensen", "Signe", "" ] ]
2402.10011
Cong Liu
Cong Liu, David Ruhe, Floor Eijkelboom, Patrick Forr\'e
Clifford Group Equivariant Simplicial Message Passing Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Clifford Group Equivariant Simplicial Message Passing Networks, a method for steerable E(n)-equivariant message passing on simplicial complexes. Our method integrates the expressivity of Clifford group-equivariant layers with simplicial message passing, which is topologically more intricate than regular graph message passing. Clifford algebras include higher-order objects such as bivectors and trivectors, which express geometric features (e.g., areas, volumes) derived from vectors. Using this knowledge, we represent simplex features through geometric products of their vertices. To achieve efficient simplicial message passing, we share the parameters of the message network across different dimensions. Additionally, we restrict the final message to an aggregation of the incoming messages from different dimensions, leading to what we term shared simplicial message passing. Experimental results show that our method is able to outperform both equivariant and simplicial graph neural networks on a variety of geometric tasks.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 15:18:53 GMT" }, { "version": "v2", "created": "Tue, 20 Feb 2024 17:12:49 GMT" }, { "version": "v3", "created": "Tue, 12 Mar 2024 12:38:09 GMT" } ]
1,710,288,000,000
[ [ "Liu", "Cong", "" ], [ "Ruhe", "David", "" ], [ "Eijkelboom", "Floor", "" ], [ "Forré", "Patrick", "" ] ]
2402.10083
Yuhe Ke
Ting Fang Tan, Kabilan Elangovan, Liyuan Jin, Yao Jie, Li Yong, Joshua Lim, Stanley Poh, Wei Yan Ng, Daniel Lim, Yuhe Ke, Nan Liu, Daniel Shu Wei Ting
Fine-tuning Large Language Model (LLM) Artificial Intelligence Chatbots in Ophthalmology and LLM-based evaluation using GPT-4
13 Pages, 1 Figure, 8 Tables
null
null
null
cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Purpose: To assess the alignment of GPT-4-based evaluation with human clinician experts, for the evaluation of responses to ophthalmology-related patient queries generated by fine-tuned LLM chatbots. Methods: 400 ophthalmology questions and paired answers were created by ophthalmologists to represent commonly asked patient questions, divided into fine-tuning (368; 92%) and testing (40; 8%) sets. We fine-tuned 5 different LLMs, including LLAMA2-7b, LLAMA2-7b-Chat, LLAMA2-13b, and LLAMA2-13b-Chat. For the testing dataset, an additional 8 glaucoma QnA pairs were included. 200 responses to the testing dataset were generated by the 5 fine-tuned LLMs for evaluation. A customized clinical evaluation rubric was used to guide the GPT-4 evaluation, grounded in clinical accuracy, relevance, patient safety, and ease of understanding. The GPT-4 evaluation was then compared against rankings by 5 clinicians for clinical alignment. Results: Among all fine-tuned LLMs, GPT-3.5 scored the highest (87.1%), followed by LLAMA2-13b (80.9%), LLAMA2-13b-chat (75.5%), LLAMA2-7b-Chat (70%) and LLAMA2-7b (68.8%), based on the GPT-4 evaluation. The GPT-4 evaluation demonstrated significant agreement with human clinician rankings, with Spearman and Kendall Tau correlation coefficients of 0.90 and 0.80 respectively, while correlation based on Cohen's Kappa was more modest at 0.50. Notably, qualitative analysis and the glaucoma sub-analysis revealed clinical inaccuracies in the LLM-generated responses, which were appropriately identified by the GPT-4 evaluation. Conclusion: The notable clinical alignment of the GPT-4 evaluation highlights its potential to streamline the clinical evaluation of LLM chatbot responses to healthcare-related queries. By complementing existing clinician-dependent manual grading, this efficient and automated evaluation could assist the validation of future developments in LLM applications for healthcare.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 16:43:41 GMT" } ]
1,708,041,600,000
[ [ "Tan", "Ting Fang", "" ], [ "Elangovan", "Kabilan", "" ], [ "Jin", "Liyuan", "" ], [ "Jie", "Yao", "" ], [ "Yong", "Li", "" ], [ "Lim", "Joshua", "" ], [ "Poh", "Stanley", "" ], [ "Ng", "Wei Yan", "" ], [ "Lim", "Daniel", "" ], [ "Ke", "Yuhe", "" ], [ "Liu", "Nan", "" ], [ "Ting", "Daniel Shu Wei", "" ] ]
2402.10133
Davor Hafnar
Davor Hafnar (1), Jure Dem\v{s}ar (1 and 2) ((1) Faculty of Computer and Information Science, University of Ljubljana (2) Department of Psychology, Faculty of Arts, University of Ljubljana)
Zero-Shot Reasoning: Personalized Content Generation Without the Cold Start Problem
9 pages, 6 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Procedural content generation uses algorithmic techniques to create large amounts of new content for games at much lower production costs. In newer approaches, procedural content generation utilizes machine learning. However, these methods usually require expensive collection of large amounts of data, as well as the development and training of fairly complex learning models, which can be both extremely time-consuming and expensive. The core of our research is to explore whether we can lower the barrier to the use of personalized procedural content generation through a more practical and generalizable approach with large language models. Matching game content with player preferences benefits both players, who enjoy the game more, and developers, who increasingly depend on players enjoying the game before being able to monetize it. Therefore, this paper presents a novel approach to achieving personalization by using large language models to propose levels based on the gameplay data continuously collected from individual players. We compared the levels generated using our approach with levels generated with more traditional procedural generation techniques. Our easily reproducible method has proven viable in a production setting and outperformed levels generated by traditional methods in the probability that a player will not quit the game mid-level.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 17:37:25 GMT" } ]
1,708,041,600,000
[ [ "Hafnar", "Davor", "", "1 and 2" ], [ "Demšar", "Jure", "", "1 and 2" ] ]
2402.10290
Jonathan Dodge
Sujay Nagesh Koujalgi and Jonathan Dodge
Experiments with Encoding Structured Data for Neural Networks
18 pages, 8 figures, 2 tables
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The project's aim is to create an AI agent capable of selecting good actions in a game-playing domain called Battlespace. Sequential domains like Battlespace are important testbeds for planning problems; as such, the Department of Defense uses such domains for wargaming exercises. The agents we developed combine Monte Carlo Tree Search (MCTS) and Deep Q-Network (DQN) techniques in an effort to navigate the game environment, avoid obstacles, interact with adversaries, and capture the flag. This paper focuses on the encoding techniques we explored to present complex structured data, stored in a Python class, to a neural network, a necessary precursor to an agent.
[ { "version": "v1", "created": "Thu, 15 Feb 2024 19:45:15 GMT" } ]
1,708,300,800,000
[ [ "Koujalgi", "Sujay Nagesh", "" ], [ "Dodge", "Jonathan", "" ] ]
2402.10705
Yiwen Sun
Yiwen Sun, Xianyin Zhang, Shiyu Huang, Shaowei Cai, BingZhen Zhang, Ke Wei
AutoSAT: Automatically Optimize SAT Solvers via Large Language Models
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heuristics are crucial in SAT solvers, but no heuristic rules are suitable for all SAT problems. Therefore, it is helpful to refine specific heuristics for specific problems. In this context, we present AutoSAT, a novel framework for automatically optimizing heuristics in SAT solvers. AutoSAT is based on Large Language Models (LLMs), which are able to autonomously generate code, conduct evaluation, and then utilize the feedback to further optimize heuristics, thereby reducing human intervention and enhancing solver capabilities. AutoSAT operates on a plug-and-play basis, eliminating the need for extensive setup and model training, and fosters a multi-agent-based collaborative process with fault tolerance to ensure robust heuristic optimization. We implement AutoSAT on a lightweight Conflict-Driven Clause Learning (CDCL) solver, EasySAT (whose code volume is about one-fiftieth of the state-of-the-art hybrid solver Kissat), and extensive experiments on seven datasets demonstrate its superior performance. Out of the seven testing datasets, AutoSAT shows superior performance to Kissat on two datasets and displays overall similar performance on three datasets. Some heuristics generated by AutoSAT are even counter-intuitive but very effective.
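The generate-evaluate-feedback loop this abstract describes can be skeletonized as follows; `evaluate` and `call_llm` are assumed interfaces (a solver benchmark run and an LLM API call), not AutoSAT's actual ones.

```python
def optimize_heuristic(seed_code, evaluate, call_llm, iterations=5):
    """LLM-in-the-loop heuristic search: propose a code change, benchmark
    it, feed the score back, keep the best candidate."""
    best_code, best_score = seed_code, evaluate(seed_code)
    for _ in range(iterations):
        prompt = (f"Current heuristic:\n{best_code}\n"
                  f"Benchmark score: {best_score}\n"
                  "Propose an improved version of this heuristic.")
        candidate = call_llm(prompt)
        try:
            score = evaluate(candidate)
        except Exception:
            continue  # fault tolerance: drop candidates that fail to run
        if score > best_score:
            best_code, best_score = candidate, score
    return best_code
```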
[ { "version": "v1", "created": "Fri, 16 Feb 2024 14:04:56 GMT" }, { "version": "v2", "created": "Fri, 31 May 2024 11:38:00 GMT" } ]
1,717,372,800,000
[ [ "Sun", "Yiwen", "" ], [ "Zhang", "Xianyin", "" ], [ "Huang", "Shiyu", "" ], [ "Cai", "Shaowei", "" ], [ "Zhang", "BingZhen", "" ], [ "Wei", "Ke", "" ] ]
2402.10726
Tomas Balyo
Tom\'a\v{s} Balyo, Martin Suda, Luk\'a\v{s} Chrpa, Dominik \v{S}afr\'anek, Filip Dvo\v{r}\'ak, Roman Bart\'ak, G. Michael Youngblood
Learning Planning Action Models from State Traces
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Previous STRIPS domain model acquisition approaches that learn from state traces start with the names and parameters of the actions to be learned. Therefore their only task is to deduce the preconditions and effects of the given actions. In this work, we explore learning in situations when the parameters of learned actions are not provided. We define two levels of trace quality based on which information is provided and present an algorithm for each. In one level (L1), the states in the traces are labeled with action names, so we can deduce the number and names of the actions, but we still need to work out the number and types of parameters. In the other level (L2), the states are additionally labeled with objects that constitute the parameters of the corresponding grounded actions. Here we still need to deduce the types of the parameters in the learned actions. We experimentally evaluate the proposed algorithms and compare them with the state-of-the-art learning tool FAMA on a large collection of IPC benchmarks. The evaluation shows that our new algorithms are faster, can handle larger inputs and provide better results in terms of learning action models more similar to reference models.
[ { "version": "v1", "created": "Fri, 16 Feb 2024 14:36:58 GMT" } ]
1,708,300,800,000
[ [ "Balyo", "Tomáš", "" ], [ "Suda", "Martin", "" ], [ "Chrpa", "Lukáš", "" ], [ "Šafránek", "Dominik", "" ], [ "Dvořák", "Filip", "" ], [ "Barták", "Roman", "" ], [ "Youngblood", "G. Michael", "" ] ]
2402.10762
Danae Pla Karidi
Christos Fragkathoulas, Vasiliki Papanikou, Danae Pla Karidi, Evaggelia Pitoura
On Explaining Unfairness: An Overview
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Algorithmic fairness and explainability are foundational elements for achieving responsible AI. In this paper, we focus on their interplay, a research area that is recently receiving increasing attention. To this end, we first present two comprehensive taxonomies, each representing one of the two complementary fields of study: fairness and explanations. Then, we categorize explanations for fairness into three types: (a) Explanations to enhance fairness metrics, (b) Explanations to help us understand the causes of (un)fairness, and (c) Explanations to assist us in designing methods for mitigating unfairness. Finally, based on our fairness and explanation taxonomies, we present undiscovered literature paths revealing gaps that can serve as valuable insights for future research.
[ { "version": "v1", "created": "Fri, 16 Feb 2024 15:38:00 GMT" } ]
1,708,300,800,000
[ [ "Fragkathoulas", "Christos", "" ], [ "Papanikou", "Vasiliki", "" ], [ "Karidi", "Danae Pla", "" ], [ "Pitoura", "Evaggelia", "" ] ]
2402.10967
Jos\'e Alberto Ben\'itez-Andrades Ph.D.
Jos\'e Alberto Ben\'itez-Andrades, Isa\'ias Garc\'ia-Rodr\'iguez, Carmen Benavides, H\'ector Alaiz-Moret\'on and Alejandro Rodr\'iguez-Gonz\'alez
Social network analysis for personalized characterization and risk assessment of alcohol use disorders in adolescents using semantic technologies
null
Future Generation Computer Systems, Volume 106, May 2020, Pages 154-170
10.1016/j.future.2020.01.002
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Alcohol Use Disorder (AUD) is a major concern for public health organizations worldwide, especially as regards the adolescent population. The consumption of alcohol in adolescents is known to be influenced by seeing friends and even parents drinking alcohol. Building on this fact, a number of studies into alcohol consumption among adolescents have made use of Social Network Analysis (SNA) techniques to study the different social networks (peers, friends, family, etc.) with whom the adolescent is involved. These kinds of studies need an initial phase of data gathering by means of questionnaires and a subsequent analysis phase using SNA techniques. The process involves a number of manual data-handling stages that are time consuming and error-prone. The use of knowledge engineering techniques (including the construction of a domain ontology) to represent the information allows the automation of all the activities, from the initial data collection to the results of the SNA study. This paper shows how a knowledge model is constructed, and compares the results obtained using the traditional method with those obtained using this fully automated model, detailing the main advantages of the latter. In the case of the SNA analysis, the validity of the results obtained with the knowledge engineering approach is compared to those obtained manually using UCINET, Cytoscape, Pajek and Gephi, to test the accuracy of the knowledge model.
[ { "version": "v1", "created": "Wed, 14 Feb 2024 16:09:05 GMT" } ]
1,708,387,200,000
[ [ "Benítez-Andrades", "José Alberto", "" ], [ "García-Rodríguez", "Isaías", "" ], [ "Benavides", "Carmen", "" ], [ "Alaiz-Moretón", "Héctor", "" ], [ "Rodríguez-González", "Alejandro", "" ] ]
2402.11403
Liying Han
Liying Han, Mani B. Srivastava
An Empirical Evaluation of Neural and Neuro-symbolic Approaches to Real-time Multimodal Complex Event Detection
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robots and autonomous systems require an understanding of complex events (CEs) from sensor data to interact with their environments and humans effectively. Traditional end-to-end neural architectures, despite processing sensor data efficiently, struggle with long-duration events due to limited context sizes and reasoning capabilities. Recent advances in neuro-symbolic methods, which integrate neural and symbolic models leveraging human knowledge, promise improved performance with less data. This study addresses the gap in understanding these approaches' effectiveness in complex event detection (CED), especially in temporal reasoning. We investigate neural and neuro-symbolic architectures' performance in a multimodal CED task, analyzing IMU and acoustic data streams to recognize CE patterns. Our methodology includes (i) end-to-end neural architectures for direct CE detection from sensor embeddings, (ii) two-stage concept-based neural models mapping sensor embeddings to atomic events (AEs) before CE detection, and (iii) a neuro-symbolic approach using a symbolic finite-state machine for CE detection from AEs. Empirically, the neuro-symbolic architecture significantly surpasses purely neural models, demonstrating superior performance in CE recognition, even with extensive training data and ample temporal context for neural approaches.
[ { "version": "v1", "created": "Sat, 17 Feb 2024 23:34:50 GMT" }, { "version": "v2", "created": "Sun, 3 Mar 2024 22:07:50 GMT" } ]
1,709,596,800,000
[ [ "Han", "Liying", "" ], [ "Srivastava", "Mani B.", "" ] ]
2402.11461
Tuo Leng
Xiaokai Zhang, Na Zhu, Cheng Qin, Yang Li, Zhenbing Zeng, Tuo Leng
FGeo-HyperGNet: Geometric Problem Solving Integrating Formal Symbolic System and Hypergraph Neural Network
13 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometric problem solving has always been a long-standing challenge in the fields of automated reasoning and artificial intelligence. We built a neural-symbolic system to automatically perform human-like geometric deductive reasoning. The symbolic part is a formal system built on FormalGeo, which can automatically perform geometric relational reasoning and algebraic calculations and organize the solving process into a solution hypertree with conditions as hypernodes and theorems as hyperedges. The neural part, called HyperGNet, is a hypergraph neural network based on the attention mechanism, including an encoder to effectively encode the structural and semantic information of the hypertree, and a solver to provide problem-solving guidance. The neural part predicts theorems according to the hypertree, and the symbolic part applies theorems and updates the hypertree, thus forming a predict-apply cycle to ultimately achieve readable and traceable automatic solving of geometric problems. Experiments demonstrate the correctness and effectiveness of this neural-symbolic architecture. We achieved a step-wise accuracy of 87.65% and an overall accuracy of 85.53% on the formalgeo7k dataset.
[ { "version": "v1", "created": "Sun, 18 Feb 2024 05:23:15 GMT" }, { "version": "v2", "created": "Mon, 22 Apr 2024 07:31:15 GMT" } ]
1,713,830,400,000
[ [ "Zhang", "Xiaokai", "" ], [ "Zhu", "Na", "" ], [ "Qin", "Cheng", "" ], [ "Li", "Yang", "" ], [ "Zeng", "Zhenbing", "" ], [ "Leng", "Tuo", "" ] ]
2402.11893
Xiaowei Yuan
Xiaowei Yuan, Zhao Yang, Yequan Wang, Shengping Liu, Jun Zhao, Kang Liu
Discerning and Resolving Knowledge Conflicts through Adaptive Decoding with Contextual Information-Entropy Constraint
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models internalize enormous parametric knowledge during pre-training. Concurrently, realistic applications necessitate external contextual knowledge to aid models on the underlying tasks. This raises a crucial dilemma known as knowledge conflicts, where the contextual knowledge clashes with the parametric knowledge. However, existing decoding works are specialized in resolving knowledge conflicts and could inadvertently deteriorate performance in the absence of conflicts. In this paper, we propose an adaptive decoding method, termed contextual information-entropy constraint decoding (COIECD), to discern whether knowledge conflicts occur and resolve them. It can improve the model's faithfulness to conflicting context, and simultaneously maintain high performance among non-conflicting contexts. Our experiments show that COIECD exhibits strong performance and robustness over knowledge conflicts in realistic datasets. Code is available.
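As a rough illustration of what an information-entropy constraint on decoding can look like, the sketch below flags a conflict when conditioning on retrieved context shifts the next-token entropy beyond a tolerance band; the criterion, the band threshold, and the distributions are our assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an entropy-band conflict test; the paper's actual
# criterion and constraint differ in detail. Numbers are illustrative.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def is_conflict(p_with_ctx, p_param_only, band=0.3):
    """Flag a knowledge conflict when conditioning on the retrieved context
    shifts next-token entropy outside a tolerance band around the
    parametric-only distribution's entropy."""
    return abs(entropy(p_with_ctx) - entropy(p_param_only)) > band

p_param = [0.80, 0.15, 0.05]   # model's parametric next-token distribution
p_ctx   = [0.34, 0.33, 0.33]   # distribution after conditioning on context
print(is_conflict(p_ctx, p_param))  # True -> switch to conflict-aware decoding
```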
[ { "version": "v1", "created": "Mon, 19 Feb 2024 07:10:30 GMT" } ]
1,708,387,200,000
[ [ "Yuan", "Xiaowei", "" ], [ "Yang", "Zhao", "" ], [ "Wang", "Yequan", "" ], [ "Liu", "Shengping", "" ], [ "Zhao", "Jun", "" ], [ "Liu", "Kang", "" ] ]
2402.11901
Wiktor Piotrowski
Wiktor Piotrowski, Alexandre Perez
Real-World Planning with PDDL+ and Beyond
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Real-world applications of AI Planning often require a highly expressive modeling language to accurately capture important intricacies of target systems. Hybrid systems are ubiquitous in the real-world, and PDDL+ is the standardized modeling language for capturing such systems as planning domains. PDDL+ enables accurate encoding of mixed discrete-continuous system dynamics, exogenous activity, and many other interesting features exhibited in realistic scenarios. However, the uptake in usage of PDDL+ has been slow and apprehensive, largely due to a general shortage of PDDL+ planning software, and rigid limitations of the few existing planners. To overcome this chasm, we present Nyx, a novel PDDL+ planner built to emphasize lightness, simplicity, and, most importantly, adaptability. The planner is designed to be effortlessly customizable to expand its capabilities well beyond the scope of PDDL+. As a result, Nyx can be tailored to virtually any potential real-world application requiring some form of AI Planning, paving the way for wider adoption of planning methods for solving real-world problems.
[ { "version": "v1", "created": "Mon, 19 Feb 2024 07:35:49 GMT" } ]
1,708,387,200,000
[ [ "Piotrowski", "Wiktor", "" ], [ "Perez", "Alexandre", "" ] ]
2402.12074
Yongquan He
Yongquan He and Peng Zhang and Luchen Liu and Qi Liang and Wenyuan Zhang and Chuang Zhang
HIP Network: Historical Information Passing Network for Extrapolation Reasoning on Temporal Knowledge Graph
7 pages, 3 figures
IJCAI (2021) 1915-1921
10.24963/IJCAI.2021/264
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, temporal knowledge graph (TKG) reasoning has received significant attention. Most existing methods assume that all timestamps and corresponding graphs are available during training, which makes it difficult to predict future events. To address this issue, recent works learn to infer future events based on historical information. However, these methods do not comprehensively consider the latent patterns behind temporal changes that are needed to pass historical information selectively, update representations appropriately, and predict events accurately. In this paper, we propose the Historical Information Passing (HIP) network to predict future events. The HIP network passes information from temporal, structural and repetitive perspectives, which are used to model the temporal evolution of events, the interactions of events at the same time step, and the known events, respectively. In particular, our method considers the updating of relation representations and adopts three scoring functions corresponding to the above dimensions. Experimental results on five benchmark datasets show the superiority of the HIP network, and the significant improvements on Hits@1 prove that our method can more accurately predict what is going to happen.
[ { "version": "v1", "created": "Mon, 19 Feb 2024 11:50:30 GMT" } ]
1,708,560,000,000
[ [ "He", "Yongquan", "" ], [ "Zhang", "Peng", "" ], [ "Liu", "Luchen", "" ], [ "Liang", "Qi", "" ], [ "Zhang", "Wenyuan", "" ], [ "Zhang", "Chuang", "" ] ]
2402.12132
Ruiyi Yang
Ruiyi Yang, Flora D. Salim and Hao Xue
SSTKG: Simple Spatio-Temporal Knowledge Graph for Interpretable and Versatile Dynamic Information Embedding
For Web Conference 2024. 8 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graphs (KGs) have been increasingly employed for link prediction and recommendation using real-world datasets. However, the majority of current methods rely on static data, neglecting the dynamic nature and the hidden spatio-temporal attributes of real-world scenarios. This often results in suboptimal predictions and recommendations. Although there are effective spatio-temporal inference methods, they face challenges such as scalability with large datasets and inadequate semantic understanding, which impede their performance. To address these limitations, this paper introduces a novel framework, the Simple Spatio-Temporal Knowledge Graph (SSTKG), for constructing and exploring spatio-temporal KGs. To integrate spatial and temporal data into KGs, our framework exploits a new 3-step embedding method. The output embeddings can be used for future temporal sequence prediction and spatial information recommendation, providing valuable insights for various applications such as retail sales forecasting and traffic volume prediction. Our framework offers a simple but comprehensive way to understand the underlying patterns and trends in dynamic KGs, thereby enhancing the accuracy of predictions and the relevance of recommendations. This work paves the way for more effective utilization of spatio-temporal data in KGs, with potential impacts across a wide range of sectors.
[ { "version": "v1", "created": "Mon, 19 Feb 2024 13:28:43 GMT" } ]
1,708,387,200,000
[ [ "Yang", "Ruiyi", "" ], [ "Salim", "Flora D.", "" ], [ "Xue", "Hao", "" ] ]
2402.12183
Mafalda Malafaia
Mafalda Malafaia, Thalea Schlender, Peter A. N. Bosman, Tanja Alderliesten
MultiFIX: An XAI-friendly feature inducing approach to building models from multimodal data
8 pages, 9 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In the health domain, decisions are often based on different data modalities. Thus, when creating prediction models, multimodal fusion approaches that can extract and combine relevant features from different data modalities, can be highly beneficial. Furthermore, it is important to understand how each modality impacts the final prediction, especially in high-stake domains, so that these models can be used in a trustworthy and responsible manner. We propose MultiFIX: a new interpretability-focused multimodal data fusion pipeline that explicitly induces separate features from different data types that can subsequently be combined to make a final prediction. An end-to-end deep learning architecture is used to train a predictive model and extract representative features of each modality. Each part of the model is then explained using explainable artificial intelligence techniques. Attention maps are used to highlight important regions in image inputs. Inherently interpretable symbolic expressions, learned with GP-GOMEA, are used to describe the contribution of tabular inputs. The fusion of the extracted features to predict the target label is also replaced by a symbolic expression, learned with GP-GOMEA. Results on synthetic problems demonstrate the strengths and limitations of MultiFIX. Lastly, we apply MultiFIX to a publicly available dataset for the detection of malignant skin lesions.
[ { "version": "v1", "created": "Mon, 19 Feb 2024 14:45:46 GMT" } ]
1,708,387,200,000
[ [ "Malafaia", "Mafalda", "" ], [ "Schlender", "Thalea", "" ], [ "Bosman", "Peter A. N.", "" ], [ "Alderliesten", "Tanja", "" ] ]
2402.12422
Murray Shanahan
Murray Shanahan
Simulacra as Conscious Exotica
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The advent of conversational agents with increasingly human-like behaviour throws old philosophical questions into new light. Does it, or could it, ever make sense to speak of AI agents built out of generative language models in terms of consciousness, given that they are "mere" simulacra of human behaviour, and that what they do can be seen as "merely" role play? Drawing on the later writings of Wittgenstein, this paper attempts to tackle this question while avoiding the pitfalls of dualistic thinking.
[ { "version": "v1", "created": "Mon, 19 Feb 2024 13:53:10 GMT" } ]
1,708,473,600,000
[ [ "Shanahan", "Murray", "" ] ]
2402.12608
Subash Neupane
Hassan S. Al Khatib, Subash Neupane, Harish Kumar Manchukonda, Noorbakhsh Amiri Golilarz, Sudip Mittal, Amin Amirlatifi, Shahram Rahimi
Patient-Centric Knowledge Graphs: A Survey of Current Methods, Challenges, and Applications
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Patient-Centric Knowledge Graphs (PCKGs) represent an important shift in healthcare that focuses on individualized patient care by mapping the patient's health information in a holistic and multi-dimensional way. PCKGs integrate various types of health data to provide healthcare professionals with a comprehensive understanding of a patient's health, enabling more personalized and effective care. This literature review explores the methodologies, challenges, and opportunities associated with PCKGs, focusing on their role in integrating disparate healthcare data and enhancing patient care through a unified health perspective. In addition, this review also discusses the complexities of PCKG development, including ontology design, data integration techniques, knowledge extraction, and structured representation of knowledge. It highlights advanced techniques such as reasoning, semantic search, and inference mechanisms essential in constructing and evaluating PCKGs for actionable healthcare insights. We further explore the practical applications of PCKGs in personalized medicine, emphasizing their significance in improving disease prediction and formulating effective treatment plans. Overall, this review provides a foundational perspective on the current state-of-the-art and best practices of PCKGs, guiding future research and applications in this dynamic field.
[ { "version": "v1", "created": "Tue, 20 Feb 2024 00:07:55 GMT" } ]
1,708,473,600,000
[ [ "Khatib", "Hassan S. Al", "" ], [ "Neupane", "Subash", "" ], [ "Manchukonda", "Harish Kumar", "" ], [ "Golilarz", "Noorbakhsh Amiri", "" ], [ "Mittal", "Sudip", "" ], [ "Amirlatifi", "Amin", "" ], [ "Rahimi", "Shahram", "" ] ]
2402.12685
Yu Xiong
Yu Xiong, Zhipeng Hu, Ye Huang, Runze Wu, Kai Guan, Xingchen Fang, Ji Jiang, Tianze Zhou, Yujing Hu, Haoyu Liu, Tangjie Lyu, Changjie Fan
XRL-Bench: A Benchmark for Evaluating and Comparing Explainable Reinforcement Learning Techniques
10 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement Learning (RL) has demonstrated substantial potential across diverse fields, yet understanding its decision-making process, especially in real-world scenarios where rationality and safety are paramount, is an ongoing challenge. This paper delves into Explainable RL (XRL), a subfield of Explainable AI (XAI) aimed at unravelling the complexities of RL models. Our focus rests on state-explaining techniques, a crucial subset within XRL methods, as they reveal the underlying factors influencing an agent's actions at any given time. Despite their significant role, the lack of a unified evaluation framework hinders assessment of their accuracy and effectiveness. To address this, we introduce XRL-Bench, a unified standardized benchmark tailored for the evaluation and comparison of XRL methods, encompassing three main modules: standard RL environments, explainers based on state importance, and standard evaluators. XRL-Bench supports both tabular and image data for state explanation. We also propose TabularSHAP, an innovative and competitive XRL method. We demonstrate the practical utility of TabularSHAP in real-world online gaming services and offer an open-source benchmark platform for the straightforward implementation and evaluation of XRL methods. Our contributions facilitate the continued progression of XRL technology.
[ { "version": "v1", "created": "Tue, 20 Feb 2024 03:20:37 GMT" } ]
1,708,473,600,000
[ [ "Xiong", "Yu", "" ], [ "Hu", "Zhipeng", "" ], [ "Huang", "Ye", "" ], [ "Wu", "Runze", "" ], [ "Guan", "Kai", "" ], [ "Fang", "Xingchen", "" ], [ "Jiang", "Ji", "" ], [ "Zhou", "Tianze", "" ], [ "Hu", "Yujing", "" ], [ "Liu", "Haoyu", "" ], [ "Lyu", "Tangjie", "" ], [ "Fan", "Changjie", "" ] ]
2402.12887
Steven Mascaro
Steven Mascaro, Owen Woodberry, Yue Wu, Ann E. Nicholson
The practice of qualitative parameterisation in the development of Bayesian networks
6 pages, 2 figures, technical note
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The typical phases of Bayesian network (BN) structured development include specification of purpose and scope, structure development, parameterisation and validation. Structure development is typically focused on qualitative issues and parameterisation on quantitative issues; however, there are qualitative and quantitative issues that arise in both phases. A common step that occurs after the initial structure has been developed is to perform a rough parameterisation that only captures and illustrates the intended qualitative behaviour of the model. This is done prior to a more rigorous parameterisation, ensuring that the structure is fit for purpose, as well as supporting later development and validation. In our collective experience and in discussions with other modellers, this step is an important part of the development process, but is under-reported in the literature. Since the practice focuses on qualitative issues, despite being quantitative in nature, we call this step qualitative parameterisation and provide an outline of its role in the BN development process.
[ { "version": "v1", "created": "Tue, 20 Feb 2024 10:30:36 GMT" } ]
1,708,473,600,000
[ [ "Mascaro", "Steven", "" ], [ "Woodberry", "Owen", "" ], [ "Wu", "Yue", "" ], [ "Nicholson", "Ann E.", "" ] ]
2402.13058
Tianxiang Zhan
Tianxiang Zhan, Zhen Li, Yong Deng
Random Graph Set and Evidence Pattern Reasoning Model
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evidence theory is widely used in decision-making and reasoning systems. In previous research, the Transferable Belief Model (TBM) has been a commonly used evidential decision-making model, but TBM is a non-preference model. In order to better fit decision-making goals, the Evidence Pattern Reasoning Model (EPRM) is proposed. By defining pattern operators and decision-making operators, corresponding preferences can be set for different tasks. Random Permutation Set (RPS) expands order information for evidence theory, but it is hard for RPS to characterize complex relationships between samples, such as cyclic and parallel relationships. Therefore, Random Graph Set (RGS) is proposed to model complex relationships and represent more event types. In order to illustrate the significance of RGS and EPRM, an experiment on aircraft velocity ranking was designed and 10,000 cases were simulated. The implementation of EPRM, called Conflict Resolution Decision, optimized 18.17\% of the cases compared to Mean Velocity Decision, effectively improving the aircraft velocity ranking. EPRM provides a unified solution for evidence-based decision making.
[ { "version": "v1", "created": "Tue, 20 Feb 2024 14:52:52 GMT" }, { "version": "v2", "created": "Sat, 9 Mar 2024 08:43:20 GMT" } ]
1,710,201,600,000
[ [ "Zhan", "Tianxiang", "" ], [ "Li", "Zhen", "" ], [ "Deng", "Yong", "" ] ]
2402.13264
Tingting Wang
Tingting Wang, Guilin Qi, Tianxing Wu
KGroot: Enhancing Root Cause Analysis through Knowledge Graphs and Graph Convolutional Neural Networks
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Fault localization is challenging in online micro-services due to the wide variety of monitoring data volumes, types, and events, and the complex interdependencies among services and components. Fault events in services are propagative and can trigger a cascade of alerts in a short period of time. In the industry, fault localization is typically conducted manually by experienced personnel. This reliance on experience is unreliable and lacks automation. Different modules present information barriers during manual localization, making it difficult to align quickly during urgent faults. This inefficiency delays stability assurance, lengthening fault detection and repair time. Though automated methods aim to streamline the process, their accuracy and efficiency are less than satisfactory. The precision of fault localization results is of paramount importance, as it underpins engineers' trust in the diagnostic conclusions, which are derived from multiple perspectives and offer comprehensive insights. Therefore, a more reliable method is required to automatically identify the associative relationships among fault events and the propagation path. To achieve this, KGroot uses event knowledge and the correlation between events to perform root cause analysis (RCA) by integrating knowledge graphs and graph convolutional networks (GCNs). A fault event knowledge graph (FEKG) is built based on historical data, an online graph is constructed in real time when a failure event occurs, and the similarity between each knowledge graph and the online graph is compared using GCNs to pinpoint the fault type through a ranking strategy. Comprehensive experiments demonstrate that KGroot can locate the root cause within the top 3 potential causes with 93.5% accuracy, at second-level speed. This performance matches the level of real-time fault diagnosis in industrial environments and significantly surpasses state-of-the-art RCA baselines in terms of effectiveness and efficiency.
[ { "version": "v1", "created": "Sun, 11 Feb 2024 10:30:38 GMT" } ]
1,708,560,000,000
[ [ "Wang", "Tingting", "" ], [ "Qi", "Guilin", "" ], [ "Wu", "Tianxing", "" ] ]
2402.13290
Goonmeet Bajaj
Goonmeet Bajaj, Srinivasan Parthasarathy, Valerie L. Shalin, Amit Sheth
Grounding from an AI and Cognitive Science Lens
null
IEEE Intelligent Systems, 2024
10.1109/MIS.2024.3366669
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Grounding is a challenging problem, requiring a formal definition and different levels of abstraction. This article explores grounding from both cognitive science and machine learning perspectives. It identifies the subtleties of grounding, its significance for collaborative agents, and similarities and differences in grounding approaches in both communities. The article examines the potential of neuro-symbolic approaches tailored for grounding tasks, showcasing how they can more comprehensively address grounding. Finally, we discuss areas for further exploration and development in grounding.
[ { "version": "v1", "created": "Mon, 19 Feb 2024 17:44:34 GMT" } ]
1,708,560,000,000
[ [ "Bajaj", "Goonmeet", "" ], [ "Parthasarathy", "Srinivasan", "" ], [ "Shalin", "Valerie L.", "" ], [ "Sheth", "Amit", "" ] ]
2402.13399
Ninell Oldenburg
Ninell Oldenburg and Tan Zhi-Xuan
Learning and Sustaining Shared Normative Systems via Bayesian Rule Induction in Markov Games
Accepted to the 23rd International Conference on Autonomous Agents and Multi-Agent Systems, 8 pages (excl. references), 6 figures/tables, (Appendix: 7 pages, 6 figures/tables). Code available at: https://github.com/ninell-oldenburg/social-contracts
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
A universal feature of human societies is the adoption of systems of rules and norms in the service of cooperative ends. How can we build learning agents that do the same, so that they may flexibly cooperate with the human institutions they are embedded in? We hypothesize that agents can achieve this by assuming there exists a shared set of norms that most others comply with while pursuing their individual desires, even if they do not know the exact content of those norms. By assuming shared norms, a newly introduced agent can infer the norms of an existing population from observations of compliance and violation. Furthermore, groups of agents can converge to a shared set of norms, even if they initially diverge in their beliefs about what the norms are. This in turn enables the stability of the normative system: since agents can bootstrap common knowledge of the norms, this leads the norms to be widely adhered to, enabling new entrants to rapidly learn those norms. We formalize this framework in the context of Markov games and demonstrate its operation in a multi-agent environment via approximately Bayesian rule induction of obligative and prohibitive norms. Using our approach, agents are able to rapidly learn and sustain a variety of cooperative institutions, including resource management norms and compensation for pro-social labor, promoting collective welfare while still allowing agents to act in their own interests.
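A toy illustration of the underlying idea is Bayesian updating over candidate norm hypotheses from observed compliance. The hypotheses, likelihoods, and observation stream below are invented for exposition; the paper's Markov-game formulation with obligative and prohibitive norms is far richer.

```python
# Toy Bayesian rule induction over two norm hypotheses; all numbers are
# illustrative assumptions, not taken from the paper.
posterior = {"prohibition_on_taking": 0.5, "no_norm": 0.5}   # uniform prior
lik_abstain = {"prohibition_on_taking": 0.95, "no_norm": 0.4}

for obs in ["abstain", "abstain", "take", "abstain"]:
    for n in posterior:
        p = lik_abstain[n] if obs == "abstain" else 1 - lik_abstain[n]
        posterior[n] *= p
    z = sum(posterior.values())
    posterior = {n: v / z for n, v in posterior.items()}

print(posterior)  # mildly favors the prohibition despite one observed violation
```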
[ { "version": "v1", "created": "Tue, 20 Feb 2024 21:58:40 GMT" }, { "version": "v2", "created": "Thu, 22 Feb 2024 15:46:21 GMT" } ]
1,708,646,400,000
[ [ "Oldenburg", "Ninell", "" ], [ "Zhi-Xuan", "Tan", "" ] ]
2402.13419
Zhiyu An
Zhiyu An, Xianzhong Ding, Wan Du
Reward Bound for Behavioral Guarantee of Model-based Planning Agents
To be published in ICLR 24 tiny paper track
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent years have seen an emerging interest in the trustworthiness of machine learning-based agents in the wild, especially in robotics, to provide safety assurance for the industry. Obtaining behavioral guarantees for these agents remains an important problem. In this work, we focus on guaranteeing a model-based planning agent reaches a goal state within a specific future time step. We show that there exists a lower bound for the reward at the goal state, such that if the said reward is below that bound, it is impossible to obtain such a guarantee. By extension, we show how to enforce preferences over multiple goals.
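One simple way such a bound can arise is the following illustrative derivation under stated assumptions (our sketch, not the paper's exact statement): if the goal reward $r_g$ is received at step $T$, all other per-step rewards lie in $[0, r_{\max}]$, and returns are discounted by $\gamma \in (0,1)$, then a return-maximizing planner prefers any goal-reaching plan over every goal-avoiding one whenever

```latex
% Illustrative sufficient condition (assumptions stated in the text above):
\gamma^{T} r_g \;>\; \sum_{t=0}^{\infty} \gamma^{t}\, r_{\max}
  \;=\; \frac{r_{\max}}{1-\gamma}
\quad\Longleftrightarrow\quad
r_g \;>\; \frac{r_{\max}}{\gamma^{T}\,(1-\gamma)} .
```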
[ { "version": "v1", "created": "Tue, 20 Feb 2024 23:17:07 GMT" } ]
1,708,560,000,000
[ [ "An", "Zhiyu", "" ], [ "Ding", "Xianzhong", "" ], [ "Du", "Wan", "" ] ]
2402.13782
Vincent Derkinderen
Vincent Derkinderen, Robin Manhaeve, Pedro Zuidberg Dos Martires, Luc De Raedt
Semirings for Probabilistic and Neuro-Symbolic Logic Programming
null
International Journal of Approximate Reasoning (2024): 109130
10.1016/j.ijar.2024.109130
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of probabilistic logic programming (PLP) focuses on integrating probabilistic models into programming languages based on logic. Over the past 30 years, numerous languages and frameworks have been developed for modeling, inference and learning in probabilistic logic programs. While originally PLP focused on discrete probability, more recent approaches have incorporated continuous distributions as well as neural networks, effectively yielding neural-symbolic methods. We provide a unified algebraic perspective on PLP, showing that many if not most of the extensions of PLP can be cast within a common algebraic logic programming framework, in which facts are labeled with elements of a semiring and disjunction and conjunction are replaced by addition and multiplication. This holds not only for the PLP variations themselves but also for the underlying execution mechanism, which is based on (algebraic) model counting.
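A toy rendering of the semiring view: the same labeled formula evaluated under the probability semiring and the Boolean semiring. Real algebraic model counting operates on compiled circuits rather than naive world enumeration; the example program is invented for illustration.

```python
# Toy algebraic model counting: one weighted formula, two semirings.
from itertools import product

facts = {"burglary": 0.1, "earthquake": 0.2}
def alarm(world): return world["burglary"] or world["earthquake"]

def amc(plus, times, one, label):
    """Sum (in the semiring) over satisfying worlds of the product of labels."""
    total = None
    for values in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, values))
        if not alarm(world):
            continue
        w = one
        for f, v in world.items():
            w = times(w, label(f, v))
        total = w if total is None else plus(total, w)
    return total

prob = amc(lambda a, b: a + b, lambda a, b: a * b, 1.0,
           lambda f, v: facts[f] if v else 1 - facts[f])
sat  = amc(lambda a, b: a or b, lambda a, b: a and b, True,
           lambda f, v: True)
print(prob, sat)   # P(alarm) = 1 - 0.9*0.8 = 0.28; satisfiable = True
```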
[ { "version": "v1", "created": "Wed, 21 Feb 2024 13:06:52 GMT" } ]
1,708,560,000,000
[ [ "Derkinderen", "Vincent", "" ], [ "Manhaeve", "Robin", "" ], [ "Martires", "Pedro Zuidberg Dos", "" ], [ "De Raedt", "Luc", "" ] ]
2402.13785
Florent Delgrange
Florent Delgrange, Guy Avni, Anna Lukina, Christian Schilling, Ann Now\'e, and Guillermo A. P\'erez
Synthesis of Hierarchical Controllers Based on Deep Reinforcement Learning Policies
19 pages main text, 17 pages Appendix (excluding references)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We propose a novel approach to the problem of controller design for environments modeled as Markov decision processes (MDPs). Specifically, we consider a hierarchical MDP a graph with each vertex populated by an MDP called a "room". We first apply deep reinforcement learning (DRL) to obtain low-level policies for each room, scaling to large rooms of unknown structure. We then apply reactive synthesis to obtain a high-level planner that chooses which low-level policy to execute in each room. The central challenge in synthesizing the planner is the need for modeling rooms. We address this challenge by developing a DRL procedure to train concise "latent" policies together with PAC guarantees on their performance. Unlike previous approaches, ours circumvents a model distillation step. Our approach combats sparse rewards in DRL and enables reusability of low-level policies. We demonstrate feasibility in a case study involving agent navigation amid moving obstacles.
[ { "version": "v1", "created": "Wed, 21 Feb 2024 13:10:58 GMT" } ]
1,708,560,000,000
[ [ "Delgrange", "Florent", "" ], [ "Avni", "Guy", "" ], [ "Lukina", "Anna", "" ], [ "Schilling", "Christian", "" ], [ "Nowé", "Ann", "" ], [ "Pérez", "Guillermo A.", "" ] ]
2402.13927
Yun-Shiuan Chuang
Yun-Shiuan Chuang, Jerry Zhu, Timothy T. Rogers
The Delusional Hedge Algorithm as a Model of Human Learning from Diverse Opinions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whereas cognitive models of learning often assume direct experience with both the features of an event and with a true label or outcome, much of everyday learning arises from hearing the opinions of others, without direct access to either the experience or the ground truth outcome. We consider how people can learn which opinions to trust in such scenarios by extending the hedge algorithm: a classic solution for learning from diverse information sources. We first introduce a semi-supervised variant, which we call the delusional hedge, capable of learning from both supervised and unsupervised experiences. In two experiments, we examine the alignment between human judgments and predictions from the standard hedge, the delusional hedge, and a heuristic baseline model. Results indicate that humans effectively incorporate both labeled and unlabeled information in a manner consistent with the delusional hedge algorithm -- suggesting that human learners not only gauge the accuracy of information sources but also their consistency with other reliable sources. The findings advance our understanding of human learning from diverse opinions, with implications for the development of algorithms that better capture how people learn to weigh conflicting information sources.
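For readers unfamiliar with the hedge algorithm, here is a minimal sketch of its multiplicative-weights core, with unsupervised rounds handled by pseudo-labeling; treating the weighted majority as the pseudo-label is our simplification, not necessarily the paper's update rule.

```python
# Minimal hedge (multiplicative weights) sketch. The "delusional" aspect is
# approximated by pseudo-labeling unsupervised rounds with the weighted
# majority; that simplification is an assumption of this sketch.
import numpy as np

def hedge_update(w, losses, eta=0.5):
    """One multiplicative-weights step: down-weight sources that erred."""
    w = w * np.exp(-eta * losses)
    return w / w.sum()

rng = np.random.default_rng(0)
w = np.ones(3) / 3                         # three opinion sources
for t in range(200):
    preds = rng.integers(0, 2, size=3)     # each source's binary opinion
    if t % 2 == 0:                         # supervised round: label observed
        y = int(preds[0])                  # source 0 happens to be reliable
    else:                                  # unsupervised round: pseudo-label
        y = int(round(float(w @ preds)))   # weighted-majority vote
    w = hedge_update(w, (preds != y).astype(float))
print(w)  # most of the weight ends up on the reliable source 0
```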
[ { "version": "v1", "created": "Wed, 21 Feb 2024 16:48:07 GMT" } ]
1,708,560,000,000
[ [ "Chuang", "Yun-Shiuan", "" ], [ "Zhu", "Jerry", "" ], [ "Rogers", "Timothy T.", "" ] ]
2402.14083
Lucas Lehnert
Lucas Lehnert, Sainbayar Sukhbaatar, DiJia Su, Qinqing Zheng, Paul Mcvay, Michael Rabbat, Yuandong Tian
Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While Transformers have enabled tremendous progress in various application settings, such architectures still trail behind traditional symbolic planners for solving complex decision making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks. This is accomplished by training an encoder-decoder Transformer model to predict the search dynamics of the $A^*$ search algorithm. We fine-tune this model to obtain a Searchformer, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7% of the time, while using up to 26.8% fewer search steps than the $A^*$ implementation that was used for training initially. In our training method, $A^*$'s search dynamics are expressed as a token sequence outlining when task states are added to and removed from the search tree during symbolic planning. Searchformer significantly outperforms baselines that predict the optimal plan directly, with a 5-10$\times$ smaller model size and a 10$\times$ smaller training dataset. Lastly, we demonstrate how Searchformer scales to larger and more complex decision making tasks, with an improved percentage of solved tasks and shortened search dynamics.
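A minimal sketch of what logging A*'s search dynamics as a flat token sequence can look like; the event vocabulary ("create"/"close") and the toy domain are illustrative, and the paper's actual trace format is more detailed.

```python
# Illustrative logging of A* search dynamics as a token sequence.
import heapq

def a_star_trace(start, goal, neighbors, h):
    tokens, frontier, g = [], [(h(start), start)], {start: 0}
    closed = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in closed:
            continue
        tokens += ["close", str(node)]          # node leaves the open set
        closed.add(node)
        if node == goal:
            return tokens
        for nxt, cost in neighbors(node):
            new_g = g[node] + cost
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                tokens += ["create", str(nxt)]  # node enters the open set
                heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return tokens

# Toy 1-D world: integer states, goal at 5, unit step costs.
print(a_star_trace(0, 5, lambda s: [(s + 1, 1), (s - 1, 1)], lambda s: abs(5 - s)))
```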
[ { "version": "v1", "created": "Wed, 21 Feb 2024 19:17:28 GMT" }, { "version": "v2", "created": "Fri, 26 Apr 2024 21:05:19 GMT" } ]
1,714,435,200,000
[ [ "Lehnert", "Lucas", "" ], [ "Sukhbaatar", "Sainbayar", "" ], [ "Su", "DiJia", "" ], [ "Zheng", "Qinqing", "" ], [ "Mcvay", "Paul", "" ], [ "Rabbat", "Michael", "" ], [ "Tian", "Yuandong", "" ] ]
2402.14460
Th\'eophile Champion
Th\'eophile Champion, Howard Bowman, Dimitrije Markovi\'c, Marek Grze\'s
Reframing the Expected Free Energy: Four Formulations and a Unification
17 pages, 2 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Active inference is a leading theory of perception, learning and decision making, which can be applied to neuroscience, robotics, psychology, and machine learning. Active inference is based on the expected free energy, which is mostly justified by the intuitive plausibility of its formulations, e.g., the risk plus ambiguity and information gain / pragmatic value formulations. This paper seeks to formalize the problem of deriving these formulations from a single root expected free energy definition, i.e., the unification problem. Then, we study two settings, each one having its own root expected free energy definition. In the first setting, no justification for the expected free energy has been proposed to date, but all the formulations can be recovered from it. However, in this setting, the agent cannot have arbitrary prior preferences over observations. Indeed, only a limited class of prior preferences over observations is compatible with the likelihood mapping of the generative model. In the second setting, a justification of the root expected free energy definition is known, but this setting only accounts for two formulations, i.e., the risk over states plus ambiguity and entropy plus expected energy formulations.
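For orientation, the commonly cited risk plus ambiguity formulation reads as follows; the notation here is ours, and the paper's two competing root definitions differ from this single expression.

```latex
% Risk + ambiguity formulation of the expected free energy of a policy \pi:
G(\pi) \;=\;
\underbrace{D_{\mathrm{KL}}\!\left[\,q(o \mid \pi)\,\middle\|\,p(o)\,\right]}_{\text{risk}}
\;+\;
\underbrace{\mathbb{E}_{q(s \mid \pi)}\!\left[\,\mathcal{H}\!\left[p(o \mid s)\right]\,\right]}_{\text{ambiguity}}
```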
[ { "version": "v1", "created": "Thu, 22 Feb 2024 11:38:43 GMT" } ]
1,708,646,400,000
[ [ "Champion", "Théophile", "" ], [ "Bowman", "Howard", "" ], [ "Marković", "Dimitrije", "" ], [ "Grześ", "Marek", "" ] ]
2402.14596
Amin Ullah
Amin Ullah, Guilin Qi, Saddam Hussain, Irfan Ullah, Zafar Ali
The Role of LLMs in Sustainable Smart Cities: Applications, Challenges, and Future Directions
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart cities stand as pivotal components in the ongoing pursuit of elevating urban living standards, facilitating the rapid expansion of urban areas while efficiently managing resources through sustainable and scalable innovations. In this regard, as emerging technologies like Artificial Intelligence (AI), the Internet of Things (IoT), big data analytics, and fog and edge computing have become increasingly prevalent, smart city applications grapple with various challenges, including the potential for unauthorized disclosure of confidential and sensitive data. The seamless integration of emerging technologies has played a vital role in sustaining the dynamic pace of their development. This paper explores the substantial potential and applications of Deep Learning (DL), Federated Learning (FL), IoT, Blockchain, Natural Language Processing (NLP), and large language models (LLMs) in optimizing ICT processes within smart cities. We aim to spotlight the vast potential of these technologies as foundational elements that technically strengthen the realization and advancement of smart cities, underscoring their significance in driving innovation within this transformative urban milieu. Our discourse culminates with an exploration of the formidable challenges that DL, FL, IoT, Blockchain, NLP, and LLMs face within these contexts, and we offer insights into potential future directions.
[ { "version": "v1", "created": "Wed, 7 Feb 2024 05:22:10 GMT" } ]
1,708,646,400,000
[ [ "Ullah", "Amin", "" ], [ "Qi", "Guilin", "" ], [ "Hussain", "Saddam", "" ], [ "Ullah", "Irfan", "" ], [ "Ali", "Zafar", "" ] ]
2402.14600
Wei Du
Wenxuan Fang and Wei Du and Renchu He and Yang Tang and Yaochu Jin and Gary G. Yen
Diffusion Model-Based Multiobjective Optimization for Gasoline Blending Scheduling
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Gasoline blending scheduling uses resource allocation and operation sequencing to meet a refinery's production requirements. The presence of nonlinearity, integer constraints, and a large number of decision variables adds complexity to this problem, posing challenges for traditional and evolutionary algorithms. This paper introduces a novel multiobjective optimization approach driven by a diffusion model (named DMO), which is designed specifically for gasoline blending scheduling. To address integer constraints and generate feasible schedules, the diffusion model creates multiple intermediate distributions between Gaussian noise and the feasible domain. Through iterative processes, the solutions transition from Gaussian noise to feasible schedules while optimizing the objectives using the gradient descent method. DMO achieves simultaneous objective optimization and constraint adherence. Comparative tests are conducted to evaluate DMO's performance across various scales. The experimental results demonstrate that DMO surpasses state-of-the-art multiobjective evolutionary algorithms in terms of efficiency when solving gasoline blending scheduling problems.
[ { "version": "v1", "created": "Sun, 4 Feb 2024 05:46:28 GMT" } ]
1,708,646,400,000
[ [ "Fang", "Wenxuan", "" ], [ "Du", "Wei", "" ], [ "He", "Renchu", "" ], [ "Tang", "Yang", "" ], [ "Jin", "Yaochu", "" ], [ "Yen", "Gary G.", "" ] ]
2402.14757
Divija Swetha Gadiraju
Divija Swetha Gadiraju, Saeed Eftekhar Azam and Deepak Khazanchi
SHM-Traffic: DRL and Transfer learning based UAV Control for Structural Health Monitoring of Bridges with Traffic
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This work focuses on using advanced techniques for structural health monitoring (SHM) of bridges with ongoing traffic. We propose an approach using deep reinforcement learning (DRL)-based control for an Unmanned Aerial Vehicle (UAV). Our approach conducts a concrete bridge deck survey while traffic is ongoing and detects cracks. The UAV performs the crack detection, and the location of cracks is initially unknown. We use two edge detection techniques. First, we use Canny edge detection for crack detection. We also use a Convolutional Neural Network (CNN) for crack detection and compare it with Canny edge detection. Transfer learning is applied using a CNN with pre-trained weights obtained from a crack image dataset. This enables the model to adapt and improve its performance in identifying and localizing cracks. Proximal Policy Optimization (PPO) is applied for UAV control and bridge surveys. Experimentation across various scenarios is performed to evaluate the performance of the proposed methodology. Key metrics such as task completion time and reward convergence are observed to gauge the effectiveness of the approach. We observe that the Canny edge detector offers up to 40\% lower task completion time, while the CNN excels with up to 12\% better damage detection and 1.8 times better rewards.
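A minimal sketch of the Canny step on a synthetic grayscale frame using OpenCV; the blur kernel and thresholds are illustrative and would need tuning on real deck imagery.

```python
# Illustrative Canny pass on a synthetic frame containing one "crack" line.
import cv2
import numpy as np

img = np.zeros((64, 64), dtype=np.uint8)
cv2.line(img, (5, 5), (60, 40), color=255, thickness=1)  # synthetic "crack"

blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress sensor noise first
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
print("edge pixels flagged:", int(np.count_nonzero(edges)))
```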
[ { "version": "v1", "created": "Thu, 22 Feb 2024 18:19:45 GMT" } ]
1,708,646,400,000
[ [ "Gadiraju", "Divija Swetha", "" ], [ "Azam", "Saeed Eftekhar", "" ], [ "Khazanchi", "Deepak", "" ] ]
2402.15075
Peng Lin
Peng Lin, Martin Neil and Norman Fenton
Stacking Factorizing Partitioned Expressions in Hybrid Bayesian Network Models
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Hybrid Bayesian networks (HBN) contain complex conditional probabilistic distributions (CPD) specified as partitioned expressions over discrete and continuous variables. The size of these CPDs grows exponentially with the number of parent nodes when using discrete inference, resulting in significant inefficiency. Normally, an effective way to reduce the CPD size is to use a binary factorization (BF) algorithm to decompose the statistical or arithmetic functions in the CPD by factorizing the number of connected parent nodes to sets of size two. However, the BF algorithm was not designed to handle partitioned expressions. Hence, we propose a new algorithm called stacking factorization (SF) to decompose the partitioned expressions. The SF algorithm creates intermediate nodes to incrementally reconstruct the densities in the original partitioned expression, allowing no more than two continuous parent nodes to be connected to each child node in the resulting HBN. SF can be either used independently or combined with the BF algorithm. We show that the SF+BF algorithm significantly reduces the CPD size and contributes to lowering the tree-width of a model, thus improving efficiency.
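The shared factorization idea can be pictured structurally: a node with many parents is rewritten into a chain of intermediate nodes, each with at most two parents. The toy below does this for a plain sum; SF's actual contribution, handling partitioned expressions, is not modeled here.

```python
# Structural sketch of chaining intermediate nodes so each node keeps at most
# two parents (here for a plain associative operation like a sum).
def stack_factorize(parents):
    nodes, prev = [], parents[0]
    for i, p in enumerate(parents[1:], start=1):
        node = f"i{i}"
        nodes.append((node, (prev, p)))   # node combines prev with parent p
        prev = node
    return nodes

for name, deps in stack_factorize(["a", "b", "c", "d"]):
    print(f"{name} <- {deps}")
# i1 <- ('a', 'b'); i2 <- ('i1', 'c'); i3 <- ('i2', 'd')
```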
[ { "version": "v1", "created": "Fri, 23 Feb 2024 03:33:06 GMT" } ]
1,708,905,600,000
[ [ "Lin", "Peng", "" ], [ "Neil", "Martin", "" ], [ "Fenton", "Norman", "" ] ]
2402.15140
Yonglin Jing
Yonglin Jing
A Relation-Interactive Approach for Message Passing in Hyper-relational Knowledge Graphs
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hyper-relational knowledge graphs (KGs) contain additional key-value pairs, providing more information about the relations. In many scenarios, the same relation can have distinct key-value pairs, making the original triple fact more recognizable and specific. Prior studies on hyper-relational KGs have established a solid standard method for hyper-relational graph encoding. In this work, we propose a message-passing-based graph encoder with global relation structure awareness, which we call ReSaE. Compared to the prior state-of-the-art approach, ReSaE emphasizes the interaction of relations during the message-passing process and optimizes the readout structure for link prediction tasks. Overall, ReSaE provides an encoding solution for hyper-relational KGs and ensures stronger performance on downstream link prediction tasks. Our experiments demonstrate that ReSaE achieves state-of-the-art performance on multiple link prediction benchmarks. Furthermore, we also analyze the influence of different model structures on model performance.
[ { "version": "v1", "created": "Fri, 23 Feb 2024 06:55:04 GMT" }, { "version": "v2", "created": "Sat, 2 Mar 2024 04:59:36 GMT" } ]
1,709,596,800,000
[ [ "Jing", "Yonglin", "" ] ]
2402.15445
Paolo Liberatore
Paolo Liberatore
Can we forget how we learned? Doxastic redundancy in iterated belief revision
formerly part of arXiv:2305.09200
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How information was acquired may become irrelevant. An obvious case is when something is confirmed many times. In terms of iterated belief revision, a specific revision may become irrelevant in the presence of others. Simple repetitions are an example, but not the only case when this happens. Sometimes a revision becomes redundant even when no other revision is equal to it, or even when no other implies it. A necessary and sufficient condition for the redundancy of the first of a sequence of lexicographic revisions is given. The problem is coNP-complete even with only two propositional revisions. Complexity is the same in the Horn case, but only with an unbounded number of revisions: it becomes polynomial with two revisions. Lexicographic revisions are not only relevant by themselves, but also because sequences of them are the most compact of the common mechanisms used to represent the state of an iterated revision process. Shortening sequences of lexicographic revisions thus means shortening the most compact representations of iterated belief revision states.
[ { "version": "v1", "created": "Fri, 23 Feb 2024 17:09:04 GMT" } ]
1,708,905,600,000
[ [ "Liberatore", "Paolo", "" ] ]
2402.15522
Enric Rodriguez Carbonell
Robert Nieuwenhuis, Albert Oliveras, Enric Rodriguez-Carbonell
IntSat: Integer Linear Programming by Conflict-Driven Constraint-Learning
48 pages. This is the Author's Original Manuscript of the journal version
null
10.1080/10556788.2023.2246167
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
State-of-the-art SAT solvers are nowadays able to handle huge real-world instances. The key to this success is the so-called Conflict-Driven Clause-Learning (CDCL) scheme, which encompasses a number of techniques that exploit the conflicts that are encountered during the search for a solution. In this article we extend these techniques to Integer Linear Programming (ILP), where variables may take general integer values instead of purely binary ones, constraints are more expressive than just propositional clauses, and there may be an objective function to optimise. We explain how these methods can be implemented efficiently, and discuss possible improvements. Our work is backed with a basic implementation that shows that, even in this far less mature stage, our techniques are already a useful complement to the state of the art in ILP solving.
[ { "version": "v1", "created": "Fri, 16 Feb 2024 12:48:40 GMT" } ]
1,708,992,000,000
[ [ "Nieuwenhuis", "Robert", "" ], [ "Oliveras", "Albert", "" ], [ "Rodriguez-Carbonell", "Enric", "" ] ]
2402.15960
Yuanhang Zheng
Yuanhang Zheng, Peng Li, Ming Yan, Ji Zhang, Fei Huang and Yang Liu
Budget-Constrained Tool Learning with Planning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite intensive efforts devoted to tool learning, the problem of budget-constrained tool learning, which focuses on resolving user queries within a specific budget constraint, has been widely overlooked. This paper proposes a novel method for budget-constrained tool learning. Our approach involves creating a preferable plan under the budget constraint before utilizing the tools. This plan outlines the feasible tools and the maximum number of times they can be employed, offering a comprehensive overview of the tool learning process for large language models. This allows them to allocate the budget from a broader perspective. To devise the plan without incurring significant extra costs, we suggest initially estimating the usefulness of the candidate tools based on past experience. Subsequently, we employ dynamic programming to formulate the plan. Experimental results demonstrate that our method can be integrated with various tool learning methods, significantly enhancing their effectiveness under strict budget constraints.
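The planning step can be pictured as a bounded-knapsack dynamic program over tool-call budgets. The sketch below is our stand-in with made-up costs and usefulness estimates, not the paper's exact formulation; in the paper, usefulness is first estimated from past experience.

```python
# Bounded-knapsack DP over tool-call budgets; all numbers are illustrative.
def plan(tools, budget):
    """tools: list of (name, unit_cost, est_usefulness, max_calls).
    best[b] is the maximum total usefulness achievable at cost b."""
    best = [0.0] * (budget + 1)
    choice = [dict() for _ in range(budget + 1)]
    for name, cost, gain, max_calls in tools:
        for _ in range(max_calls):                 # unroll bounded copies
            for b in range(budget, cost - 1, -1):  # descending: 0/1 per copy
                if best[b - cost] + gain > best[b]:
                    best[b] = best[b - cost] + gain
                    choice[b] = dict(choice[b - cost])
                    choice[b][name] = choice[b].get(name, 0) + 1
    return best[budget], choice[budget]

print(plan([("search", 2, 3.0, 3), ("calculator", 1, 1.0, 5)], budget=7))
# -> (10.0, {'search': 3, 'calculator': 1})
```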
[ { "version": "v1", "created": "Sun, 25 Feb 2024 02:46:33 GMT" } ]
1,708,992,000,000
[ [ "Zheng", "Yuanhang", "" ], [ "Li", "Peng", "" ], [ "Yan", "Ming", "" ], [ "Zhang", "Ji", "" ], [ "Huang", "Fei", "" ], [ "Liu", "Yang", "" ] ]
2402.16505
J.-M. Chauvet
Jean-Marie Chauvet
Memory GAPS: Would LLMs pass the Tulving Test?
15 pages, 3 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
The Tulving Test was designed to investigate memory performance in recognition and recall tasks. Its results help assess the relevance of the "Synergistic Ecphory Model" of memory and similar RK paradigms in human performance. This paper starts investigating whether the more than forty-year-old framework sheds some light on LLMs' acts of remembering.
[ { "version": "v1", "created": "Mon, 26 Feb 2024 11:40:51 GMT" }, { "version": "v2", "created": "Wed, 28 Feb 2024 15:40:31 GMT" } ]
1,709,164,800,000
[ [ "Chauvet", "Jean-Marie", "" ] ]
2402.16924
Marcin Jan Schroeder
Marcin J. Schroeder
Theoretical Unification of the Fractured Aspects of Information
52 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The article has as its main objective the identification of fundamental epistemological obstacles in the study of information related to unnecessary methodological assumptions and the demystification of popular beliefs in the fundamental divisions of the aspects of information that can be understood as Bachelardian rupture of epistemological obstacles. These general considerations are preceded by an overview of the motivations for the study of information and the role of the concept of information in the conceptualization of intelligence, complexity, and consciousness justifying the need for a sufficiently general perspective in the study of information, and are followed at the end of the article by a brief exposition of an example of a possible application in the development of the unified theory of information free from unnecessary divisions and claims of superiority of the existing preferences in methodology. The reference to Gaston Bachelard and his ideas of epistemological obstacles and epistemological ruptures seems highly appropriate for the reflection on the development of information study, in particular in the context of obstacles such as the absence of semantics of information, negligence of its structural analysis, separation of its digital and analog forms, and misguided use of mathematics.
[ { "version": "v1", "created": "Mon, 26 Feb 2024 10:35:41 GMT" } ]
1,709,078,400,000
[ [ "Schroeder", "Marcin J.", "" ] ]
2402.19195
Tiroshan Madhushanka
Tiroshan Madushanka, Ryutaro Ichise
Negative Sampling in Knowledge Graph Representation Learning: A Review
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Knowledge graph representation learning (KGRL) or knowledge graph embedding (KGE) plays a crucial role in AI applications for knowledge construction and information exploration. These models aim to encode entities and relations present in a knowledge graph into a lower-dimensional vector space. During the training process of KGE models, using positive and negative samples becomes essential for discrimination purposes. However, obtaining negative samples directly from existing knowledge graphs poses a challenge, emphasizing the need for effective generation techniques. The quality of these negative samples greatly impacts the accuracy of the learned embeddings, making their generation a critical aspect of KGRL. This comprehensive survey paper systematically reviews various negative sampling (NS) methods and their contributions to the success of KGRL. Their respective advantages and disadvantages are outlined by categorizing existing NS methods into five distinct categories. Moreover, this survey identifies open research questions that serve as potential directions for future investigations. By offering a generalization and alignment of fundamental NS concepts, this survey provides valuable insights for designing effective NS methods in the context of KGRL and serves as a motivating force for further advancements in the field.
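As a concrete baseline among the surveyed NS families, uniform negative sampling corrupts the head or tail of a positive triple and filters out known positives. The tiny knowledge graph below is invented for illustration.

```python
# Uniform (filtered) negative sampling over a toy knowledge graph.
import random

triples = {("paris", "capital_of", "france"), ("berlin", "capital_of", "germany")}
entities = ["paris", "berlin", "france", "germany", "tokyo"]

def uniform_negative(triple, known=triples, rng=random.Random(0)):
    h, r, t = triple
    while True:
        if rng.random() < 0.5:
            cand = (rng.choice(entities), r, t)   # corrupt the head
        else:
            cand = (h, r, rng.choice(entities))   # corrupt the tail
        if cand not in known and cand != triple:  # filter known positives
            return cand

print(uniform_negative(("paris", "capital_of", "france")))
```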
[ { "version": "v1", "created": "Thu, 29 Feb 2024 14:26:20 GMT" } ]
1,709,251,200,000
[ [ "Madushanka", "Tiroshan", "" ], [ "Ichise", "Ryutaro", "" ] ]
2403.00685
Loris Bozzato
Gabriele Sacco, Loris Bozzato, Oliver Kutz
Know your exceptions: Towards an Ontology of Exceptions in Knowledge Representation
18 pages, 4 pages are appendix. (v2 updates: minor revisions on discussions, terminology and text editing)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Defeasible reasoning is a kind of reasoning where some generalisations may not be valid in all circumstances, that is, general conclusions may fail in some cases. Various formalisms have been developed to model this kind of reasoning, which is characteristic of common-sense contexts. However, it is not easy for a modeller to choose, among these systems, the one that best fits their domain from an ontological point of view. In this paper we first propose a framework based on the notions of exceptionality and defeasibility, in order to be able to compare formalisms and reveal their ontological commitments. Then, we apply this framework to compare four systems, showing the differences that may occur from an ontological perspective.
[ { "version": "v1", "created": "Fri, 1 Mar 2024 17:19:35 GMT" }, { "version": "v2", "created": "Tue, 5 Mar 2024 16:35:43 GMT" } ]
1,709,683,200,000
[ [ "Sacco", "Gabriele", "" ], [ "Bozzato", "Loris", "" ], [ "Kutz", "Oliver", "" ] ]
2403.00690
Dominik Jeurissen
Dominik Jeurissen and Diego Perez-Liebana and Jeremy Gow and Duygu Cakmak and James Kwan
Playing NetHack with LLMs: Potential & Limitations as Zero-Shot Agents
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have shown great success as high-level planners for zero-shot game-playing agents. However, these agents are primarily evaluated on Minecraft, where long-term planning is relatively straightforward. In contrast, agents tested in dynamic robot environments face limitations due to simplistic environments with only a few objects and interactions. To fill this gap in the literature, we present NetPlay, the first LLM-powered zero-shot agent for the challenging roguelike NetHack. NetHack is a particularly challenging environment due to its diverse set of items and monsters, complex interactions, and many ways to die. NetPlay uses an architecture designed for dynamic robot environments, modified for NetHack. Like previous approaches, it prompts the LLM to choose from predefined skills and tracks past interactions to enhance decision-making. Given NetHack's unpredictable nature, NetPlay detects important game events to interrupt running skills, enabling it to react to unforeseen circumstances. While NetPlay demonstrates considerable flexibility and proficiency in interacting with NetHack's mechanics, it struggles with ambiguous task descriptions and a lack of explicit feedback. Our findings demonstrate that NetPlay performs best with detailed context information, indicating the necessity for dynamic methods in supplying context information for complex games such as NetHack.
[ { "version": "v1", "created": "Fri, 1 Mar 2024 17:22:16 GMT" } ]
1,709,510,400,000
[ [ "Jeurissen", "Dominik", "" ], [ "Perez-Liebana", "Diego", "" ], [ "Gow", "Jeremy", "" ], [ "Cakmak", "Duygu", "" ], [ "Kwan", "James", "" ] ]
2403.00783
Hankz Hankui Zhuo
Hankz Hankui Zhuo and Xin Chen and Rong Pan
On the Roles of LLMs in Planning: Embedding LLMs into Planning Graphs
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Plan synthesis aims to generate a course of actions or policies to transition from given initial states to goal states, provided domain models that could be designed by experts or learnt from training data or interactions with the world. Intrigued by the claims of emergent planning capabilities in large language models (LLMs), several works have investigated the planning effectiveness of LLMs, without considering any utilization of off-the-shelf planning techniques alongside them. In this paper, we aim to further study the planning capability of LLMs by investigating the roles LLMs can play in off-the-shelf planning frameworks. To do this, we investigate the effectiveness of embedding LLMs into one of the well-known planning frameworks, graph-based planning, proposing a novel LLMs-based planning framework with LLMs embedded in two levels of planning graphs, i.e., the mutual-constraints generation level and the constraints solving level. We empirically exhibit the effectiveness of our proposed framework in various planning domains.
[ { "version": "v1", "created": "Sun, 18 Feb 2024 15:53:32 GMT" } ]
1,709,596,800,000
[ [ "Zhuo", "Hankz Hankui", "" ], [ "Chen", "Xin", "" ], [ "Pan", "Rong", "" ] ]
2403.00833
Bidipta Sarkar
Qiuyuan Huang, Naoki Wake, Bidipta Sarkar, Zane Durante, Ran Gong, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Noboru Kuno, Ade Famoti, Ashley Llorens, John Langford, Hoi Vo, Li Fei-Fei, Katsu Ikeuchi, Jianfeng Gao
Position Paper: Agent AI Towards a Holistic Intelligence
22 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:2401.03568
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advancements in large foundation models have remarkably enhanced our understanding of sensory information in open-world environments. In leveraging the power of foundation models, it is crucial for AI research to pivot away from excessive reductionism and toward an emphasis on systems that function as cohesive wholes. Specifically, we emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions. The emerging field of Agent AI spans a wide range of existing embodied and agent-based multimodal interactions, including robotics, gaming, and healthcare systems. In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model. On top of this idea, we discuss how Agent AI exhibits remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. Furthermore, we discuss the potential of Agent AI from an interdisciplinary perspective, underscoring AI cognition and consciousness within scientific discourse. We believe that these discussions serve as a basis for future research directions and encourage broader societal engagement.
[ { "version": "v1", "created": "Wed, 28 Feb 2024 16:09:56 GMT" } ]
1,709,596,800,000
[ [ "Huang", "Qiuyuan", "" ], [ "Wake", "Naoki", "" ], [ "Sarkar", "Bidipta", "" ], [ "Durante", "Zane", "" ], [ "Gong", "Ran", "" ], [ "Taori", "Rohan", "" ], [ "Noda", "Yusuke", "" ], [ "Terzopoulos", "Demetri", "" ], [ "Kuno", "Noboru", "" ], [ "Famoti", "Ade", "" ], [ "Llorens", "Ashley", "" ], [ "Langford", "John", "" ], [ "Vo", "Hoi", "" ], [ "Fei-Fei", "Li", "" ], [ "Ikeuchi", "Katsu", "" ], [ "Gao", "Jianfeng", "" ] ]
2403.00980
Saugat Aryal
Saugat Aryal, Mark T. Keane
Even-Ifs From If-Onlys: Are the Best Semi-Factual Explanations Found Using Counterfactuals As Guides?
16 pages, 5 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, counterfactuals using "if-only" explanations have become very popular in eXplainable AI (XAI), as they describe which changes to the feature-inputs of a black-box AI system result in changes to a (usually negative) decision-outcome. Even more recently, semi-factuals using "even-if" explanations have gained more attention. They elucidate the feature-input changes that do not change the decision-outcome of the AI system, with a potential to suggest more beneficial recourses. Some semi-factual methods use counterfactuals to the query-instance to guide semi-factual production (so-called counterfactual-guided methods), whereas others do not (so-called counterfactual-free methods). In this work, we perform comprehensive tests of 8 semi-factual methods on 7 datasets using 5 key metrics, to determine whether counterfactual guidance is necessary to find the best semi-factuals. The results of these tests suggest not, but rather that computing other aspects of the decision space leads to better semi-factual XAI.
[ { "version": "v1", "created": "Fri, 1 Mar 2024 21:04:48 GMT" }, { "version": "v2", "created": "Thu, 25 Apr 2024 15:36:15 GMT" } ]
1,714,089,600,000
[ [ "Aryal", "Saugat", "" ], [ "Keane", "Mark T.", "" ] ]
2403.01199
Sankalpa Ghose
Sankalpa Ghose, Yip Fai Tse, Kasra Rasaee, Jeff Sebo, Peter Singer
The Case for Animal-Friendly AI
AAAI 2024 Workshop on Public Sector LLMs: Algorithmic and Sociotechnical Design. 12 pages, 11 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence is seen as increasingly important, and potentially profoundly so, but the fields of AI ethics and AI engineering have not fully recognized that these technologies, including large language models (LLMs), will have massive impacts on animals. We argue that this impact matters, because animals matter morally. As a first experiment in evaluating animal consideration in LLMs, we constructed a proof-of-concept Evaluation System, which assesses LLM responses and biases from multiple perspectives. This system evaluates LLM outputs by two criteria: their truthfulness, and the degree of consideration they give to the interests of animals. We tested OpenAI ChatGPT 4 and Anthropic Claude 2.1 using a set of structured queries and predefined normative perspectives. Preliminary results suggest that the outcomes of the tested models can be benchmarked regarding the consideration they give to animals, and that generated positions and biases might be addressed and mitigated with more developed and validated systems. Our research contributes one possible approach to integrating animal ethics in AI, opening pathways for future studies and practical applications in various fields, including education, public policy, and regulation, that involve or relate to animals and society. Overall, this study serves as a step towards more useful and responsible AI systems that better recognize and respect the vital interests and perspectives of all sentient beings.
[ { "version": "v1", "created": "Sat, 2 Mar 2024 12:41:11 GMT" } ]
1,709,596,800,000
[ [ "Ghose", "Sankalpa", "" ], [ "Tse", "Yip Fai", "" ], [ "Rasaee", "Kasra", "" ], [ "Sebo", "Jeff", "" ], [ "Singer", "Peter", "" ] ]
2403.01508
Weizhi Fei
Weizhi Fei, Zihao Wang, Hang Yin, Yang Duan, Hanghang Tong, Yangqiu Song
Soft Reasoning on Uncertain Knowledge Graphs
10 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of machine learning-based logical query answering enables reasoning with large-scale and incomplete knowledge graphs. This paper further advances this line of research by considering the uncertainty in the knowledge. The uncertain nature of knowledge is widely observed in the real world, but \textit{does not} align seamlessly with the first-order logic underpinning existing studies. To bridge this gap, we study the setting of soft queries on uncertain knowledge, which is motivated by the establishment of soft constraint programming. We further propose an ML-based approach with both forward inference and backward calibration to answer soft queries on large-scale, incomplete, and uncertain knowledge graphs. Theoretical analysis shows that our methods share the same complexity as state-of-the-art inference algorithms for first-order queries. Empirical results demonstrate the superior performance of our approach over previous ML-based methods with number embedding extensions.
[ { "version": "v1", "created": "Sun, 3 Mar 2024 13:13:53 GMT" } ]
1,709,596,800,000
[ [ "Fei", "Weizhi", "" ], [ "Wang", "Zihao", "" ], [ "Yin", "Hang", "" ], [ "Duan", "Yang", "" ], [ "Tong", "Hanghang", "" ], [ "Song", "Yangqiu", "" ] ]
2403.02053
Zhipeng Ma
Zhipeng Ma, Bo N{\o}rregaard J{\o}rgensen, Zheng Ma
A Scoping Review of Energy-Efficient Driving Behaviors and Applied State-of-the-Art AI Methods
null
Energies 2024, 17, 500
10.3390/en17020500
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The transportation sector remains a major contributor to greenhouse gas emissions. Understanding energy-efficient driving behaviors and utilizing energy-efficient driving strategies are essential to reduce vehicles' fuel consumption. However, there has been no comprehensive investigation into energy-efficient driving behaviors and strategies. Furthermore, many state-of-the-art AI models have been applied to the analysis of eco-friendly driving styles, but no overview is available. To fill the gap, this paper conducts a thorough literature review on ecological driving behaviors and styles, and analyzes the driving factors influencing energy consumption and state-of-the-art methodologies. Through a thorough scoping-review process, the methodologies and related data are compared. The results show that the factors that impact driving behaviors can be summarized into eleven features, including speed, acceleration, deceleration, and pedal use. This paper finds that supervised/unsupervised learning algorithms and reinforcement learning frameworks have been popularly used to model a vehicle's energy consumption with multi-dimensional data. Furthermore, the literature shows that driving data are collected either from simulators or from real-world experiments, and that real-world data are mainly stored and transmitted by meters, controller area networks, onboard data services, smartphones, and additional sensors installed in the vehicle. Based on driving behavior factors, driver characteristics, and safety rules, this paper recommends nine energy-efficient driving styles, including four guidelines for drivers' selection and adjustment of vehicle parameters, three recommendations for energy-efficient driving styles in different driving scenarios, and two subjective suggestions for different types of drivers and employers.
[ { "version": "v1", "created": "Mon, 4 Mar 2024 13:57:34 GMT" } ]
1,709,596,800,000
[ [ "Ma", "Zhipeng", "" ], [ "Jørgensen", "Bo Nørregaard", "" ], [ "Ma", "Zheng", "" ] ]
2403.02054
Shuvayan Brahmachary
Shuvayan Brahmachary, Subodh M. Joshi, Aniruddha Panda, Kaushik Koneripalli, Arun Kumar Sagotra, Harshil Patel, Ankush Sharma, Ameya D. Jagtap, Kaushic Kalyanaraman
Large Language Model-Based Evolutionary Optimizer: Reasoning with elitism
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large Language Models (LLMs) have demonstrated remarkable reasoning abilities, prompting interest in their application as black-box optimizers. This paper asserts that LLMs possess the capability for zero-shot optimization across diverse scenarios, including multi-objective and high-dimensional problems. We introduce a novel population-based method for numerical optimization using LLMs called Language-Model-Based Evolutionary Optimizer (LEO). Our hypothesis is supported through numerical examples, spanning benchmark and industrial engineering problems such as supersonic nozzle shape optimization, heat transfer, and windfarm layout optimization. We compare our method to several gradient-based and gradient-free optimization approaches. While LLMs yield comparable results to state-of-the-art methods, their imaginative nature and propensity to hallucinate demand careful handling. We provide practical guidelines for obtaining reliable answers from LLMs and discuss method limitations and potential research directions.
[ { "version": "v1", "created": "Mon, 4 Mar 2024 13:57:37 GMT" } ]
1,709,596,800,000
[ [ "Brahmachary", "Shuvayan", "" ], [ "Joshi", "Subodh M.", "" ], [ "Panda", "Aniruddha", "" ], [ "Koneripalli", "Kaushik", "" ], [ "Sagotra", "Arun Kumar", "" ], [ "Patel", "Harshil", "" ], [ "Sharma", "Ankush", "" ], [ "Jagtap", "Ameya D.", "" ], [ "Kalyanaraman", "Kaushic", "" ] ]
2403.02454
Asad Anjum
Asad Anjum, Yuting Li, Noelle Law, M Charity, and Julian Togelius
The Ink Splotch Effect: A Case Study on ChatGPT as a Co-Creative Game Designer
12 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies how large language models (LLMs) can act as effective, high-level creative collaborators and ``muses'' for game design. We model the design of this study after the exercises artists use, such as looking at amorphous ink splotches for creative inspiration. Our goal is to determine whether AI assistance can improve, hinder, or provide an alternative quality to games when compared to the creative intents implemented by human designers. The capabilities of LLMs as game designers are stress-tested by placing the model at the forefront of the decision-making process. Three prototype games are designed across three different genres: (1) a minimalist base game, (2) a game with features and game-feel elements added by a human game designer, and (3) a game with features and feel elements directly implemented from prompted outputs of the LLM, ChatGPT. A user study was conducted in which participants were asked to blindly evaluate the quality of these games and state their preference. We discuss both the development process of communicating creative intent to an AI chatbot and the synthesized open feedback of the participants. We use this data to determine both the benefits and shortcomings of AI in a more design-centric role.
[ { "version": "v1", "created": "Mon, 4 Mar 2024 20:14:38 GMT" } ]
1,709,683,200,000
[ [ "Anjum", "Asad", "" ], [ "Li", "Yuting", "" ], [ "Law", "Noelle", "" ], [ "Charity", "M", "" ], [ "Togelius", "Julian", "" ] ]
2403.02482
Rahul Mihir Patel
Rahul Patel, Elias B. Khalil, David Bergman
MORBDD: Multiobjective Restricted Binary Decision Diagrams by Learning to Sparsify
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In multicriteria decision-making, a user seeks a set of non-dominated solutions to a (constrained) multiobjective optimization problem, the so-called Pareto frontier. In this work, we seek to bring a state-of-the-art method for exact multiobjective integer linear programming into the heuristic realm. We focus on binary decision diagrams (BDDs) which first construct a graph that represents all feasible solutions to the problem and then traverse the graph to extract the Pareto frontier. Because the Pareto frontier may be exponentially large, enumerating it over the BDD can be time-consuming. We explore how restricted BDDs, which have already been shown to be effective as heuristics for single-objective problems, can be adapted to multiobjective optimization through the use of machine learning (ML). MORBDD, our ML-based BDD sparsifier, first trains a binary classifier to eliminate BDD nodes that are unlikely to contribute to Pareto solutions, then post-processes the sparse BDD to ensure its connectivity via optimization. Experimental results on multiobjective knapsack problems show that MORBDD is highly effective at producing very small restricted BDDs with excellent approximation quality, outperforming width-limited restricted BDDs and the well-known evolutionary algorithm NSGA-II.
[ { "version": "v1", "created": "Mon, 4 Mar 2024 21:04:54 GMT" } ]
1,709,683,200,000
[ [ "Patel", "Rahul", "" ], [ "Khalil", "Elias B.", "" ], [ "Bergman", "David", "" ] ]
2403.02610
Ruck Thawonmas
Pittawat Taveekitworachai, Febri Abdullah, Mury F. Dewantoro, Yi Xia, Pratch Suntichaikul, Ruck Thawonmas, Julian Togelius, Jochen Renz
ChatGPT4PCG 2 Competition: Prompt Engineering for Science Birds Level Generation
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents the second ChatGPT4PCG competition at the 2024 IEEE Conference on Games. In this edition of the competition, we follow the first edition, but make several improvements and changes. We introduce a new evaluation metric along with allowing a more flexible format for participants' submissions and making several improvements to the evaluation pipeline. Continuing from the first edition, we aim to foster and explore the realm of prompt engineering (PE) for procedural content generation (PCG). While the first competition saw success, it was hindered by various limitations; we aim to mitigate these limitations in this edition. We introduce diversity as a new metric to discourage submissions aimed at producing repetitive structures. Furthermore, we allow submission of a Python program instead of a prompt text file for greater flexibility in implementing advanced PE approaches, which may require control flow, including conditions and iterations. We also make several improvements to the evaluation pipeline with a better classifier for similarity evaluation and better-performing function signatures. We thoroughly evaluate the effectiveness of the new metric and the improved classifier. Additionally, we perform an ablation study to select a function signature to instruct ChatGPT for level generation. Finally, we provide implementation examples of various PE techniques in Python and evaluate their preliminary performance. We hope this competition serves as a resource and platform for learning about PE and PCG in general.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 02:58:57 GMT" } ]
1,709,683,200,000
[ [ "Taveekitworachai", "Pittawat", "" ], [ "Abdullah", "Febri", "" ], [ "Dewantoro", "Mury F.", "" ], [ "Xia", "Yi", "" ], [ "Suntichaikul", "Pratch", "" ], [ "Thawonmas", "Ruck", "" ], [ "Togelius", "Julian", "" ], [ "Renz", "Jochen", "" ] ]
2403.02635
Ke Zhang
Ke Zhang, DanDan Zhu, Qiuhan Xu, Hao Zhou and Ce Zheng
PPS-QMIX: Periodically Parameter Sharing for Accelerating Convergence of Multi-Agent Reinforcement Learning
10 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training in multi-agent reinforcement learning (MARL) is a time-consuming process caused by the distribution shift of each agent. One drawback is that the strategy of each agent in MARL is learned independently even though the agents actually cooperate. Thus, a central issue in multi-agent reinforcement learning is how to efficiently accelerate the training process. To address this problem, current research leverages a centralized function (CF) across multiple agents to learn each agent's contribution to the team reward. However, CF-based methods introduce joint error from other agents into the estimation of the value network. Inspired by federated learning, we propose three simple novel approaches, called Average Periodically Parameter Sharing (A-PPS), Reward-Scalability Periodically Parameter Sharing (RS-PPS), and Partial Personalized Periodically Parameter Sharing (PP-PPS), to accelerate the training of MARL. Agents share their Q-value networks periodically during the training process. Agents that have the same identity adapt the collected reward for scalability and update only part of the neural network parameters during each period. We apply our approaches to the classical MARL method QMIX and evaluate them on various tasks in the StarCraft Multi-Agent Challenge (SMAC) environment. Numerical experiments yield substantial improvements, with an average gain of 10\%-30\%, and enable QMIX to win tasks that it otherwise cannot. Our code can be downloaded from https://github.com/ColaZhang22/PPS-QMIX
[ { "version": "v1", "created": "Tue, 5 Mar 2024 03:59:01 GMT" } ]
1,709,683,200,000
[ [ "Zhang", "Ke", "" ], [ "Zhu", "DanDan", "" ], [ "Xu", "Qiuhan", "" ], [ "Zhou", "Hao", "" ], [ "Zheng", "Ce", "" ] ]
2403.02719
Yu Zhao
Yanbei Liu, Yu Zhao, Xiao Wang, Lei Geng and Zhitao Xiao
Multi-Scale Subgraph Contrastive Learning
The 32nd International Joint Conference on Artificial Intelligence (IJCAI-2023)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph-level contrastive learning, which aims to learn a representation for each graph by contrasting two augmented versions of it, has attracted considerable attention. Previous studies usually simply assume that a graph and its augmented graph form a positive pair, and otherwise a negative pair. However, it is well known that graph structure is always complex and multi-scale, which gives rise to a fundamental question: after graph augmentation, will the previous assumption still hold in reality? Through an experimental analysis, we discover that the semantic information of an augmented graph structure may not be consistent with that of the original graph structure, and that whether two augmented graphs form a positive or negative pair is highly related to the multi-scale structures. Based on this finding, we propose a multi-scale subgraph contrastive learning architecture which is able to characterize fine-grained semantic information. Specifically, we generate global and local views at different scales based on subgraph sampling, and construct multiple contrastive relationships according to their semantic associations to provide richer self-supervised signals. Extensive experiments and parameter analyses on eight real-world graph classification datasets demonstrate the effectiveness of the proposed method.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 07:17:18 GMT" }, { "version": "v2", "created": "Thu, 11 Apr 2024 03:06:41 GMT" }, { "version": "v3", "created": "Fri, 12 Apr 2024 01:15:01 GMT" } ]
1,713,139,200,000
[ [ "Liu", "Yanbei", "" ], [ "Zhao", "Yu", "" ], [ "Wang", "Xiao", "" ], [ "Geng", "Lei", "" ], [ "Xiao", "Zhitao", "" ] ]
2403.02723
Mengmei Zhang
Mengmei Zhang, Xiao Wang, Chuan Shi, Lingjuan Lyu, Tianchi Yang, Junping Du
Minimum Topology Attacks for Graph Neural Networks
Published on WWW 2023. Proceedings of the ACM Web Conference 2023
null
10.1145/3543507.3583509
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
With the great popularity of Graph Neural Networks (GNNs), their robustness to adversarial topology attacks has received significant attention. Although many attack methods have been proposed, they mainly focus on fixed-budget attacks, aiming at finding the most adversarial perturbations within a fixed budget for a target node. However, considering the varied robustness of each node, there is an inevitable dilemma caused by the fixed budget: no successful perturbation is found when the budget is relatively small, while if it is too large, the resulting redundant perturbations hurt invisibility. To break this dilemma, we propose a new type of topology attack, named the minimum-budget topology attack, which aims to adaptively find the minimum perturbation sufficient for a successful attack on each node. To this end, we propose an attack model, named MiBTack, based on a dynamic projected gradient descent algorithm, which can effectively solve the involved non-convex constrained optimization on discrete topology. Extensive results on three GNNs and four real-world datasets show that MiBTack can successfully cause all target nodes to be misclassified with the minimum number of perturbed edges. Moreover, the obtained minimum budget can be used to measure node robustness, so we can explore the relationships between robustness, topology, and uncertainty for nodes, which is beyond what current fixed-budget topology attacks can offer.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 07:29:12 GMT" } ]
1,709,683,200,000
[ [ "Zhang", "Mengmei", "" ], [ "Wang", "Xiao", "" ], [ "Shi", "Chuan", "" ], [ "Lyu", "Lingjuan", "" ], [ "Yang", "Tianchi", "" ], [ "Du", "Junping", "" ] ]
2403.02760
Xiaonan Xu
Xiaonan Xu, Yichao Wu, Penghao Liang, Yuhang He, Han Wang
Emerging Synergies Between Large Language Models and Machine Learning in Ecommerce Recommendations
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the boom of e-commerce and web applications, recommender systems have become an important part of our daily lives, providing personalized recommendations based on users' preferences. Although deep neural networks (DNNs) have made significant progress in improving recommendation systems by modeling the interaction between users and items and incorporating their textual information, these DNN-based approaches still have limitations, such as difficulty in effectively understanding users' interests and capturing textual information; nor can they generalize to different seen/unseen recommendation scenarios or reason about their predictions. At the same time, the emergence of large language models (LLMs), represented by ChatGPT and GPT-4, has revolutionized the fields of natural language processing (NLP) and artificial intelligence (AI) due to their superior capabilities in the basic tasks of language understanding and generation, and their impressive generalization and reasoning capabilities. As a result, recent research has sought to harness the power of LLMs to improve recommendation systems. Given the rapid development of this research direction in the field of recommendation systems, there is an urgent need for a systematic review of existing LLM-driven recommendation systems that gives researchers and practitioners in related fields insight into it. More specifically, we first introduce a representative approach to learning user and item representations using LLMs as feature encoders. We then review the latest advances in LLM techniques for collaborative-filtering-enhanced recommendation systems across the three paradigms of pre-training, fine-tuning, and prompting. Finally, we offer a comprehensive discussion on the future direction of this emerging field.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 08:31:00 GMT" }, { "version": "v2", "created": "Tue, 12 Mar 2024 11:29:07 GMT" } ]
1,710,288,000,000
[ [ "Xu", "Xiaonan", "" ], [ "Wu", "Yichao", "" ], [ "Liang", "Penghao", "" ], [ "He", "Yuhang", "" ], [ "Wang", "Han", "" ] ]
2403.02783
Sebastien Verel
S\'ebastien Verel (LISIC), Sarah Thomson, Omar Rifki (LISIC)
Where the Really Hard Quadratic Assignment Problems Are: the QAP-SAT instances
null
Evolutionary Computation in Combinatorial Optimization Conference (evoCOP), Apr 2024, Aberystwyth, United Kingdom
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Quadratic Assignment Problem (QAP) is one of the major domains in the field of evolutionary computation, and more widely in combinatorial optimization. This paper studies the phase transition of the QAP, which can be described as a dramatic change in the problem's computational complexity and satisfiability, within a narrow range of the problem parameters. To approach this phenomenon, we introduce a new QAP-SAT design of the initial problem based on submodularity to capture its difficulty with new features. This decomposition is studied experimentally using branch-and-bound and tabu search solvers. A phase transition parameter is then proposed. The critical parameter of phase transition satisfaction and that of the solving effort are shown to be highly correlated for tabu search, thus allowing the prediction of difficult instances.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 08:56:30 GMT" } ]
1,709,683,200,000
[ [ "Verel", "Sébastien", "", "LISIC" ], [ "Thomson", "Sarah", "", "LISIC" ], [ "Rifki", "Omar", "", "LISIC" ] ]
2403.02820
Buda Baji\'c
Buda Baji\'c, Johannes A. J. Huber, Benedikt Neyses, Linus Olofsson, Ozan \"Oktem
Reconstruction for Sparse View Tomography of Long Objects Applied to Imaging in the Wood Industry
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the wood industry, logs are commonly quality screened by discrete X-ray scans on a moving conveyor belt from a few source positions. Typically, two-dimensional (2D) slice-wise measurements are obtained by a sequential scanning geometry. Each 2D slice alone does not carry sufficient information for a three-dimensional tomographic reconstruction in which biological features of interest in the log are well preserved. In the present work, we propose a learned iterative reconstruction method based on the Learned Primal-Dual neural network, suited for sequential scanning geometries. Our method accumulates information between neighbouring slices, instead of only accounting for single slices during reconstruction. Our quantitative and qualitative evaluations with as few as five source positions show that our method yields reconstructions of logs that are sufficiently accurate to identify biological features like knots (branches), heartwood and sapwood.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 09:44:19 GMT" } ]
1,709,683,200,000
[ [ "Bajić", "Buda", "" ], [ "Huber", "Johannes A. J.", "" ], [ "Neyses", "Benedikt", "" ], [ "Olofsson", "Linus", "" ], [ "Öktem", "Ozan", "" ] ]
2403.02899
Zhekai Du
Zhekai Du, Xinyao Li, Fengling Li, Ke Lu, Lei Zhu, Jingjing Li
Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Conventional Unsupervised Domain Adaptation (UDA) strives to minimize distribution discrepancy between domains, which neglects to harness rich semantics from data and struggles to handle complex domain shifts. A promising technique is to leverage the knowledge of large-scale pre-trained vision-language models for more guided adaptation. Despite some endeavors, current methods often learn textual prompts to embed domain semantics for source and target domains separately and perform classification within each domain, limiting cross-domain knowledge transfer. Moreover, prompting only the language branch lacks flexibility to adapt both modalities dynamically. To bridge this gap, we propose Domain-Agnostic Mutual Prompting (DAMP) to exploit domain-invariant semantics by mutually aligning visual and textual embeddings. Specifically, the image contextual information is utilized to prompt the language branch in a domain-agnostic and instance-conditioned way. Meanwhile, visual prompts are imposed based on the domain-agnostic textual prompt to elicit domain-invariant visual embeddings. These two branches of prompts are learned mutually with a cross-attention module and regularized with a semantic-consistency loss and an instance-discrimination contrastive loss. Experiments on three UDA benchmarks demonstrate the superiority of DAMP over state-of-the-art approaches.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 12:06:48 GMT" } ]
1,709,683,200,000
[ [ "Du", "Zhekai", "" ], [ "Li", "Xinyao", "" ], [ "Li", "Fengling", "" ], [ "Lu", "Ke", "" ], [ "Zhu", "Lei", "" ], [ "Li", "Jingjing", "" ] ]
2403.02901
Hanlei Jin
Hanlei Jin, Yang Zhang, Dan Meng, Jun Wang, Jinghua Tan
A Comprehensive Survey on Process-Oriented Automatic Text Summarization with Exploration of LLM-Based Methods
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic Text Summarization (ATS), utilizing Natural Language Processing (NLP) algorithms, aims to create concise and accurate summaries, thereby significantly reducing the human effort required in processing large volumes of text. ATS has drawn considerable interest in both academic and industrial circles. Many studies have been conducted in the past to survey ATS methods; however, they generally lack practicality for real-world implementations, as they often categorize previous methods from a theoretical standpoint. Moreover, the advent of Large Language Models (LLMs) has altered conventional ATS methods. In this survey, we aim to 1) provide a comprehensive overview of ATS from a ``Process-Oriented Schema'' perspective, which is best aligned with real-world implementations; 2) comprehensively review the latest LLM-based ATS works; and 3) deliver an up-to-date survey of ATS, bridging the two-year gap in the literature. To the best of our knowledge, this is the first survey to specifically investigate LLM-based ATS methods.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 12:11:07 GMT" } ]
1,709,683,200,000
[ [ "Jin", "Hanlei", "" ], [ "Zhang", "Yang", "" ], [ "Meng", "Dan", "" ], [ "Wang", "Jun", "" ], [ "Tan", "Jinghua", "" ] ]
2403.02914
Hao Wu
Hao Wu, Haomin Wen, Guibin Zhang, Yutong Xia, Kai Wang, Yuxuan Liang, Yu Zheng, Kun Wang
DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Ever-increasing sensor services, though opening a precious path and providing a deluge of earth-system data for deep-learning-oriented earth science, sadly introduce a daunting obstacle to their industrial-level deployment. Concretely, earth-science systems rely heavily on the extensive deployment of sensors; however, data collection from sensors is constrained by complex geographical and social factors, making it challenging to achieve comprehensive coverage and uniform deployment. To alleviate this obstacle, traditional approaches to sensor deployment utilize specific algorithms to design and deploy sensors. These methods dynamically adjust the activation times of sensors to optimize the detection process across each sub-region. Regrettably, the activation strategy is generally formulated based on historical observations and geographic characteristics, which makes the methods and resultant models neither simple nor practical. Worse still, the complex technical design may ultimately lead to a model with weak generalizability. In this paper, we introduce for the first time the concept of dynamic sparse training for spatio-temporal data, committing to adaptively and dynamically filtering important sensor distributions. To our knowledge, this is the first proposal (termed DynST) of an industry-level deployment-optimization concept at the data level. However, due to the existence of the temporal dimension, pruning of spatio-temporal data may lead to conflicts at different timestamps. To achieve this goal, we employ a dynamic merge technique, along with an ingenious dimensional mapping, to mitigate potential impacts caused by the temporal aspect. During training, DynST utilizes iterative pruning and sparse training, repeatedly identifying and dynamically removing the sensor perception areas that contribute least to future predictions.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 12:31:24 GMT" } ]
1,709,683,200,000
[ [ "Wu", "Hao", "" ], [ "Wen", "Haomin", "" ], [ "Zhang", "Guibin", "" ], [ "Xia", "Yutong", "" ], [ "Wang", "Kai", "" ], [ "Liang", "Yuxuan", "" ], [ "Zheng", "Yu", "" ], [ "Wang", "Kun", "" ] ]
2403.02962
Zheng Li
Zheng Li and Xiang Chen and Xiaojun Wan
WikiTableEdit: A Benchmark for Table Editing by Natural Language Instruction
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Tabular data, as a crucial form of data representation, exists in diverse formats on the Web. When confronted with complex and irregular tables, manual modification becomes a laborious task. This paper investigates the performance of Large Language Models (LLMs) in the context of table editing tasks. Existing research mainly focuses on regular-shaped tables, wherein instructions are used to generate code in SQL, Python, or Excel Office-script for manipulating the tables. Nevertheless, editing tables with irregular structures, particularly those containing merged cells spanning multiple rows, poses a challenge when using code. To address this, we introduce the WikiTableEdit dataset. Leveraging 26,531 tables from the WikiSQL dataset, we automatically generate natural language instructions for six distinct basic operations and the corresponding outcomes, resulting in over 200,000 instances. Subsequently, we evaluate several representative large language models on the WikiTableEdit dataset to demonstrate the challenge of this task. The dataset will be released to the community to promote related research.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 13:33:12 GMT" } ]
1,709,683,200,000
[ [ "Li", "Zheng", "" ], [ "Chen", "Xiang", "" ], [ "Wan", "Xiaojun", "" ] ]
2403.02993
Wenyang Hu
Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiangqiang Lin, Zhongxiang Dai, See-Kiong Ng, Bryan Kian Hsiang Low
Localized Zeroth-Order Prompt Optimization
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The efficacy of large language models (LLMs) in understanding and generating natural language has sparked wide interest in developing prompt-based methods to harness the power of black-box LLMs. Existing methodologies usually prioritize a global search for the global optimum, which however performs poorly in certain tasks. This motivates us to re-think the necessity of finding a global optimum in prompt optimization. To answer this, we conduct a thorough empirical study on prompt optimization and draw two major insights. Contrasting with the rarity of the global optimum, local optima are usually prevalent and perform well, which can make them more worthwhile targets for efficient prompt optimization (Insight I). The choice of the input domain, covering both the generation and the representation of prompts, affects the identification of well-performing local optima (Insight II). Inspired by these insights, we propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO), which incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search for well-performing local optima in prompt optimization. Remarkably, ZOPO outperforms existing baselines in terms of both optimization performance and query efficiency, which we demonstrate through extensive experiments.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 14:18:15 GMT" } ]
1,709,683,200,000
[ [ "Hu", "Wenyang", "" ], [ "Shu", "Yao", "" ], [ "Yu", "Zongmin", "" ], [ "Wu", "Zhaoxuan", "" ], [ "Lin", "Xiangqiang", "" ], [ "Dai", "Zhongxiang", "" ], [ "Ng", "See-Kiong", "" ], [ "Low", "Bryan Kian Hsiang", "" ] ]
2403.03008
Hasan Abu-Rasheed
Hasan Abu-Rasheed, Christian Weber, Madjid Fathi
Knowledge Graphs as Context Sources for LLM-Based Explanations of Learning Recommendations
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In the era of personalized education, the provision of comprehensible explanations for learning recommendations is of great value for enhancing the learner's understanding of and engagement with the recommended learning content. Large language models (LLMs) and generative AI in general have recently opened new doors for generating human-like explanations for, and alongside, learning recommendations. However, their precision is still far from acceptable in a sensitive field like education. To harness the abilities of LLMs while still ensuring a high level of precision towards the intent of the learners, this paper proposes an approach that utilizes knowledge graphs (KG) as a source of factual context for LLM prompts, reducing the risk of model hallucinations and safeguarding against wrong or imprecise information, while maintaining an application-intended learning context. We utilize the semantic relations in the knowledge graph to offer curated knowledge about learning recommendations. With domain experts in the loop, we design the explanation as a textual template, which is filled and completed by the LLM. Domain experts were integrated in the prompt engineering phase as part of a study, to ensure that explanations include information that is relevant to the learner. We evaluate our approach quantitatively using Rouge-N and Rouge-L measures, as well as qualitatively with experts and learners. Our results show enhanced recall and precision of the generated explanations compared to those generated solely by the GPT model, with a greatly reduced risk of generating imprecise information in the final learning explanation.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 14:41:12 GMT" } ]
1,709,683,200,000
[ [ "Abu-Rasheed", "Hasan", "" ], [ "Weber", "Christian", "" ], [ "Fathi", "Madjid", "" ] ]
2403.03017
Haochen Shi
Haochen Shi, Zhiyuan Sun, Xingdi Yuan, Marc-Alexandre C\^ot\'e, Bang Liu
OPEx: A Component-Wise Analysis of LLM-Centric Agents in Embodied Instruction Following
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embodied Instruction Following (EIF) is a crucial task in embodied learning, requiring agents to interact with their environment through egocentric observations to fulfill natural language instructions. Recent advancements have seen a surge in employing large language models (LLMs) within a framework-centric approach to enhance performance in embodied learning tasks, including EIF. Despite these efforts, there exists a lack of a unified understanding regarding the impact of various components, ranging from visual perception to action execution, on task performance. To address this gap, we introduce OPEx, a comprehensive framework that delineates the core components essential for solving embodied learning tasks: Observer, Planner, and Executor. Through extensive evaluations, we provide a deep analysis of how each component influences EIF task performance. Furthermore, we innovate within this space by deploying a multi-agent dialogue strategy on a TextWorld counterpart, further enhancing task performance. Our findings reveal that LLM-centric design markedly improves EIF outcomes, identify visual perception and low-level action execution as critical bottlenecks, and demonstrate that augmenting LLMs with a multi-agent framework further elevates performance.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 14:53:53 GMT" } ]
1,709,683,200,000
[ [ "Shi", "Haochen", "" ], [ "Sun", "Zhiyuan", "" ], [ "Yuan", "Xingdi", "" ], [ "Côté", "Marc-Alexandre", "" ], [ "Liu", "Bang", "" ] ]
2403.03165
Jingxiao Tian
Yaqian Qi, Yuan Feng, Xiangxiang Wang, Hanzhe Li, Jingxiao Tian
Leveraging Federated Learning and Edge Computing for Recommendation Systems within Cloud Computing Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To enable large-scale and efficient deployment of artificial intelligence (AI), the combination of AI and edge computing has spawned Edge Intelligence, which leverages the computing and communication capabilities of end devices and edge servers to process data closer to where it is generated. A key technology for edge intelligence is the privacy-preserving machine learning paradigm known as Federated Learning (FL), which enables data owners to train models without having to transfer raw data to third-party servers. However, FL networks are expected to involve thousands of heterogeneous distributed devices; as a result, communication efficiency remains a key bottleneck. To reduce node failures and device exits, a Hierarchical Federated Learning (HFL) framework is proposed, in which a designated cluster leader supports the data owners through intermediate model aggregation. By improving edge-server resource utilization, this framework can effectively compensate for limited cache capacity. To mitigate the impact of soft clicks on the quality of user experience (QoE), the authors model user QoE as a comprehensive system cost. To solve the formulated problem, the authors propose a decentralized caching algorithm combining federated deep reinforcement learning (DRL) and federated learning (FL), where multiple agents learn and make decisions independently.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 17:58:26 GMT" }, { "version": "v2", "created": "Wed, 13 Mar 2024 05:46:39 GMT" } ]
1,710,374,400,000
[ [ "Qi", "Yaqian", "" ], [ "Feng", "Yuan", "" ], [ "Wang", "Xiangxiang", "" ], [ "Li", "Hanzhe", "" ], [ "Tian", "Jingxiao", "" ] ]
2403.03176
Michael Katz
Michael Katz, Junkyu Lee, Shirin Sohrabi
Unifying and Certifying Top-Quality Planning
To appear at ICAPS 2024
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing utilization of planning tools in practical scenarios has sparked an interest in generating multiple high-quality plans. Consequently, a range of computational problems under the general umbrella of top-quality planning were introduced over a short time period, each with its own definition. In this work, we show that the existing definitions can be unified into one, based on a dominance relation. The different computational problems, therefore, simply correspond to different dominance relations. Given the unified definition, we can now certify the top-quality of the solutions, leveraging existing certification of unsolvability and optimality. We show that task transformations found in the existing literature can be employed for the efficient certification of various top-quality planning problems and propose a novel transformation to efficiently certify loopless top-quality planning.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 18:13:18 GMT" } ]
1,709,683,200,000
[ [ "Katz", "Michael", "" ], [ "Lee", "Junkyu", "" ], [ "Sohrabi", "Shirin", "" ] ]
2403.03186
Zongqing Lu
Weihao Tan, Ziluo Ding, Wentao Zhang, Boyu Li, Bohan Zhou, Junpeng Yue, Haochong Xia, Jiechuan Jiang, Longtao Zheng, Xinrun Xu, Yifei Bi, Pengjie Gu, Xinrun Wang, B\"orje F. Karlsson, Bo An, Zongqing Lu
Towards General Computer Control: A Multimodal Agent for Red Dead Redemption II as a Case Study
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the success in specific tasks and scenarios, existing foundation agents, empowered by large models (LMs) and advanced tools, still cannot generalize to different scenarios, mainly due to dramatic differences in the observations and actions across scenarios. In this work, we propose the General Computer Control (GCC) setting: building foundation agents that can master any computer task by taking only screen images (and possibly audio) of the computer as input, and producing keyboard and mouse operations as output, similar to human-computer interaction. The main challenges of achieving GCC are: 1) the multimodal observations for decision-making, 2) the requirements of accurate control of keyboard and mouse, 3) the need for long-term memory and reasoning, and 4) the abilities of efficient exploration and self-improvement. To target GCC, we introduce Cradle, an agent framework with six main modules, including: 1) information gathering to extract multi-modality information, 2) self-reflection to rethink past experiences, 3) task inference to choose the best next task, 4) skill curation for generating and updating relevant skills for given tasks, 5) action planning to generate specific operations for keyboard and mouse control, and 6) memory for storage and retrieval of past experiences and known skills. To demonstrate the capabilities of generalization and self-improvement of Cradle, we deploy it in the complex AAA game Red Dead Redemption II, serving as a preliminary attempt towards GCC with a challenging target. To our best knowledge, our work is the first to enable LMM-based agents to follow the main storyline and finish real missions in complex AAA games, with minimal reliance on prior knowledge or resources. The project website is at https://baai-agents.github.io/Cradle/.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 18:22:29 GMT" }, { "version": "v2", "created": "Thu, 7 Mar 2024 14:41:56 GMT" } ]
1,709,856,000,000
[ [ "Tan", "Weihao", "" ], [ "Ding", "Ziluo", "" ], [ "Zhang", "Wentao", "" ], [ "Li", "Boyu", "" ], [ "Zhou", "Bohan", "" ], [ "Yue", "Junpeng", "" ], [ "Xia", "Haochong", "" ], [ "Jiang", "Jiechuan", "" ], [ "Zheng", "Longtao", "" ], [ "Xu", "Xinrun", "" ], [ "Bi", "Yifei", "" ], [ "Gu", "Pengjie", "" ], [ "Wang", "Xinrun", "" ], [ "Karlsson", "Börje F.", "" ], [ "An", "Bo", "" ], [ "Lu", "Zongqing", "" ] ]
2403.03203
Marjan Alirezaie
Savitha Sam Abraham and Marjan Alirezaie and Luc De Raedt
CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments
17 pages, 10 images, Accepted at LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The integration of learning and reasoning is high on the research agenda in AI. Nevertheless, little attention has been paid to using existing background knowledge to reason about partially observed scenes and answer questions about them. Yet we as humans use such knowledge frequently to infer plausible answers to visual questions (by eliminating all inconsistent ones). Such knowledge often comes in the form of constraints about objects, and it tends to be highly domain- or environment-specific. We contribute a novel benchmark called CLEVR-POC for reasoning-intensive visual question answering (VQA) in partially observable environments under constraints. In CLEVR-POC, knowledge in the form of logical constraints needs to be leveraged to generate plausible answers to questions about a hidden object in a given partial scene. For instance, if one has the knowledge that all cups are colored either red, green or blue and that there is only one green cup, it becomes possible to deduce the color of an occluded cup as either red or blue, provided that all other cups, including the green one, are observed. Through experiments, we observe that the low performance of pre-trained vision-language models like CLIP (~ 22%) and a large language model (LLM) like GPT-4 (~ 46%) on CLEVR-POC ascertains the necessity for frameworks that can handle reasoning-intensive tasks where environment-specific background knowledge is available and crucial. Furthermore, our demonstration illustrates that a neuro-symbolic model, which integrates an LLM like GPT-4 with a visual perception network and a formal logical reasoner, exhibits exceptional performance on CLEVR-POC.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 18:41:37 GMT" } ]
1,709,683,200,000
[ [ "Abraham", "Savitha Sam", "" ], [ "Alirezaie", "Marjan", "" ], [ "De Raedt", "Luc", "" ] ]
2403.03288
Jianqiu Zhang
Jianqiu Zhang
Should We Fear Large Language Models? A Structural Analysis of the Human Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens of Heidegger's Philosophy
39 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In the rapidly evolving field of Large Language Models (LLMs), there is a critical need to thoroughly analyze their capabilities and risks. Central to our investigation are two novel elements. The first is the innovative parallel between the statistical patterns of word relationships within LLMs and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand," which encapsulate the utilitarian and scientific attitudes humans employ in interacting with the world. This comparison lays the groundwork for positioning LLMs as the digital counterpart to the Faculty of Verbal Knowledge, shedding light on their capacity to emulate certain facets of human reasoning. The second is a structural analysis of human reasoning, viewed through Heidegger's notion of truth as "unconcealment." This foundational principle enables us to map out the inputs and outputs of the reasoning system and divide reasoning into four distinct categories. Respective cognitive faculties are delineated, allowing us to place LLMs within the broader schema of human reasoning, thus clarifying their strengths and inherent limitations. Our findings reveal that while LLMs possess the capability for Direct Explicative Reasoning and Pseudo Rational Reasoning, they fall short in authentic rational reasoning and have no creative reasoning capabilities, due to the current lack of many analogous AI models such as the Faculty of Judgement. The potential and risks of LLMs when they are augmented with other AI technologies are also evaluated. The results indicate that although LLMs have achieved proficiency in some reasoning abilities, the aspiration to match or exceed human intellectual capabilities is yet unattained. This research not only enriches our comprehension of LLMs but also propels forward the discourse on AI's potential and its bounds, paving the way for future explorations into AI's evolving landscape.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 19:40:53 GMT" } ]
1,709,769,600,000
[ [ "Zhang", "Jianqiiu", "" ] ]
2403.03293
Rrubaa Panchendrarajan
Anjalee De Silva, Janaka L. Wijekoon, Rashini Liyanarachchi, Rrubaa Panchendrarajan, Weranga Rajapaksha
AI Insights: A Case Study on Utilizing ChatGPT Intelligence for Research Paper Analysis
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses the effectiveness of leveraging Chatbot: Generative Pre-trained Transformer (ChatGPT) versions 3.5 and 4 for analyzing research papers toward the effective writing of scientific literature surveys. The study selected the \textit{Application of Artificial Intelligence in Breast Cancer Treatment} as the research topic. Research papers related to this topic were collected from three major publication databases: Google Scholar, Pubmed, and Scopus. ChatGPT models were used to identify the category, scope, and relevant information from the research papers for automatic identification of relevant papers related to Breast Cancer Treatment (BCT), organization of papers according to scope, and identification of key information for survey paper writing. Evaluations performed using ground truth data annotated by subject experts reveal that GPT-4 achieves 77.3\% accuracy in identifying the research paper categories, and 50\% of the papers were correctly identified by GPT-4 for their scopes. Further, the results demonstrate that GPT-4 can generate reasons for its decisions with an average of 27\% new words, and 67\% of the reasons given by the model were completely agreeable to the subject experts.
[ { "version": "v1", "created": "Tue, 5 Mar 2024 19:47:57 GMT" } ]
1,709,769,600,000
[ [ "De Silva", "Anjalee", "" ], [ "Wijekoon", "Janaka L.", "" ], [ "Liyanarachchi", "Rashini", "" ], [ "Panchendrarajan", "Rrubaa", "" ], [ "Rajapaksha", "Weranga", "" ] ]
2403.03382
Guangyao Chen
Guangyao Chen, Peixi Peng, Yangru Huang, Mengyue Geng, Yonghong Tian
Adaptive Discovering and Merging for Incremental Novel Class Discovery
AAAI 2024. arXiv admin note: text overlap with arXiv:2207.08605 by other authors
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One important desideratum of lifelong learning is to discover novel classes from unlabelled data in a continuous manner. The central challenge is twofold: discovering and learning novel classes while mitigating the issue of catastrophic forgetting of established knowledge. To this end, we introduce a new paradigm called Adaptive Discovering and Merging (ADM) to discover novel categories adaptively in the incremental stage and integrate novel knowledge into the model without affecting the original knowledge. To discover novel classes adaptively, we decouple representation learning and novel class discovery, and use Triple Comparison (TC) and Probability Regularization (PR) to constrain the probability discrepancy and diversity for adaptive category assignment. To merge the learned novel knowledge adaptively, we propose a hybrid structure with base and novel branches, named Adaptive Model Merging (AMM), which reduces the interference of the novel branch on the old classes to preserve the previous knowledge, and merges the novel branch into the base model without performance loss or parameter growth. Extensive experiments on several datasets show that ADM significantly outperforms existing class-incremental Novel Class Discovery (class-iNCD) approaches. Moreover, our AMM also benefits the class-incremental Learning (class-IL) task by alleviating the catastrophic forgetting problem.
[ { "version": "v1", "created": "Wed, 6 Mar 2024 00:17:03 GMT" } ]
1,709,769,600,000
[ [ "Chen", "Guangyao", "" ], [ "Peng", "Peixi", "" ], [ "Huang", "Yangru", "" ], [ "Geng", "Mengyue", "" ], [ "Tian", "Yonghong", "" ] ]