Dataset columns (type and observed value-length range):
corpus_id: string (lengths 7-12)
paper_id: string (lengths 9-16)
title: string (lengths 1-261)
abstract: string (lengths 70-4.02k)
source: string (1 distinct value)
bibtex: string (lengths 208-20.9k)
citation_key: string (lengths 6-100)
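For working with these records programmatically, the following is a minimal sketch of loading and iterating the corpus with the Hugging Face `datasets` library. It assumes the corpus is published on the Hub; the repository id `user/arxiv-citation-corpus` is a placeholder, not the actual name.

```python
# Minimal sketch: iterating over records with the columns listed above.
# Assumption: the corpus is hosted on the Hugging Face Hub; the repo id below is a placeholder.
from datasets import load_dataset

ds = load_dataset("user/arxiv-citation-corpus", split="train")  # hypothetical repo id

for record in ds.select(range(3)):  # peek at the first few rows
    # Each record carries the seven string fields described in the schema above.
    print(record["corpus_id"], record["paper_id"], record["citation_key"])
    print(record["title"])
    # Abstracts are wrapped in <|reference_start|> ... <|reference_end|> markers (Python 3.9+ string methods).
    abstract = record["abstract"]
    abstract = abstract.removeprefix("<|reference_start|>").removesuffix("<|reference_end|>")
    print(abstract[:200], "...")
    print(record["bibtex"])
```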
arxiv-665901
2410.03890
Safe Reference Tracking and Collision Avoidance for Taxiing Aircraft Using an MPC-CBF Framework
<|reference_start|>Safe Reference Tracking and Collision Avoidance for Taxiing Aircraft Using an MPC-CBF Framework: In this paper, we develop a framework for the automatic taxiing of aircraft between hangar and take-off given a graph-based model of an airport. We implement a high-level path-planning algorithm that models taxiway intersections as nodes in an undirected graph, algorithmically constructs a directed graph according to the physical limitations of the aircraft, and finds the shortest valid taxi path through the directed graph using Dijkstra's algorithm. We then use this shortest path to construct a reference trajectory for the aircraft to follow that considers the turning capabilities of a given aircraft. Using high-order control barrier functions (HOCBFs), we construct safety conditions for multi-obstacle avoidance and safe reference tracking for simple 2D unicycle dynamics with acceleration control inputs. We then use these safety conditions to design an MPC-CBF framework that tracks the reference trajectory while adhering to the safety constraints. We compare the performance of our MPC-CBF controller with a PID-CBF control method via simulations.<|reference_end|>
arxiv
@article{butler2024safe, title={Safe Reference Tracking and Collision Avoidance for Taxiing Aircraft Using an MPC-CBF Framework}, author={Brooks A. Butler, Zarif Cabrera, Andy Nguyen, and Philip E. Par\'e}, journal={arXiv preprint arXiv:2410.03890}, year={2024}, archivePrefix={arXiv}, eprint={2410.03890}, primaryClass={eess.SY cs.RO cs.SY} }
butler2024safe
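The bibtex and citation_key fields pair up: the key inside each @article entry matches the record's citation_key, so papers can be cited (e.g., \cite{butler2024safe}) once the entries are written to a bibliography file. A minimal sketch, assuming `ds` was loaded as in the previous snippet and using the placeholder filename "corpus.bib":

```python
# Minimal sketch: collect the bibtex fields into one .bib file for use with LaTeX/BibTeX.
# Assumes `ds` is the dataset loaded in the earlier snippet; "corpus.bib" is a placeholder filename.
with open("corpus.bib", "w", encoding="utf-8") as bib_file:
    for record in ds:
        bib_file.write(record["bibtex"].strip() + "\n\n")
```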
arxiv-665902
2410.03892
Towards Cost Sensitive Decision Making
<|reference_start|>Towards Cost Sensitive Decision Making: Many real-world situations allow for the acquisition of additional relevant information when making decisions with limited or uncertain data. However, traditional RL approaches either require all features to be acquired beforehand (e.g. in an MDP) or regard part of them as missing data that cannot be acquired (e.g. in a POMDP). In this work, we consider RL models that may actively acquire features from the environment to improve the decision quality and certainty, while automatically balancing the cost of the feature acquisition process and the reward of the task decision process. We propose the Active-Acquisition POMDP and identify two types of the acquisition process for different application domains. In order to assist the agent in the actively-acquired partially-observed environment and alleviate the exploration-exploitation dilemma, we develop a model-based approach, where a deep generative model is utilized to capture the dependencies of the features and impute the unobserved features. The imputations essentially represent the beliefs of the agent. Equipped with the dynamics model, we develop hierarchical RL algorithms to resolve both types of AA-POMDPs. Empirical results demonstrate that our approach achieves considerably better performance than existing POMDP-RL solutions.<|reference_end|>
arxiv
@article{li2024towards, title={Towards Cost Sensitive Decision Making}, author={Yang Li, Junier Oliva}, journal={arXiv preprint arXiv:2410.03892}, year={2024}, archivePrefix={arXiv}, eprint={2410.03892}, primaryClass={cs.LG cs.AI} }
li2024towards
arxiv-665903
2410.03893
Human-aligned Chess with a Bit of Search
<|reference_start|>Human-aligned Chess with a Bit of Search: Chess has long been a testbed for AI's quest to match human intelligence, and in recent years, chess AI systems have surpassed the strongest humans at the game. However, these systems are not human-aligned; they are unable to match the skill levels of all human partners or model human-like behaviors beyond piece movement. In this paper, we introduce Allie, a chess-playing AI designed to bridge the gap between artificial and human intelligence in this classic game. Allie is trained on log sequences of real chess games to model the behaviors of human chess players across the skill spectrum, including non-move behaviors such as pondering times and resignations. In offline evaluations, we find that Allie exhibits humanlike behavior: it outperforms the existing state-of-the-art in human chess move prediction and "ponders" at critical positions. The model learns to reliably assign reward at each game state, which can be used at inference as a reward function in a novel time-adaptive Monte-Carlo tree search (MCTS) procedure, where the amount of search depends on how long humans would think in the same positions. Adaptive search enables remarkable skill calibration; in a large-scale online evaluation against players with ratings from 1000 to 2600 Elo, our adaptive search method leads to a skill gap of only 49 Elo on average, substantially outperforming search-free and standard MCTS baselines. Against grandmaster-level (2500 Elo) opponents, Allie with adaptive search exhibits the strength of a fellow grandmaster, all while learning exclusively from humans.<|reference_end|>
arxiv
@article{zhang2024human-aligned, title={Human-aligned Chess with a Bit of Search}, author={Yiming Zhang, Athul Paul Jacob, Vivian Lai, Daniel Fried, Daphne Ippolito}, journal={arXiv preprint arXiv:2410.03893}, year={2024}, archivePrefix={arXiv}, eprint={2410.03893}, primaryClass={cs.LG cs.AI} }
zhang2024human-aligned
arxiv-665904
2410.03894
A Machine Learning-Based Reference Governor for Nonlinear Systems With Application to Automotive Fuel Cells
<|reference_start|>A Machine Learning-Based Reference Governor for Nonlinear Systems With Application to Automotive Fuel Cells: The prediction-based nonlinear reference governor (PRG) is an add-on algorithm to enforce constraints on pre-stabilized nonlinear systems by modifying, whenever necessary, the reference signal. The implementation of PRG carries a heavy computational burden, as it may require multiple numerical simulations of the plant model at each sample time. To this end, this paper proposes an alternative approach based on machine learning, where we first use a regression neural network (NN) to approximate the input-output map of the PRG from a set of training data. During the real-time operation, at each sample time, we use the trained NN to compute a nominal reference command, which may not be constraint admissible due to training errors and limited data. We adopt a novel sensitivity-based approach to minimally adjust the nominal reference while ensuring constraint enforcement. We thus refer to the resulting control strategy as the modified neural network reference governor (MNN-RG), which is significantly more computationally efficient than the PRG. The computational and theoretical properties of MNN-RG are presented. Finally, the effectiveness and limitations of the proposed method are studied by applying it as a load governor for constraint management in automotive fuel cell systems through simulation-based case studies.<|reference_end|>
arxiv
@article{ayubirad2024a, title={A Machine Learning-Based Reference Governor for Nonlinear Systems With Application to Automotive Fuel Cells}, author={Mostafaali Ayubirad, Hamid R. Ossareh}, journal={arXiv preprint arXiv:2410.03894}, year={2024}, archivePrefix={arXiv}, eprint={2410.03894}, primaryClass={eess.SY cs.SY} }
ayubirad2024a
arxiv-665905
2410.03895
Demystifying Technology for Policymaking: Exploring the Rideshare Context and Data Initiative Opportunities to Advance Tech Policymaking Efforts
<|reference_start|>Demystifying Technology for Policymaking: Exploring the Rideshare Context and Data Initiative Opportunities to Advance Tech Policymaking Efforts: In the face of rapidly advancing technologies, evidence of harms they can exacerbate, and insufficient policy to ensure accountability from tech companies, what are HCI opportunities for advancing policymaking of technology? In this paper, we explore challenges and opportunities for tech policymaking through a case study of app-based rideshare driving. We begin with background on rideshare platforms and how they operate. Next, we review literature on algorithmic management about how rideshare drivers actually experience platform features -- often to the detriment of their well-being -- and ways they respond. In light of this, researchers and advocates have called for increased worker protections, thus we turn to rideshare policy and regulation efforts in the U.S. Here, we differentiate the political strategies of platforms with those of drivers to illustrate the conflicting narratives policymakers face when trying to oversee gig work platforms. We reflect that past methods surfacing drivers' experiences may be insufficient for policymaker needs when developing oversight. To address this gap and our original inquiry -- what are HCI opportunities for advancing tech policymaking -- we briefly explore two paths forward for holding tech companies accountable in the rideshare context: (1) data transparency initiatives to enable collective auditing by workers and (2) legal frameworks for holding platforms accountable.<|reference_end|>
arxiv
@article{zhang2024demystifying, title={Demystifying Technology for Policymaking: Exploring the Rideshare Context and Data Initiative Opportunities to Advance Tech Policymaking Efforts}, author={Angie Zhang}, journal={arXiv preprint arXiv:2410.03895}, year={2024}, archivePrefix={arXiv}, eprint={2410.03895}, primaryClass={cs.HC} }
zhang2024demystifying
arxiv-665906
2410.03897
Harnessing Generative AI for Economic Insights
<|reference_start|>Harnessing Generative AI for Economic Insights: We use generative AI to extract managerial expectations about their economic outlook from over 120,000 corporate conference call transcripts. The overall measure, AI Economy Score, robustly predicts future economic indicators such as GDP growth, production, and employment, both in the short term and to 10 quarters. This predictive power is incremental to that of existing measures, including survey forecasts. Moreover, industry and firm-level measures provide valuable information about sector-specific and individual firm activities. Our findings suggest that managerial expectations carry unique insights about economic activities, with implications for both macroeconomic and microeconomic decision-making.<|reference_end|>
arxiv
@article{jha2024harnessing, title={Harnessing Generative AI for Economic Insights}, author={Manish Jha, Jialin Qian, Michael Weber, and Baozhong Yang}, journal={arXiv preprint arXiv:2410.03897}, year={2024}, archivePrefix={arXiv}, eprint={2410.03897}, primaryClass={q-fin.CP cs.LG econ.GN q-fin.EC} }
jha2024harnessing
arxiv-665907
2410.03900
The Wallpaper is Ugly: Indoor Localization using Vision and Language
<|reference_start|>The Wallpaper is Ugly: Indoor Localization using Vision and Language: We study the task of locating a user in a mapped indoor environment using natural language queries and images from the environment. Building on recent pretrained vision-language models, we learn a similarity score between text descriptions and images of locations in the environment. This score allows us to identify locations that best match the language query, estimating the user's location. Our approach is capable of localizing on environments, text, and images that were not seen during training. One model, finetuned CLIP, outperformed humans in our evaluation.<|reference_end|>
arxiv
@article{pate2024the, title={The Wallpaper is Ugly: Indoor Localization using Vision and Language}, author={Seth Pate and Lawson L.S. Wong}, journal={arXiv preprint arXiv:2410.03900}, year={2024}, archivePrefix={arXiv}, eprint={2410.03900}, primaryClass={cs.CV} }
pate2024the
arxiv-665908
2410.03901
Improving Node Representation by Boosting Target-Aware Contrastive Loss
<|reference_start|>Improving Node Representation by Boosting Target-Aware Contrastive Loss: Graphs model complex relationships between entities, with nodes and edges capturing intricate connections. Node representation learning involves transforming nodes into low-dimensional embeddings. These embeddings are typically used as features for downstream tasks. Therefore, their quality has a significant impact on task performance. Existing approaches for node representation learning span (semi-)supervised, unsupervised, and self-supervised paradigms. In graph domains, (semi-)supervised learning often only optimizes models based on class labels, neglecting other abundant graph signals, which limits generalization. While self-supervised or unsupervised learning produces representations that better capture underlying graph signals, the usefulness of these captured signals for downstream target tasks can vary. To bridge this gap, we introduce Target-Aware Contrastive Learning (Target-aware CL) which aims to enhance target task performance by maximizing the mutual information between the target task and node representations with a self-supervised learning process. This is achieved through a sampling function, XGBoost Sampler (XGSampler), to sample proper positive examples for the proposed Target-Aware Contrastive Loss (XTCL). By minimizing XTCL, Target-aware CL increases the mutual information between the target task and node representations, such that model generalization is improved. Additionally, XGSampler enhances the interpretability of each signal by showing the weights for sampling the proper positive examples. We show experimentally that XTCL significantly improves the performance on two target tasks: node classification and link prediction tasks, compared to state-of-the-art models.<|reference_end|>
arxiv
@article{lin2024improving, title={Improving Node Representation by Boosting Target-Aware Contrastive Loss}, author={Ying-Chun Lin, Jennifer Neville}, journal={arXiv preprint arXiv:2410.03901}, year={2024}, archivePrefix={arXiv}, eprint={2410.03901}, primaryClass={cs.LG cs.AI} }
lin2024improving
arxiv-665909
2410.03904
Did You Hear That? Introducing AADG: A Framework for Generating Benchmark Data in Audio Anomaly Detection
<|reference_start|>Did You Hear That? Introducing AADG: A Framework for Generating Benchmark Data in Audio Anomaly Detection: We introduce a novel, general-purpose audio generation framework specifically designed for anomaly detection and localization. Unlike existing datasets that predominantly focus on industrial and machine-related sounds, our framework focuses on a broader range of environments, particularly useful in real-world scenarios where only audio data are available, such as in video-derived or telephonic audio. To generate such data, we propose a new method inspired by the LLM-Modulo framework, which leverages large language models (LLMs) as world models to simulate such real-world scenarios. This tool is modular, allowing a plug-and-play approach. It operates by first using LLMs to predict plausible real-world scenarios. An LLM further extracts the constituent sounds, the order and the way in which these should be merged to create coherent wholes. Much like the LLM-Modulo framework, we include rigorous verification of each output stage, ensuring the reliability of the generated data. The data produced using the framework serves as a benchmark for anomaly detection applications, potentially enhancing the performance of models trained on audio data, particularly in handling out-of-distribution cases. Our contributions thus fill a critical void in audio anomaly detection resources and provide a scalable tool for generating diverse, realistic audio data.<|reference_end|>
arxiv
@article{raghavan2024did, title={Did You Hear That? Introducing AADG: A Framework for Generating Benchmark Data in Audio Anomaly Detection}, author={Ksheeraja Raghavan, Samiran Gode, Ankit Shah, Surabhi Raghavan, Wolfram Burgard, Bhiksha Raj, Rita Singh}, journal={arXiv preprint arXiv:2410.03904}, year={2024}, archivePrefix={arXiv}, eprint={2410.03904}, primaryClass={cs.SD cs.AI eess.AS} }
raghavan2024did
arxiv-665910
2410.03905
PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models
<|reference_start|>PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models: With the rapid advancement of Natural Language Processing in recent years, numerous studies have shown that generic summaries generated by Large Language Models (LLMs) can sometimes surpass those annotated by experts, such as journalists, according to human evaluations. However, there is limited research on whether these generic summaries meet the individual needs of ordinary people. The biggest obstacle is the lack of human-annotated datasets from the general public. Existing work on personalized summarization often relies on pseudo datasets created from generic summarization datasets or controllable tasks that focus on specific named entities or other aspects, such as the length and specificity of generated summaries, collected from hypothetical tasks without the annotators' initiative. To bridge this gap, we propose a high-quality, personalized, manually annotated abstractive summarization dataset called PersonalSum. This dataset is the first to investigate whether the focus of public readers differs from the generic summaries generated by LLMs. It includes user profiles, personalized summaries accompanied by source sentences from given articles, and machine-generated generic summaries along with their sources. We investigate several personal signals - entities/topics, plot, and structure of articles - that may affect the generation of personalized summaries using LLMs in a few-shot in-context learning scenario. Our preliminary results and analysis indicate that entities/topics are merely one of the key factors that impact the diverse preferences of users, and personalized summarization remains a significant challenge for existing LLMs.<|reference_end|>
arxiv
@article{zhang2024personalsum:, title={PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models}, author={Lemei Zhang, Peng Liu, Marcus Tiedemann Oekland Henriksboe, Even W. Lauvrak, Jon Atle Gulla, Heri Ramampiaro}, journal={arXiv preprint arXiv:2410.03905}, year={2024}, archivePrefix={arXiv}, eprint={2410.03905}, primaryClass={cs.CL} }
zhang2024personalsum:
arxiv-665911
2410.03907
ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities
<|reference_start|>ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities: Large language models (LLMs) have been adopted to process textual task descriptions and accomplish procedural planning in embodied AI tasks because of their powerful reasoning ability. However, there is still a lack of study on how vision language models (VLMs) behave when multi-modal task inputs are considered. Counterfactual planning, which evaluates the model's reasoning ability over alternative task situations, is also underexploited. In order to evaluate planning ability on both the multi-modal and counterfactual aspects, we propose ActPlan-1K. ActPlan-1K is a multi-modal planning benchmark constructed based on ChatGPT and the household activity simulator iGibson2. The benchmark consists of 153 activities and 1,187 instances. Each instance describing one activity has a natural language task description and multiple environment images from the simulator. The gold plan of each instance is an action sequence over the objects in the provided scenes. Both correctness and commonsense satisfaction are evaluated on typical VLMs. It turns out that current VLMs are still struggling to generate human-level procedural plans for both normal activities and counterfactual activities. We further provide automatic evaluation metrics by fine-tuning a BLEURT model to facilitate future research on our benchmark.<|reference_end|>
arxiv
@article{su2024actplan-1k:, title={ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities}, author={Ying Su, Zhan Ling, Haochen Shi, Jiayang Cheng, Yauwai Yim, Yangqiu Song}, journal={arXiv preprint arXiv:2410.03907}, year={2024}, archivePrefix={arXiv}, eprint={2410.03907}, primaryClass={cs.CL} }
su2024actplan-1k:
arxiv-665912
2410.03908
Still Not Quite There! Evaluating Large Language Models for Comorbid Mental Health Diagnosis
<|reference_start|>Still Not Quite There! Evaluating Large Language Models for Comorbid Mental Health Diagnosis: In this study, we introduce ANGST, a novel, first-of-its-kind benchmark for depression-anxiety comorbidity classification from social media posts. Unlike contemporary datasets that often oversimplify the intricate interplay between different mental health disorders by treating them as isolated conditions, ANGST enables multi-label classification, allowing each post to be simultaneously identified as indicating depression and/or anxiety. Comprising 2876 meticulously annotated posts by expert psychologists and an additional 7667 silver-labeled posts, ANGST posits a more representative sample of online mental health discourse. Moreover, we benchmark ANGST using various state-of-the-art language models, ranging from Mental-BERT to GPT-4. Our results provide significant insights into the capabilities and limitations of these models in complex diagnostic scenarios. While GPT-4 generally outperforms other models, none achieve an F1 score exceeding 72% in multi-class comorbid classification, underscoring the ongoing challenges in applying language models to mental health diagnostics.<|reference_end|>
arxiv
@article{hengle2024still, title={Still Not Quite There! Evaluating Large Language Models for Comorbid Mental Health Diagnosis}, author={Amey Hengle, Atharva Kulkarni, Shantanu Patankar, Madhumitha Chandrasekaran, Sneha D'Silva, Jemima Jacob, Rashmi Gupta}, journal={arXiv preprint arXiv:2410.03908}, year={2024}, archivePrefix={arXiv}, eprint={2410.03908}, primaryClass={cs.CL cs.AI} }
hengle2024still
arxiv-665913
2410.03909
Improving Efficiency of Sampling-based Motion Planning via Message-Passing Monte Carlo
<|reference_start|>Improving Efficiency of Sampling-based Motion Planning via Message-Passing Monte Carlo: Sampling-based motion planning methods, while effective in high-dimensional spaces, often suffer from inefficiencies due to irregular sampling distributions, leading to suboptimal exploration of the configuration space. In this paper, we propose an approach that enhances the efficiency of these methods by utilizing low-discrepancy distributions generated through Message-Passing Monte Carlo (MPMC). MPMC leverages Graph Neural Networks (GNNs) to generate point sets that uniformly cover the space, with uniformity assessed using the $\mathcal{L}_p$-discrepancy measure, which quantifies the irregularity of sample distributions. By improving the uniformity of the point sets, our approach significantly reduces computational overhead and the number of samples required for solving motion planning problems. Experimental results demonstrate that our method outperforms traditional sampling techniques in terms of planning efficiency.<|reference_end|>
arxiv
@article{chahine2024improving, title={Improving Efficiency of Sampling-based Motion Planning via Message-Passing Monte Carlo}, author={Makram Chahine, T. Konstantin Rusch, Zach J. Patterson, Daniela Rus}, journal={arXiv preprint arXiv:2410.03909}, year={2024}, archivePrefix={arXiv}, eprint={2410.03909}, primaryClass={cs.RO} }
chahine2024improving
arxiv-665914
2410.03913
Leveraging Fundamental Analysis for Stock Trend Prediction for Profit
<|reference_start|>Leveraging Fundamental Analysis for Stock Trend Prediction for Profit: This paper investigates the application of machine learning models, Long Short-Term Memory (LSTM), one-dimensional Convolutional Neural Networks (1D CNN), and Logistic Regression (LR), for predicting stock trends based on fundamental analysis. Unlike most existing studies that predominantly utilize technical or sentiment analysis, we emphasize the use of a company's financial statements and intrinsic value for trend forecasting. Using a dataset of 269 data points from publicly traded companies across various sectors from 2019 to 2023, we employ key financial ratios and the Discounted Cash Flow (DCF) model to formulate two prediction tasks: Annual Stock Price Difference (ASPD) and Difference between Current Stock Price and Intrinsic Value (DCSPIV). These tasks assess the likelihood of annual profit and current profitability, respectively. Our results demonstrate that LR models outperform CNN and LSTM models, achieving an average test accuracy of 74.66% for ASPD and 72.85% for DCSPIV. This study contributes to the limited literature on integrating fundamental analysis into machine learning for stock prediction, offering valuable insights for both academic research and practical investment strategies. By leveraging fundamental data, our approach highlights the potential for long-term stock trend prediction, supporting portfolio managers in their decision-making processes.<|reference_end|>
arxiv
@article{phan2024leveraging, title={Leveraging Fundamental Analysis for Stock Trend Prediction for Profit}, author={John Phan and Hung-Fu Chang}, journal={arXiv preprint arXiv:2410.03913}, year={2024}, archivePrefix={arXiv}, eprint={2410.03913}, primaryClass={q-fin.ST cs.AI cs.LG} }
phan2024leveraging
arxiv-665915
2410.03915
Distribution Guided Active Feature Acquisition
<|reference_start|>Distribution Guided Active Feature Acquisition: Human agents routinely reason on instances with incomplete and muddied data (and weigh the cost of obtaining further features). In contrast, much of ML is devoted to the unrealistic, sterile environment where all features are observed and further information on an instance is obviated. Here we extend past static ML and develop an active feature acquisition (AFA) framework that interacts with the environment to obtain new information on-the-fly and can: 1) make inferences on an instance in the face of incomplete features, 2) determine a plan for feature acquisitions to obtain additional information on the instance at hand. We build our AFA framework on a backbone of understanding the information and conditional dependencies that are present in the data. First, we show how to build generative models that can capture dependencies over arbitrary subsets of features and employ these models for acquisitions in a greedy scheme. After, we show that it is possible to guide the training of RL agents for AFA via side-information and auxiliary rewards stemming from our generative models. We also examine two important factors for deploying AFA models in real-world scenarios, namely interpretability and robustness. Extensive experiments demonstrate the state-of-the-art performance of our AFA framework.<|reference_end|>
arxiv
@article{li2024distribution, title={Distribution Guided Active Feature Acquisition}, author={Yang Li, Junier Oliva}, journal={arXiv preprint arXiv:2410.03915}, year={2024}, archivePrefix={arXiv}, eprint={2410.03915}, primaryClass={cs.LG} }
li2024distribution
arxiv-665916
2410.03917
Multi-Objective Risk Assessment Framework for Exploration Planning Using Terrain and Traversability Analysis
<|reference_start|>Multi-Objective Risk Assessment Framework for Exploration Planning Using Terrain and Traversability Analysis: Exploration of unknown, unstructured environments, such as in search and rescue, cave exploration, and planetary missions, presents significant challenges due to their unpredictable nature. This unpredictability can lead to inefficient path planning and potential mission failures. We propose a multi-objective risk assessment method for exploration planning in such unconstrained environments. Our approach dynamically adjusts the weight of various risk factors to prevent the robot from undertaking lethal actions too early in the mission. By gradually increasing the allowable risk as the mission progresses, our method enables more efficient exploration. We evaluate risk based on environmental terrain properties, including elevation, slope, roughness, and traversability, and account for factors like battery life, mission duration, and travel distance. Our method is validated through experiments in various subterranean simulated cave environments. The results demonstrate that our approach ensures consistent exploration without incurring lethal actions, while introducing minimal computational overhead to the planning process.<|reference_end|>
arxiv
@article{souleiman2024multi-objective, title={Multi-Objective Risk Assessment Framework for Exploration Planning Using Terrain and Traversability Analysis}, author={Riana Gagnon Souleiman, Vivek Shankar Varadharajan, Giovanni Beltrame}, journal={arXiv preprint arXiv:2410.03917}, year={2024}, archivePrefix={arXiv}, eprint={2410.03917}, primaryClass={cs.RO} }
souleiman2024multi-objective
arxiv-665917
2410.03918
STONE: A Submodular Optimization Framework for Active 3D Object Detection
<|reference_start|>STONE: A Submodular Optimization Framework for Active 3D Object Detection: 3D object detection is fundamentally important for various emerging applications, including autonomous driving and robotics. A key requirement for training an accurate 3D object detector is the availability of a large amount of LiDAR-based point cloud data. Unfortunately, labeling point cloud data is extremely challenging, as accurate 3D bounding boxes and semantic labels are required for each potential object. This paper proposes a unified active 3D object detection framework, for greatly reducing the labeling cost of training 3D object detector. Our framework is based on a novel formulation of submodular optimization, specifically tailored to the problem of active 3D object detection. In particular, we address two fundamental challenges associated with active 3D object detection: data imbalance and the need to cover the distribution of the data, including LiDAR-based point cloud data of varying difficulty levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance with high computational efficiency compared to existing active learning methods.<|reference_end|>
arxiv
@article{mao2024stone:, title={STONE: A Submodular Optimization Framework for Active 3D Object Detection}, author={Ruiyu Mao, Sarthak Kumar Maharana, Rishabh K Iyer, Yunhui Guo}, journal={arXiv preprint arXiv:2410.03918}, year={2024}, archivePrefix={arXiv}, eprint={2410.03918}, primaryClass={cs.CV} }
mao2024stone:
arxiv-665918
2410.03919
Online Posterior Sampling with a Diffusion Prior
<|reference_start|>Online Posterior Sampling with a Diffusion Prior: Posterior sampling in contextual bandits with a Gaussian prior can be implemented exactly or approximately using the Laplace approximation. The Gaussian prior is computationally efficient but it cannot describe complex distributions. In this work, we propose approximate posterior sampling algorithms for contextual bandits with a diffusion model prior. The key idea is to sample from a chain of approximate conditional posteriors, one for each stage of the reverse process, which are estimated in a closed form using the Laplace approximation. Our approximations are motivated by posterior sampling with a Gaussian prior, and inherit its simplicity and efficiency. They are asymptotically consistent and perform well empirically on a variety of contextual bandit problems.<|reference_end|>
arxiv
@article{kveton2024online, title={Online Posterior Sampling with a Diffusion Prior}, author={Branislav Kveton, Boris Oreshkin, Youngsuk Park, Aniket Deshmukh, and Rui Song}, journal={arXiv preprint arXiv:2410.03919}, year={2024}, archivePrefix={arXiv}, eprint={2410.03919}, primaryClass={cs.LG stat.ML} }
kveton2024online
arxiv-665919
2410.03920
Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction
<|reference_start|>Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction: Differentiable simulation has become a powerful tool for system identification. While prior work has focused on identifying robot properties using robot-specific data or object properties using object-specific data, our approach calibrates object properties by using information from the robot, without relying on data from the object itself. Specifically, we utilize robot joint encoder information, which is commonly available in standard robotic systems. Our key observation is that by analyzing the robot's reactions to manipulated objects, we can infer properties of those objects, such as inertia and softness. Leveraging this insight, we develop differentiable simulations of robot-object interactions to inversely identify the properties of the manipulated objects. Our approach relies solely on proprioception -- the robot's internal sensing capabilities -- and does not require external measurement tools or vision-based tracking systems. This general method is applicable to any articulated robot and requires only joint position information. We demonstrate the effectiveness of our method on a low-cost robotic platform, achieving accurate mass and elastic modulus estimations of manipulated objects with just a few seconds of computation on a laptop.<|reference_end|>
arxiv
@article{chen2024learning, title={Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction}, author={Peter Yichen Chen, Chao Liu, Pingchuan Ma, John Eastman, Daniela Rus, Dylan Randle, Yuri Ivanov, Wojciech Matusik}, journal={arXiv preprint arXiv:2410.03920}, year={2024}, archivePrefix={arXiv}, eprint={2410.03920}, primaryClass={cs.RO cs.AI cs.CE cs.CV physics.comp-ph} }
chen2024learning
arxiv-665920
2410.03923
Question-Answering System for Bangla: Fine-tuning BERT-Bangla for a Closed Domain
<|reference_start|>Question-Answering System for Bangla: Fine-tuning BERT-Bangla for a Closed Domain: Question-answering systems for Bengali have seen limited development, particularly in domain-specific applications. Leveraging advancements in natural language processing, this paper explores a fine-tuned BERT-Bangla model to address this gap. It presents the development of a question-answering system for Bengali using a fine-tuned BERT-Bangla model in a closed domain. The dataset was sourced from Khulna University of Engineering \& Technology's (KUET) website and other relevant texts. The system was trained and evaluated with 2500 question-answer pairs generated from curated data. Key metrics, including the Exact Match (EM) score and F1 score, were used for evaluation, achieving scores of 55.26\% and 74.21\%, respectively. The results demonstrate promising potential for domain-specific Bengali question-answering systems. Further refinements are needed to improve performance for more complex queries.<|reference_end|>
arxiv
@article{roy2024question-answering, title={Question-Answering System for Bangla: Fine-tuning BERT-Bangla for a Closed Domain}, author={Subal Chandra Roy, Md Motaleb Hossen Manik}, journal={arXiv preprint arXiv:2410.03923}, year={2024}, archivePrefix={arXiv}, eprint={2410.03923}, primaryClass={cs.CL} }
roy2024question-answering
arxiv-665921
2410.03924
Online Control-Informed Learning
<|reference_start|>Online Control-Informed Learning: This paper proposes an Online Control-Informed Learning (OCIL) framework, which synthesizes the well-established control theories to solve a broad class of learning and control tasks in real time. This novel integration effectively handles practical issues in machine learning such as noisy measurement data, online learning, and data efficiency. By considering any robot as a tunable optimal control system, we propose an online parameter estimator based on extended Kalman filter (EKF) to incrementally tune the system in real time, enabling it to complete designated learning or control tasks. The proposed method also improves robustness in learning by effectively managing noise in the data. Theoretical analysis is provided to demonstrate the convergence and regret of OCIL. Three learning modes of OCIL, i.e. Online Imitation Learning, Online System Identification, and Policy Tuning On-the-fly, are investigated via experiments, which validate their effectiveness.<|reference_end|>
arxiv
@article{liang2024online, title={Online Control-Informed Learning}, author={Zihao Liang, Tianyu Zhou, Zehui Lu, Shaoshuai Mou}, journal={arXiv preprint arXiv:2410.03924}, year={2024}, archivePrefix={arXiv}, eprint={2410.03924}, primaryClass={math.OC cs.LG cs.RO cs.SY eess.SY} }
liang2024online
arxiv-665922
2410.03925
C3PA: An Open Dataset of Expert-Annotated and Regulation-Aware Privacy Policies to Enable Scalable Regulatory Compliance Audits
<|reference_start|>C3PA: An Open Dataset of Expert-Annotated and Regulation-Aware Privacy Policies to Enable Scalable Regulatory Compliance Audits: The development of tools and techniques to analyze and extract organizations' data habits from privacy policies is critical for scalable regulatory compliance audits. Unfortunately, these tools are becoming increasingly limited in their ability to identify compliance issues and fixes. After all, most were developed using regulation-agnostic datasets of annotated privacy policies obtained from a time before the introduction of landmark privacy regulations such as the EU's GDPR and California's CCPA. In this paper, we describe the first open regulation-aware dataset of expert-annotated privacy policies, C3PA (CCPA Privacy Policy Provision Annotations), aimed to address this challenge. C3PA contains over 48K expert-labeled privacy policy text segments associated with responses to CCPA-specific disclosure mandates from 411 unique organizations. We demonstrate that the C3PA dataset is uniquely suited for aiding automated audits of compliance with CCPA-related disclosure mandates.<|reference_end|>
arxiv
@article{musa2024c3pa:, title={C3PA: An Open Dataset of Expert-Annotated and Regulation-Aware Privacy Policies to Enable Scalable Regulatory Compliance Audits}, author={Maaz Bin Musa and Steven M. Winston and Garrison Allen and Jacob Schiller and Kevin Moore and Sean Quick and Johnathan Melvin and Padmini Srinivasan and Mihailis E. Diamantis and Rishab Nithyanand}, journal={arXiv preprint arXiv:2410.03925}, year={2024}, archivePrefix={arXiv}, eprint={2410.03925}, primaryClass={cs.CL cs.IR} }
musa2024c3pa:
arxiv-665923
2410.03927
End-to-End Reaction Field Energy Modeling via Deep Learning based Voxel-to-voxel Transform
<|reference_start|>End-to-End Reaction Field Energy Modeling via Deep Learning based Voxel-to-voxel Transform: In computational biochemistry and biophysics, understanding the role of electrostatic interactions is crucial for elucidating the structure, dynamics, and function of biomolecules. The Poisson-Boltzmann (PB) equation is a foundational tool for modeling these interactions by describing the electrostatic potential in and around charged molecules. However, solving the PB equation presents significant computational challenges due to the complexity of biomolecular surfaces and the need to account for mobile ions. While traditional numerical methods for solving the PB equation are accurate, they are computationally expensive and scale poorly with increasing system size. To address these challenges, we introduce PBNeF, a novel machine learning approach inspired by recent advancements in neural network-based partial differential equation solvers. Our method formulates the input and boundary electrostatic conditions of the PB equation into a learnable voxel representation, enabling the use of a neural field transformer to predict the PB solution and, subsequently, the reaction field potential energy. Extensive experiments demonstrate that PBNeF achieves over a 100-fold speedup compared to traditional PB solvers, while maintaining accuracy comparable to the Generalized Born (GB) model.<|reference_end|>
arxiv
@article{wu2024end-to-end, title={End-to-End Reaction Field Energy Modeling via Deep Learning based Voxel-to-voxel Transform}, author={Yongxian Wu, Qiang Zhu, Ray Luo}, journal={arXiv preprint arXiv:2410.03927}, year={2024}, archivePrefix={arXiv}, eprint={2410.03927}, primaryClass={q-bio.BM cs.LG q-bio.QM} }
wu2024end-to-end
arxiv-665924
2410.03929
Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home
<|reference_start|>Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home: Embedded information displays (EIDs) are becoming increasingly ubiquitous on home appliances and devices such as microwaves, coffee machines, fridges, or digital thermostats. These displays are often multi-purpose, functioning as interfaces for selecting device settings, communicating operating status using simple visualizations, and displaying notifications. However, their usability for people in the late adulthood (PLA) development stage is not well-understood. We report on two focus groups with PLA (n = 11, ages 76-94) from a local retirement community. Participants were shown images of everyday home electronics and appliances, answering questions about their experiences using the EIDs. Using open coding, we qualitatively analyzed their comments to distill key themes regarding how EIDs can negatively affect PLA's ability to take in information (e.g., poor labels) and interact with these devices (e.g., unintuitive steps) alongside strategies employed to work around these issues. We argue that understanding the equitable design and communication of devices' functions, operating status, and messages is important for future information display designers. We hope this work stimulates further investigation into more equitable EID design.<|reference_end|>
arxiv
@article{while2024toward, title={Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home}, author={Zack While, Henry Wheeler-Klainberg, Tanja Blascheck, Petra Isenberg, and Ali Sarvghad}, journal={arXiv preprint arXiv:2410.03929}, year={2024}, archivePrefix={arXiv}, eprint={2410.03929}, primaryClass={cs.HC} }
while2024toward
arxiv-665925
2410.03930
Reverb: Open-Source ASR and Diarization from Rev
<|reference_start|>Reverb: Open-Source ASR and Diarization from Rev: Today, we are open-sourcing our core speech recognition and diarization models for non-commercial use. We are releasing both a full production pipeline for developers as well as pared-down research models for experimentation. Rev hopes that these releases will spur research and innovation in the fast-moving domain of voice technology. The speech recognition models released today outperform all existing open source speech recognition models across a variety of long-form speech recognition domains.<|reference_end|>
arxiv
@article{bhandari2024reverb:, title={Reverb: Open-Source ASR and Diarization from Rev}, author={Nishchal Bhandari, Danny Chen, Miguel \'Angel del R\'io Fern\'andez, Natalie Delworth, Jennifer Drexler Fox, Mig\"uel Jett\'e, Quinten McNamara, Corey Miller, Ond\v{r}ej Novotn\'y, J\'an Profant, Nan Qin, Martin Ratajczak, and Jean-Philippe Robichaud}, journal={arXiv preprint arXiv:2410.03930}, year={2024}, archivePrefix={arXiv}, eprint={2410.03930}, primaryClass={cs.CL cs.SD eess.AS} }
bhandari2024reverb:
arxiv-665926
2410.03935
GAS-Norm: Score-Driven Adaptive Normalization for Non-Stationary Time Series Forecasting in Deep Learning
<|reference_start|>GAS-Norm: Score-Driven Adaptive Normalization for Non-Stationary Time Series Forecasting in Deep Learning: Despite their popularity, deep neural networks (DNNs) applied to time series forecasting often fail to beat simpler statistical models. One of the main causes of this suboptimal performance is the data non-stationarity present in many processes. In particular, changes in the mean and variance of the input data can disrupt the predictive capability of a DNN. In this paper, we first show how DNN forecasting models fail in simple non-stationary settings. We then introduce GAS-Norm, a novel methodology for adaptive time series normalization and forecasting based on the combination of a Generalized Autoregressive Score (GAS) model and a Deep Neural Network. The GAS approach encompasses a score-driven family of models that estimate the mean and variance at each new observation, providing updated statistics to normalize the input data of the deep model. The output of the DNN is eventually denormalized using the statistics forecasted by the GAS model, resulting in a hybrid approach that leverages the strengths of both statistical modeling and deep learning. The adaptive normalization improves the performance of the model in non-stationary settings. The proposed approach is model-agnostic and can be applied to any DNN forecasting model. To empirically validate our proposal, we first compare GAS-Norm with other state-of-the-art normalization methods. We then combine it with state-of-the-art DNN forecasting models and test them on real-world datasets from the Monash open-access forecasting repository. Results show that deep forecasting models improve their performance in 21 out of 25 settings when combined with GAS-Norm compared to other normalization methods.<|reference_end|>
arxiv
@article{urettini2024gas-norm:, title={GAS-Norm: Score-Driven Adaptive Normalization for Non-Stationary Time Series Forecasting in Deep Learning}, author={Edoardo Urettini, Daniele Atzeni, Reshawn J. Ramjattan, Antonio Carta}, journal={Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM '24), October 21--25, 2024, Boise, ID, USA}, year={2024}, doi={10.1145/3627673.3679822}, archivePrefix={arXiv}, eprint={2410.03935}, primaryClass={cs.LG stat.ML} }
urettini2024gas-norm:
arxiv-665927
2410.03936
Learning Truncated Causal History Model for Video Restoration
<|reference_start|>Learning Truncated Causal History Model for Video Restoration: One key challenge to video restoration is to model the transition dynamics of video frames governed by motion. In this work, we propose TURTLE to learn the truncated causal history model for efficient and high-performing video restoration. Unlike traditional methods that process a range of contextual frames in parallel, TURTLE enhances efficiency by storing and summarizing a truncated history of the input frame latent representation into an evolving historical state. This is achieved through a sophisticated similarity-based retrieval mechanism that implicitly accounts for inter-frame motion and alignment. The causal design in TURTLE enables recurrence in inference through state-memorized historical features while allowing parallel training by sampling truncated video clips. We report new state-of-the-art results on a multitude of video restoration benchmark tasks, including video desnowing, nighttime video deraining, video raindrops and rain streak removal, video super-resolution, real-world and synthetic video deblurring, and blind video denoising while reducing the computational cost compared to existing best contextual methods on all these tasks.<|reference_end|>
arxiv
@article{ghasemabadi2024learning, title={Learning Truncated Causal History Model for Video Restoration}, author={Amirhosein Ghasemabadi, Muhammad Kamran Janjua, Mohammad Salameh, Di Niu}, journal={arXiv preprint arXiv:2410.03936}, year={2024}, archivePrefix={arXiv}, eprint={2410.03936}, primaryClass={cs.CV cs.AI cs.LG} }
ghasemabadi2024learning
arxiv-665928
2410.03937
Clustering Alzheimer's Disease Subtypes via Similarity Learning and Graph Diffusion
<|reference_start|>Clustering Alzheimer's Disease Subtypes via Similarity Learning and Graph Diffusion: Alzheimer's disease (AD) is a complex neurodegenerative disorder that affects millions of people worldwide. Due to the heterogeneous nature of AD, its diagnosis and treatment pose critical challenges. Consequently, there is a growing research interest in identifying homogeneous AD subtypes that can assist in addressing these challenges in recent years. In this study, we aim to identify subtypes of AD that represent distinctive clinical features and underlying pathology by utilizing unsupervised clustering with graph diffusion and similarity learning. We adopted SIMLR, a multi-kernel similarity learning framework, and graph diffusion to perform clustering on a group of 829 patients with AD and mild cognitive impairment (MCI, a prodromal stage of AD) based on their cortical thickness measurements extracted from magnetic resonance imaging (MRI) scans. Although the clustering approach we utilized has not been explored for the task of AD subtyping before, it demonstrated significantly better performance than several commonly used clustering methods. Specifically, we showed the power of graph diffusion in reducing the effects of noise in the subtype detection. Our results revealed five subtypes that differed remarkably in their biomarkers, cognitive status, and some other clinical features. To evaluate the resultant subtypes further, a genetic association study was carried out and successfully identified potential genetic underpinnings of different AD subtypes. Our source code is available at: https://github.com/PennShenLab/AD-SIMLR.<|reference_end|>
arxiv
@article{wei2024clustering, title={Clustering Alzheimer's Disease Subtypes via Similarity Learning and Graph Diffusion}, author={Tianyi Wei, Shu Yang, Davoud Ataee Tarzanagh, Jingxuan Bao, Jia Xu, Patryk Orzechowski, Joost B. Wagenaar, Qi Long, Li Shen}, journal={arXiv preprint arXiv:2410.03937}, year={2024}, archivePrefix={arXiv}, eprint={2410.03937}, primaryClass={cs.LG cs.CV eess.IV stat.ML} }
wei2024clustering
arxiv-665929
2410.03939
A Feasibility Study of a Soft, Low-Cost, 6-Axis Load Cell for Haptics
<|reference_start|>A Feasibility Study of a Soft, Low-Cost, 6-Axis Load Cell for Haptics: Haptic devices have shown to be valuable in supplementing surgical training, especially when providing haptic feedback based on user performance metrics such as wrench applied by the user on the tool. However, current 6-axis force/torque sensors are prohibitively expensive. This paper presents the design and calibration of a low-cost, six-axis force/torque sensor specially designed for laparoscopic haptic training applications. The proposed design uses Hall-effect sensors to measure the change in the position of magnets embedded in a silicone layer that results from an applied wrench to the device. Preliminary experimental validation demonstrates that these sensors can achieve an accuracy of 0.45 N and 0.014 Nm, and a theoretical XY range of +/-50N, Z range of +/-20N, and torque range of +/-0.2Nm. This study indicates that the proposed low-cost 6-axis force/torque sensor can accurately measure user force and provide useful feedback during laparoscopic training on a haptic device.<|reference_end|>
arxiv
@article{veliky2024a, title={A Feasibility Study of a Soft, Low-Cost, 6-Axis Load Cell for Haptics}, author={Madison Veliky, Garrison L.H. Johnston, Ahmet Yildiz, Nabil Simaan}, journal={arXiv preprint arXiv:2410.03939}, year={2024}, archivePrefix={arXiv}, eprint={2410.03939}, primaryClass={cs.RO} }
veliky2024a
arxiv-665930
2410.03941
AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models
<|reference_start|>AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models: Low-rank adaptation (LoRA) is a fine-tuning technique that can be applied to conditional generative diffusion models. LoRA utilizes a small number of context examples to adapt the model to a specific domain, character, style, or concept. However, due to the limited data utilized during training, the fine-tuned model performance is often characterized by strong context bias and a low degree of variability in the generated images. To solve this issue, we introduce AutoLoRA, a novel guidance technique for diffusion models fine-tuned with the LoRA approach. Inspired by other guidance techniques, AutoLoRA searches for a trade-off between consistency in the domain represented by LoRA weights and sample diversity from the base conditional diffusion model. Moreover, we show that incorporating classifier-free guidance for both LoRA fine-tuned and base models leads to generating samples with higher diversity and better quality. The experimental results for several fine-tuned LoRA domains show superiority over existing guidance techniques on selected metrics.<|reference_end|>
arxiv
@article{kasymov2024autolora:, title={AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models}, author={Artur Kasymov, Marcin Sendera, Micha{\l} Stypu{\l}kowski, Maciej Zi\k{e}ba, Przemys{\l}aw Spurek}, journal={arXiv preprint arXiv:2410.03941}, year={2024}, archivePrefix={arXiv}, eprint={2410.03941}, primaryClass={cs.CV} }
kasymov2024autolora:
arxiv-665931
2410.03943
Oscillatory State-Space Models
<|reference_start|>Oscillatory State-Space Models: We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences. Inspired by cortical dynamics of biological neural networks, we base our proposed LinOSS model on a system of forced harmonic oscillators. A stable discretization, integrated over time using fast associative parallel scans, yields the proposed state-space model. We prove that LinOSS produces stable dynamics while requiring only a nonnegative diagonal state matrix. This is in stark contrast to many previous state-space models relying heavily on restrictive parameterizations. Moreover, we rigorously show that LinOSS is universal, i.e., it can approximate any continuous and causal operator mapping between time-varying functions, to desired accuracy. In addition, we show that an implicit-explicit discretization of LinOSS perfectly conserves the symmetry of time reversibility of the underlying dynamics. Together, these properties enable efficient modeling of long-range interactions, while ensuring stable and accurate long-horizon forecasting. Finally, our empirical results, spanning a wide range of time-series tasks from mid-range to very long-range classification and regression, as well as long-horizon forecasting, demonstrate that our proposed LinOSS model consistently outperforms state-of-the-art sequence models. Notably, LinOSS outperforms Mamba by nearly 2x and LRU by 2.5x on a sequence modeling task with sequences of length 50k.<|reference_end|>
arxiv
@article{rusch2024oscillatory, title={Oscillatory State-Space Models}, author={T. Konstantin Rusch, Daniela Rus}, journal={arXiv preprint arXiv:2410.03943}, year={2024}, archivePrefix={arXiv}, eprint={2410.03943}, primaryClass={cs.LG cs.NE} }
rusch2024oscillatory
arxiv-665932
2410.03945
Interpolation-Free Deep Learning for Meteorological Downscaling on Unaligned Grids Across Multiple Domains with Application to Wind Power
<|reference_start|>Interpolation-Free Deep Learning for Meteorological Downscaling on Unaligned Grids Across Multiple Domains with Application to Wind Power: As climate change intensifies, the shift to cleaner energy sources becomes increasingly urgent. With wind energy production set to accelerate, reliable wind probabilistic forecasts are essential to ensure its efficient use. However, since numerical weather prediction models are computationally expensive, probabilistic forecasts are produced at resolutions too coarse to capture all mesoscale wind behaviors. Statistical downscaling, typically applied to enhance the resolution of climate model simulations, presents a viable solution with lower computational costs by learning a mapping from low-resolution (LR) variables to high-resolution (HR) meteorological variables. Leveraging deep learning, we evaluate a downscaling model based on a state-of-the-art U-Net architecture, applied to an ensemble member from a coarse-scale probabilistic forecast of wind velocity. The architecture is modified to incorporate (1) a learned grid alignment strategy to resolve LR-HR grid mismatches and (2) a processing module for multi-level atmospheric predictors. To extend the downscaling model's applicability from fixed spatial domains to the entire Canadian region, we assess a transfer learning approach. Our results show that the learned grid alignment strategy performs as well as conventional pre-processing interpolation steps and that LR wind speed at multiple levels is sufficient as a predictor, enabling a more compact architecture. Additionally, they suggest that extending to new spatial domains using transfer learning is promising, and that downscaled wind velocities demonstrate potential in improving the detection of wind power ramps, a critical phenomenon for wind energy.<|reference_end|>
arxiv
@article{giroux2024interpolation-free, title={Interpolation-Free Deep Learning for Meteorological Downscaling on Unaligned Grids Across Multiple Domains with Application to Wind Power}, author={Jean-S'ebastien Giroux and Simon-Philippe Breton and Julie Carreau}, journal={arXiv preprint arXiv:2410.03945}, year={2024}, archivePrefix={arXiv}, eprint={2410.03945}, primaryClass={cs.LG cs.CV} }
giroux2024interpolation-free
arxiv-665933
2410.03948
Unidirectional Key Update in Updatable Encryption, Revisited
<|reference_start|>Unidirectional Key Update in Updatable Encryption, Revisited: In this paper we construct a new efficient updatable encryption (UE) scheme based on FrodoPKE learning with errors key encapsulation. We analyse the security of the proposed scheme in the backward-leak uni-directional setting within the rand-ind-eu-cpa model. Since the underlying computationally hard problem here is LWE, the scheme is secure against both classical and quantum attacks.<|reference_end|>
arxiv
@article{jurkiewicz2024unidirectional, title={Unidirectional Key Update in Updatable Encryption, Revisited}, author={M. Jurkiewicz, K. Prabucka}, journal={arXiv preprint arXiv:2410.03948}, year={2024}, archivePrefix={arXiv}, eprint={2410.03948}, primaryClass={cs.CR} }
jurkiewicz2024unidirectional
arxiv-665934
2410.03950
Structured List-Grounded Question Answering
<|reference_start|>Structured List-Grounded Question Answering: Document-grounded dialogue systems aim to answer user queries by leveraging external information. Previous studies have mainly focused on handling free-form documents, often overlooking structured data such as lists, which can represent a range of nuanced semantic relations. Motivated by the observation that even advanced language models like GPT-3.5 often miss semantic cues from lists, this paper aims to enhance question answering (QA) systems for better interpretation and use of structured lists. To this end, we introduce the LIST2QA dataset, a novel benchmark to evaluate the ability of QA systems to respond effectively using list information. This dataset is created from unlabeled customer service documents using language models and model-based filtering processes to enhance data quality, and can be used to fine-tune and evaluate QA models. Apart from directly generating responses through fine-tuned models, we further explore the explicit use of Intermediate Steps for Lists (ISL), aligning list items with user backgrounds to better reflect how humans interpret list items before generating responses. Our experimental results demonstrate that models trained on LIST2QA with our ISL approach outperform baselines across various metrics. Specifically, our fine-tuned Flan-T5-XL model shows increases of 3.1% in ROUGE-L, 4.6% in correctness, 4.5% in faithfulness, and 20.6% in completeness compared to models without applying filtering and the proposed ISL method.<|reference_end|>
arxiv
@article{sung2024structured, title={Structured List-Grounded Question Answering}, author={Mujeen Sung, Song Feng, James Gung, Raphael Shu, Yi Zhang, Saab Mansour}, journal={arXiv preprint arXiv:2410.03950}, year={2024}, archivePrefix={arXiv}, eprint={2410.03950}, primaryClass={cs.CL} }
sung2024structured
arxiv-665935
2410.03951
UFLUX v2.0: A Process-Informed Machine Learning Framework for Efficient and Explainable Modelling of Terrestrial Carbon Uptake
<|reference_start|>UFLUX v2.0: A Process-Informed Machine Learning Framework for Efficient and Explainable Modelling of Terrestrial Carbon Uptake: Gross Primary Productivity (GPP), the amount of carbon fixed by plants through photosynthesis, is pivotal for understanding the global carbon cycle and ecosystem functioning. Process-based models built on the knowledge of ecological processes are susceptible to biases stemming from their assumptions and approximations. These limitations potentially result in considerable uncertainties in global GPP estimation, which may pose significant challenges to our Net Zero goals. This study presents UFLUX v2.0, a process-informed model that integrates state-of-the-art ecological knowledge and advanced machine learning techniques to reduce uncertainties in GPP estimation by learning the biases between process-based models and eddy covariance (EC) measurements. In our findings, UFLUX v2.0 demonstrated a substantial improvement in model accuracy, achieving an R^2 of 0.79 with a reduced RMSE of 1.60 g C m^-2 d^-1, compared to the process-based model's R^2 of 0.51 and RMSE of 3.09 g C m^-2 d^-1. Our global GPP distribution analysis indicates that while UFLUX v2.0 and the process-based model achieved similar global total GPP (137.47 Pg C and 132.23 Pg C, respectively), they exhibited large differences in spatial distribution, particularly in latitudinal gradients. These differences are very likely due to systematic biases in the process-based model and differing sensitivities to climate and environmental conditions. This study offers improved adaptability for GPP modelling across diverse ecosystems, and further enhances our understanding of the global carbon cycle and its responses to environmental changes.<|reference_end|>
arxiv
@article{dong2024uflux, title={UFLUX v2.0: A Process-Informed Machine Learning Framework for Efficient and Explainable Modelling of Terrestrial Carbon Uptake}, author={Wenquan Dong, Songyan Zhu, Jian Xu, Casey M. Ryan, Man Chen, Jingya Zeng, Hao Yu, Congfeng Cao, Jiancheng Shi}, journal={arXiv preprint arXiv:2410.03951}, year={2024}, archivePrefix={arXiv}, eprint={2410.03951}, primaryClass={cs.LG physics.ao-ph q-bio.QM} }
dong2024uflux
arxiv-665936
2410.03952
A Brain-Inspired Regularizer for Adversarial Robustness
<|reference_start|>A Brain-Inspired Regularizer for Adversarial Robustness: Convolutional Neural Networks (CNNs) excel in many visual tasks, but they tend to be sensitive to slight input perturbations that are imperceptible to the human eye, often resulting in task failures. Recent studies indicate that training CNNs with regularizers that promote brain-like representations, using neural recordings, can improve model robustness. However, the requirement to use neural data severely restricts the utility of these methods. Is it possible to develop regularizers that mimic the computational function of neural regularizers without the need for neural recordings, thereby expanding the usability and effectiveness of these techniques? In this work, we inspect a neural regularizer introduced in Li et al. (2019) to extract its underlying strength. The regularizer uses neural representational similarities, which we find also correlate with pixel similarities. Motivated by this finding, we introduce a new regularizer that retains the essence of the original but is computed using image pixel similarities, eliminating the need for neural recordings. We show that our regularization method 1) significantly increases model robustness to a range of black box attacks on various datasets and 2) is computationally inexpensive and relies only on original datasets. Our work explores how biologically motivated loss functions can be used to drive the performance of artificial neural networks.<|reference_end|>
arxiv
@article{attias2024a, title={A Brain-Inspired Regularizer for Adversarial Robustness}, author={Elie Attias, Cengiz Pehlevan, Dina Obeid}, journal={arXiv preprint arXiv:2410.03952}, year={2024}, archivePrefix={arXiv}, eprint={2410.03952}, primaryClass={cs.LG cs.AI cs.CV q-bio.NC} }
attias2024a
arxiv-665937
2410.03953
LLM-TOPLA: Efficient LLM Ensemble by Maximising Diversity
<|reference_start|>LLM-TOPLA: Efficient LLM Ensemble by Maximising Diversity: Combining large language models during training or at inference time has shown substantial performance gain over component LLMs. This paper presents LLM-TOPLA, a diversity-optimized LLM ensemble method with three unique properties: (i) We introduce the focal diversity metric to capture the diversity-performance correlation among component LLMs of an ensemble. (ii) We develop a diversity-optimized ensemble pruning algorithm to select the top-k sub-ensembles from a pool of $N$ base LLMs. Our pruning method recommends top-performing LLM sub-ensembles of size $S$, often much smaller than $N$. (iii) We generate new output for each prompt query by utilizing a learn-to-ensemble approach, which learns to detect and resolve the output inconsistency among all component LLMs of an ensemble. Extensive evaluation on four different benchmarks shows good performance gain over the best LLM ensemble methods: (i) In constrained solution set problems, LLM-TOPLA outperforms the best-performing ensemble (Mixtral) by 2.2\% in accuracy on MMLU and the best-performing LLM ensemble (MoreAgent) on GSM8k by 2.1\%. (ii) In generative tasks, LLM-TOPLA outperforms the top-2 performers (Llama70b/Mixtral) on SearchQA by $3.9\mathrm{x}$ in F1, and on XSum by more than $38$ in ROUGE-1. Our code and dataset, which contain the outputs of 8 modern LLMs on 4 benchmarks, are available at https://github.com/git-disl/llm-topla<|reference_end|>
arxiv
@article{tekin2024llm-topla:, title={LLM-TOPLA: Efficient LLM Ensemble by Maximising Diversity}, author={Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Ling Liu}, journal={arXiv preprint arXiv:2410.03953}, year={2024}, archivePrefix={arXiv}, eprint={2410.03953}, primaryClass={cs.CL cs.LG} }
tekin2024llm-topla:
arxiv-665938
2410.03954
SDA-GRIN for Adaptive Spatial-Temporal Multivariate Time Series Imputation
<|reference_start|>SDA-GRIN for Adaptive Spatial-Temporal Multivariate Time Series Imputation: In various applications, multivariate time series often suffer from missing data. This issue can significantly disrupt systems that rely on the data. Spatial and temporal dependencies can be leveraged to impute the missing samples. Existing imputation methods often ignore dynamic changes in spatial dependencies. We propose a Spatial Dynamic Aware Graph Recurrent Imputation Network (SDA-GRIN) which is capable of capturing dynamic changes in spatial dependencies. SDA-GRIN leverages a multi-head attention mechanism to adapt graph structures with time. SDA-GRIN models multivariate time series as a sequence of temporal graphs and uses a recurrent message-passing architecture for imputation. We evaluate SDA-GRIN on four real-world datasets: SDA-GRIN improves MSE by 9.51% for the AQI and 9.40% for AQI-36. On the PEMS-BAY dataset, it achieves a 1.94% improvement in MSE. A detailed ablation study demonstrates the effect of window sizes and missing data on the performance of the method. Project page: https://ameskandari.github.io/sda-grin/<|reference_end|>
arxiv
@article{eskandari2024sda-grin, title={SDA-GRIN for Adaptive Spatial-Temporal Multivariate Time Series Imputation}, author={Amir Eskandari, Aman Anand, Drishti Sharma, Farhana Zulkernine}, journal={arXiv preprint arXiv:2410.03954}, year={2024}, archivePrefix={arXiv}, eprint={2410.03954}, primaryClass={cs.LG cs.AI} }
eskandari2024sda-grin
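The SDA-GRIN entry above mentions a multi-head attention mechanism that adapts the graph structure over time. As a rough sketch of that idea, the snippet below derives a row-stochastic adjacency matrix from node features with scaled dot-product attention (a single head only); all shapes, names, and the single-head simplification are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def attention_adjacency(x, Wq, Wk):
    """Build a soft adjacency matrix from node features via scaled
    dot-product attention (single head shown for brevity)."""
    q, k = x @ Wq, x @ Wk                     # (n_nodes, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (n_nodes, n_nodes)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    a = np.exp(scores)
    return a / a.sum(axis=-1, keepdims=True)  # row-stochastic adjacency

# one adjacency per time step -> a sequence of temporal graphs
rng = np.random.default_rng(1)
n_nodes, f, d, T = 5, 4, 8, 3
Wq, Wk = rng.normal(size=(f, d)), rng.normal(size=(f, d))
adjs = [attention_adjacency(rng.normal(size=(n_nodes, f)), Wq, Wk)
        for _ in range(T)]
print(adjs[0].shape)  # (5, 5)
```

Recomputing the adjacency at every time step is what makes the graph structure time-varying; a recurrent message-passing model would then propagate information over each of these graphs.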
arxiv-665939
2410.03955
Model Developmental Safety: A Safety-Centric Method and Applications in Vision-Language Models
<|reference_start|>Model Developmental Safety: A Safety-Centric Method and Applications in Vision-Language Models: In the real world, a learning-enabled system usually undergoes multiple cycles of model development to enhance the system's ability to handle difficult or emerging tasks. This continual model development process raises a significant issue that the model development for acquiring new or improving existing capabilities may inadvertently lose capabilities of the old model, also known as catastrophic forgetting. Existing continual learning studies focus on mitigating catastrophic forgetting by trading off performance on previous tasks and new tasks to ensure good average performance. However, they are inadequate for many applications especially in safety-critical domains, as failure to strictly preserve the performance of the old model not only introduces safety risks and uncertainties but also imposes substantial expenses in the re-improving and re-validation of existing properties. To address this issue, we introduce model developmental safety as a guarantee of a learning system such that in the model development process the new model should strictly preserve the existing protected capabilities of the old model while improving its performance on target tasks. To ensure the model developmental safety, we present a safety-centric framework by formulating the model developmental safety as data-dependent constraints. Under this framework, we study how to develop a pretrained vision-language model (aka the CLIP model) for acquiring new capabilities or improving existing capabilities of image classification. We propose an efficient constrained optimization algorithm with theoretical guarantee and use its insights to finetune a CLIP model with task-dependent heads for promoting the model developmental safety. Our experiments on improving vision perception capabilities on autonomous driving and scene recognition datasets demonstrate the efficacy of the proposed approach.<|reference_end|>
arxiv
@article{li2024model, title={Model Developmental Safety: A Safety-Centric Method and Applications in Vision-Language Models}, author={Gang Li, Wendi Yu, Yao Yao, Wei Tong, Yingbin Liang, Qihang Lin, Tianbao Yang}, journal={arXiv preprint arXiv:2410.03955}, year={2024}, archivePrefix={arXiv}, eprint={2410.03955}, primaryClass={cs.LG cs.AI math.OC stat.ML} }
li2024model
arxiv-665940
2410.03959
Grounding Language in Multi-Perspective Referential Communication
<|reference_start|>Grounding Language in Multi-Perspective Referential Communication: We introduce a task and dataset for referring expression generation and comprehension in multi-agent embodied environments. In this task, two agents in a shared scene must take into account one another's visual perspective, which may be different from their own, to both produce and understand references to objects in a scene and the spatial relations between them. We collect a dataset of 2,970 human-written referring expressions, each paired with human comprehension judgments, and evaluate the performance of automated models as speakers and listeners paired with human partners, finding that model performance in both reference generation and comprehension lags behind that of pairs of human agents. Finally, we experiment with training an open-weight speaker model using evidence of communicative success when paired with a listener, resulting in an improvement from 58.9% to 69.3% in communicative success, even outperforming the strongest proprietary model.<|reference_end|>
arxiv
@article{tang2024grounding, title={Grounding Language in Multi-Perspective Referential Communication}, author={Zineng Tang, Lingjun Mao, Alane Suhr}, journal={arXiv preprint arXiv:2410.03959}, year={2024}, archivePrefix={arXiv}, eprint={2410.03959}, primaryClass={cs.CL cs.AI cs.CV cs.GR} }
tang2024grounding
arxiv-665941
2410.03960
SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation
<|reference_start|>SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation: LLM inference for popular enterprise use cases, such as summarization, RAG, and code-generation, typically observes orders of magnitude longer prompt lengths than generation lengths. This characteristic leads to high cost of prefill and increased response latency. In this paper, we present SwiftKV, a novel model transformation and distillation procedure specifically designed to reduce the time and cost of processing prompt tokens while preserving high quality of generated tokens. SwiftKV combines three key mechanisms: i) SingleInputKV, which prefills later layers' KV cache using a much earlier layer's output, allowing prompt tokens to skip much of the model computation, ii) AcrossKV, which merges the KV caches of neighboring layers to reduce the memory footprint and support larger batch size for higher throughput, and iii) a knowledge-preserving distillation procedure that can adapt existing LLMs for SwiftKV with minimal accuracy impact and low compute and data requirement. For Llama-3.1-8B and 70B, SwiftKV reduces the compute requirement of prefill by 50% and the memory requirement of the KV cache by 62.5% while incurring minimum quality degradation across a wide range of tasks. In the end-to-end inference serving using an optimized vLLM implementation, SwiftKV realizes up to 2x higher aggregate throughput and 60% lower time per output token. It can achieve a staggering 560 TFlops/GPU of normalized inference throughput, which translates to 16K tokens/s for Llama-3.1-70B in 16-bit precision on 4x H100 GPUs.<|reference_end|>
arxiv
@article{qiao2024swiftkv:, title={SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation}, author={Aurick Qiao, Zhewei Yao, Samyam Rajbhandari, Yuxiong He}, journal={arXiv preprint arXiv:2410.03960}, year={2024}, archivePrefix={arXiv}, eprint={2410.03960}, primaryClass={cs.LG cs.AI cs.CL} }
qiao2024swiftkv:
arxiv-665942
2410.03962
SpecSAR-Former: A Lightweight Transformer-based Network for Global LULC Mapping Using Integrated Sentinel-1 and Sentinel-2
<|reference_start|>SpecSAR-Former: A Lightweight Transformer-based Network for Global LULC Mapping Using Integrated Sentinel-1 and Sentinel-2: Recent approaches in remote sensing have increasingly focused on multimodal data, driven by the growing availability of diverse earth observation datasets. Integrating complementary information from different modalities has shown substantial potential in enhancing semantic understanding. However, existing global multimodal datasets often lack the inclusion of Synthetic Aperture Radar (SAR) data, which excels at capturing texture and structural details. SAR, as a complementary perspective to other modalities, facilitates the utilization of spatial information for global land use and land cover (LULC). To address this gap, we introduce the Dynamic World+ dataset, expanding the current authoritative multispectral dataset, Dynamic World, with aligned SAR data. Additionally, to facilitate the combination of multispectral and SAR data, we propose a lightweight transformer architecture termed SpecSAR-Former. It incorporates two innovative modules, Dual Modal Enhancement Module (DMEM) and Mutual Modal Aggregation Module (MMAM), designed to exploit cross-information between the two modalities in a split-fusion manner. These modules enhance the model's ability to integrate spectral and spatial information, thereby improving the overall performance of global LULC semantic segmentation. Furthermore, we adopt an imbalanced parameter allocation strategy that assigns parameters to different modalities based on their importance and information density. Extensive experiments demonstrate that our network outperforms existing transformer and CNN-based models, achieving a mean Intersection over Union (mIoU) of 59.58%, an Overall Accuracy (OA) of 79.48%, and an F1 Score of 71.68% with only 26.70M parameters. The code will be available at https://github.com/Reagan1311/LULC_segmentation.<|reference_end|>
arxiv
@article{yu2024specsar-former:, title={SpecSAR-Former: A Lightweight Transformer-based Network for Global LULC Mapping Using Integrated Sentinel-1 and Sentinel-2}, author={Hao Yu, Gen Li, Haoyu Liu, Songyan Zhu, Wenquan Dong, Changjian Li}, journal={arXiv preprint arXiv:2410.03962}, year={2024}, archivePrefix={arXiv}, eprint={2410.03962}, primaryClass={eess.IV cs.CV} }
yu2024specsar-former:
arxiv-665943
2410.03964
Variational Language Concepts for Interpreting Foundation Language Models
<|reference_start|>Variational Language Concepts for Interpreting Foundation Language Models: Foundation Language Models (FLMs) such as BERT and its variants have achieved remarkable success in natural language processing. To date, the interpretability of FLMs has primarily relied on the attention weights in their self-attention layers. However, these attention weights only provide word-level interpretations, failing to capture higher-level structures, and are therefore lacking in readability and intuitiveness. To address this challenge, we first provide a formal definition of conceptual interpretation and then propose a variational Bayesian framework, dubbed VAriational Language Concept (VALC), to go beyond word-level interpretations and provide concept-level interpretations. Our theoretical analysis shows that our VALC finds the optimal language concepts to interpret FLM predictions. Empirical results on several real-world datasets show that our method can successfully provide conceptual interpretation for FLMs.<|reference_end|>
arxiv
@article{wang2024variational, title={Variational Language Concepts for Interpreting Foundation Language Models}, author={Hengyi Wang, Shiwei Tan, Zhiqing Hong, Desheng Zhang, Hao Wang}, journal={arXiv preprint arXiv:2410.03964}, year={2024}, archivePrefix={arXiv}, eprint={2410.03964}, primaryClass={cs.LG cs.AI cs.CL stat.ML} }
wang2024variational
arxiv-665944
2410.03968
Decoding Game: On Minimax Optimality of Heuristic Text Generation Strategies
<|reference_start|>Decoding Game: On Minimax Optimality of Heuristic Text Generation Strategies: Decoding strategies play a pivotal role in text generation for modern language models, yet a puzzling gap divides theory and practice. Surprisingly, strategies that should intuitively be optimal, such as Maximum a Posteriori (MAP), often perform poorly in practice. Meanwhile, popular heuristic approaches like Top-$k$ and Nucleus sampling, which employ truncation and normalization of the conditional next-token probabilities, have achieved great empirical success but lack theoretical justifications. In this paper, we propose Decoding Game, a comprehensive theoretical framework which reimagines text generation as a two-player zero-sum game between Strategist, who seeks to produce text credible in the true distribution, and Nature, who distorts the true distribution adversarially. After discussing the decomposability of multi-step generation, we derive the optimal strategy in closed form for one-step Decoding Game. It is shown that the adversarial Nature imposes an implicit regularization on likelihood maximization, and truncation-normalization methods are first-order approximations to the optimal strategy under this regularization. Additionally, by generalizing the objective and parameters of Decoding Game, near-optimal strategies encompass diverse methods such as greedy search, temperature scaling, and hybrids thereof. Numerical experiments are conducted to complement our theoretical analysis.<|reference_end|>
arxiv
@article{chen2024decoding, title={Decoding Game: On Minimax Optimality of Heuristic Text Generation Strategies}, author={Sijin Chen, Omar Hagrass, Jason M. Klusowski}, journal={arXiv preprint arXiv:2410.03968}, year={2024}, archivePrefix={arXiv}, eprint={2410.03968}, primaryClass={cs.LG cs.AI cs.GT math.OC} }
chen2024decoding
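The Decoding Game entry above analyzes truncation-normalization heuristics such as Top-k and Nucleus sampling. For reference, here is a minimal sketch of those two heuristics themselves; this is standard background on the methods the paper studies, not the paper's derived optimal strategy.

```python
import numpy as np

def top_k_sample(probs, k, rng):
    """Keep the k most likely tokens, renormalize, and sample."""
    idx = np.argsort(probs)[::-1][:k]
    p = probs[idx] / probs[idx].sum()
    return rng.choice(idx, p=p)

def nucleus_sample(probs, top_p, rng):
    """Keep the smallest prefix of tokens (by probability) whose total
    mass reaches top_p, renormalize, and sample."""
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    idx = order[:cutoff]
    p = probs[idx] / probs[idx].sum()
    return rng.choice(idx, p=p)

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])  # toy next-token distribution
print(top_k_sample(probs, k=3, rng=rng),
      nucleus_sample(probs, top_p=0.9, rng=rng))
```

Both heuristics truncate the conditional distribution and renormalize the surviving mass, which is exactly the family of strategies the paper relates to the implicit regularization induced by the adversarial Nature player.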
arxiv-665945
2410.03969
Embrace rejection: Kernel matrix approximation by accelerated randomly pivoted Cholesky
<|reference_start|>Embrace rejection: Kernel matrix approximation by accelerated randomly pivoted Cholesky: Randomly pivoted Cholesky (RPCholesky) is an algorithm for constructing a low-rank approximation of a positive-semidefinite matrix using a small number of columns. This paper develops an accelerated version of RPCholesky that employs block matrix computations and rejection sampling to efficiently simulate the execution of the original algorithm. For the task of approximating a kernel matrix, the accelerated algorithm can run over $40\times$ faster. The paper contains implementation details, theoretical guarantees, experiments on benchmark data sets, and an application to computational chemistry.<|reference_end|>
arxiv
@article{epperly2024embrace, title={Embrace rejection: Kernel matrix approximation by accelerated randomly pivoted Cholesky}, author={Ethan N. Epperly, Joel A. Tropp, Robert J. Webber}, journal={arXiv preprint arXiv:2410.03969}, year={2024}, archivePrefix={arXiv}, eprint={2410.03969}, primaryClass={math.NA cs.NA stat.CO stat.ML} }
epperly2024embrace
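The entry above accelerates randomly pivoted Cholesky (RPCholesky). As background, the sketch below implements the basic, non-accelerated RPCholesky iteration on a toy RBF kernel matrix; the blocked computation and rejection sampling that give the paper its speedup are not reproduced, and the kernel and problem sizes are illustrative.

```python
import numpy as np

def rp_cholesky(K, rank, rng):
    """Basic randomly pivoted Cholesky: returns F with K approximately F @ F.T.
    (Sketch of the original algorithm; the accelerated blocked/rejection
    variant from the paper is not reproduced here.)"""
    n = K.shape[0]
    F = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()        # residual diagonal
    for i in range(rank):
        p = d / d.sum()                        # pivot sampled ~ residual diagonal
        s = rng.choice(n, p=p)
        g = K[:, s] - F[:, :i] @ F[s, :i]      # residual column at the pivot
        F[:, i] = g / np.sqrt(g[s])
        d = np.maximum(d - F[:, i] ** 2, 0.0)  # update residual diagonal
    return F

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF kernel
F = rp_cholesky(K, rank=30, rng=rng)
print(np.linalg.norm(K - F @ F.T) / np.linalg.norm(K))  # relative error
```

Each step touches one column of the kernel matrix, which is why a blocked, rejection-sampled variant that processes many candidate pivots at once can be so much faster in practice.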
arxiv-665946
2410.03970
On the Convergence of CROP-Anderson Acceleration Method
<|reference_start|>On the Convergence of CROP-Anderson Acceleration Method: Anderson Acceleration is a well-established method that can speed up or encourage the convergence of fixed-point iterations. It has been successfully used in a variety of applications, in particular within the Self-Consistent Field (SCF) iteration method for quantum chemistry and physics computations. In recent years, the Conjugate Residual with OPtimal trial vectors (CROP) algorithm was introduced and shown to have a better performance than the classical Anderson Acceleration with less storage needed. This paper aims to delve into the intricate connections between the classical Anderson Acceleration method and the CROP algorithm. Our objectives include a comprehensive study of their convergence properties, explaining the underlying relationships, and substantiating our findings through some numerical examples. Through this exploration, we contribute valuable insights that can enhance the understanding and application of acceleration methods in practical computations, as well as the development of new and more efficient acceleration schemes.<|reference_end|>
arxiv
@article{wan2024on, title={On the Convergence of CROP-Anderson Acceleration Method}, author={Ning Wan, Agnieszka Mik{e}dlar}, journal={arXiv preprint arXiv:2410.03970}, year={2024}, archivePrefix={arXiv}, eprint={2410.03970}, primaryClass={math.NA cs.NA} }
wan2024on
arxiv-665947
2410.03971
ROS2-Based Simulation Framework for Cyberphysical Security Analysis of UAVs
<|reference_start|>ROS2-Based Simulation Framework for Cyberphysical Security Analysis of UAVs: We present a new simulator of Uncrewed Aerial Vehicles (UAVs) that is tailored to the needs of testing cyber-physical security attacks and defenses. Recent investigations into UAV safety have unveiled various attack surfaces and some defense mechanisms. However, due to escalating regulations imposed by aviation authorities on security research on real UAVs, and the substantial costs associated with hardware test-bed configurations, there arises a necessity for a simulator capable of substituting for hardware experiments, and/or narrowing down their scope to the strictly necessary. The study of different attack mechanisms requires specific features in a simulator. We propose a simulation framework based on ROS2, leveraging some of its key advantages, including modularity, replicability, customization, and the utilization of open-source tools such as Gazebo. Our framework has a built-in motion planner, controller, communication models and attack models. We share examples of research use cases that our framework can enable, demonstrating its utility.<|reference_end|>
arxiv
@article{patil2024ros2-based, title={ROS2-Based Simulation Framework for Cyberphysical Security Analysis of UAVs}, author={Unmesh Patil, Akshith Gunasekaran, Rakesh Bobba, Houssam Abbas}, journal={arXiv preprint arXiv:2410.03971}, year={2024}, archivePrefix={arXiv}, eprint={2410.03971}, primaryClass={cs.RO cs.CR} }
patil2024ros2-based
arxiv-665948
2410.03972
Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks
<|reference_start|>Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks: Task-trained recurrent neural networks (RNNs) are versatile models of dynamical processes widely used in machine learning and neuroscience. While RNNs are easily trained to perform a wide range of tasks, the nature and extent of the degeneracy in the resultant solutions (i.e., the variability across trained RNNs) remain poorly understood. Here, we provide a unified framework for analyzing degeneracy across three levels: behavior, neural dynamics, and weight space. We analyzed RNNs trained on diverse tasks across machine learning and neuroscience domains, including N-bit flip-flop, sine wave generation, delayed discrimination, and path integration. Our key finding is that the variability across RNN solutions, quantified on the basis of neural dynamics and trained weights, depends primarily on network capacity and task characteristics such as complexity. We introduce information-theoretic measures to quantify task complexity and demonstrate that increasing task complexity consistently reduces degeneracy in neural dynamics and generalization behavior while increasing degeneracy in weight space. These relationships hold across diverse tasks and can be used to control the degeneracy of the solution space of task-trained RNNs. Furthermore, we provide several strategies to control solution degeneracy, enabling task-trained RNNs to learn more consistent or diverse solutions as needed. We envision that these insights will lead to more reliable machine learning models and could inspire strategies to better understand and control degeneracy observed in neuroscience experiments.<|reference_end|>
arxiv
@article{huang2024measuring, title={Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks}, author={Ann Huang, Satpreet H. Singh, Kanaka Rajan}, journal={arXiv preprint arXiv:2410.03972}, year={2024}, archivePrefix={arXiv}, eprint={2410.03972}, primaryClass={cs.LG cs.IT cs.NE math.IT q-bio.NC} }
huang2024measuring
arxiv-665949
2410.03973
Efficient Training of Neural Stochastic Differential Equations by Matching Finite Dimensional Distributions
<|reference_start|>Efficient Training of Neural Stochastic Differential Equations by Matching Finite Dimensional Distributions: Neural Stochastic Differential Equations (Neural SDEs) have emerged as powerful mesh-free generative models for continuous stochastic processes, with critical applications in fields such as finance, physics, and biology. Previous state-of-the-art methods have relied on adversarial training, such as GANs, or on minimizing distance measures between processes using signature kernels. However, GANs suffer from issues like instability, mode collapse, and the need for specialized training techniques, while signature kernel-based methods require solving linear PDEs and backpropagating gradients through the solver, whose computational complexity scales quadratically with the discretization steps. In this paper, we identify a novel class of strictly proper scoring rules for comparing continuous Markov processes. This theoretical finding naturally leads to a novel approach called Finite Dimensional Matching (FDM) for training Neural SDEs. Our method leverages the Markov property of SDEs to provide a computationally efficient training objective. This scoring rule allows us to bypass the computational overhead associated with signature kernels and reduces the training complexity from $O(D^2)$ to $O(D)$ per epoch, where $D$ represents the number of discretization steps of the process. We demonstrate that FDM achieves superior performance, consistently outperforming existing methods in terms of both computational efficiency and generative quality.<|reference_end|>
arxiv
@article{zhang2024efficient, title={Efficient Training of Neural Stochastic Differential Equations by Matching Finite Dimensional Distributions}, author={Jianxin Zhang, Josh Viktorov, Doosan Jung, Emily Pitler}, journal={arXiv preprint arXiv:2410.03973}, year={2024}, archivePrefix={arXiv}, eprint={2410.03973}, primaryClass={cs.LG stat.ML} }
zhang2024efficient
arxiv-665950
2410.03974
Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport
<|reference_start|>Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport: A common challenge in aggregating data from multiple sources can be formalized as an \textit{Optimal Transport} (OT) barycenter problem, which seeks to compute the average of probability distributions with respect to OT discrepancies. However, the presence of outliers and noise in the data measures can significantly hinder the performance of traditional statistical methods for estimating OT barycenters. To address this issue, we propose a novel, scalable approach for estimating the \textit{robust} continuous barycenter, leveraging the dual formulation of the \textit{(semi-)unbalanced} OT problem. To the best of our knowledge, this paper is the first attempt to develop an algorithm for robust barycenters under the continuous distribution setup. Our method is framed as a $\min$-$\max$ optimization problem and is adaptable to \textit{general} cost functions. We rigorously establish the theoretical underpinnings of the proposed method and demonstrate its robustness to outliers and class imbalance through a number of illustrative experiments.<|reference_end|>
arxiv
@article{gazdieva2024robust, title={Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport}, author={Milena Gazdieva, Jaemoo Choi, Alexander Kolesov, Jaewoong Choi, Petr Mokrov, Alexander Korotin}, journal={arXiv preprint arXiv:2410.03974}, year={2024}, archivePrefix={arXiv}, eprint={2410.03974}, primaryClass={stat.ML cs.AI cs.LG} }
gazdieva2024robust
arxiv-665951
2410.03977
Learning to Balance: Diverse Normalization for Cloth-Changing Person Re-Identification
<|reference_start|>Learning to Balance: Diverse Normalization for Cloth-Changing Person Re-Identification: Cloth-Changing Person Re-Identification (CC-ReID) involves recognizing individuals in images regardless of clothing status. In this paper, we empirically and experimentally demonstrate that completely eliminating or fully retaining clothing features is detrimental to the task. Existing work, whether relying on clothing labels, silhouettes, or other auxiliary data, fundamentally aims to balance the learning of clothing and identity features. However, we find in practice that achieving this balance is challenging and nuanced. In this study, we introduce a novel module called Diverse Norm, which expands personal features into orthogonal spaces and employs channel attention to separate clothing and identity features. A sample re-weighting optimization strategy is also introduced to guarantee the opposite optimization direction. Diverse Norm presents a simple yet effective approach that does not require additional data. Furthermore, Diverse Norm can be seamlessly integrated into ResNet50 and significantly outperforms the state-of-the-art methods.<|reference_end|>
arxiv
@article{wang2024learning, title={Learning to Balance: Diverse Normalization for Cloth-Changing Person Re-Identification}, author={Hongjun Wang, Jiyuan Chen, Zhengwei Yin, Xuan Song, Yinqiang Zheng}, journal={arXiv preprint arXiv:2410.03977}, year={2024}, archivePrefix={arXiv}, eprint={2410.03977}, primaryClass={cs.CV cs.AI cs.LG} }
wang2024learning
arxiv-665952
2410.03978
Optimizing Sparse Generalized Singular Vectors for Feature Selection in Proximal Support Vector Machines with Application to Breast and Ovarian Cancer Detection
<|reference_start|>Optimizing Sparse Generalized Singular Vectors for Feature Selection in Proximal Support Vector Machines with Application to Breast and Ovarian Cancer Detection: This paper presents approaches to compute sparse solutions of the Generalized Singular Value Problem (GSVP). The GSVP is regularized by $\ell_1$-norm and $\ell_q$-penalty for $0<q<1$, resulting in the $\ell_1$-GSVP and $\ell_q$-GSVP formulations. The solutions of these problems are determined by applying the proximal gradient descent algorithm with a fixed step size. The inherent sparsity levels within the computed solutions are exploited for feature selection, and subsequently, binary classification with non-parallel Support Vector Machines (SVM). For our feature selection task, SVM is integrated into the $\ell_1$-GSVP and $\ell_q$-GSVP frameworks to derive the $\ell_1$-GSVPSVM and $\ell_q$-GSVPSVM variants. Machine learning applications to cancer detection are considered. Remarkably, we report near-perfect balanced accuracy across breast and ovarian cancer datasets using only a few selected features.<|reference_end|>
arxiv
@article{ugwu2024optimizing, title={Optimizing Sparse Generalized Singular Vectors for Feature Selection in Proximal Support Vector Machines with Application to Breast and Ovarian Cancer Detection}, author={Ugochukwu O. Ugwu and Michael Kirby}, journal={arXiv preprint arXiv:2410.03978}, year={2024}, archivePrefix={arXiv}, eprint={2410.03978}, primaryClass={cs.LG cs.NA math.NA math.OC q-bio.QM stat.ML} }
ugwu2024optimizing
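The entry above solves an $\ell_1$-regularized problem with proximal gradient descent at a fixed step size. The sketch below shows the generic pattern, a gradient step on the smooth part followed by soft-thresholding, applied to a toy least-squares objective; the GSVP-specific objective and the $\ell_q$ proximal operator from the paper are not reproduced, and all names and values here are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_l1(grad_f, x0, step, lam, n_iter=200):
    """Fixed-step proximal gradient for  f(x) + lam * ||x||_1,
    where grad_f is the gradient of the smooth part f."""
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

# toy smooth part: 0.5 * ||Ax - b||^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant
x_hat = proximal_gradient_l1(grad_f, np.zeros(20), step, lam=2.0)
print(int((np.abs(x_hat) > 1e-8).sum()), "nonzero features selected")
```

The zero entries of the solution are exactly what a downstream feature-selection step would discard before training the classifier.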
arxiv-665953
2410.03979
Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function
<|reference_start|>Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function: In multi-label emotion classification, particularly for low-resource languages like Arabic, the challenges of class imbalance and label correlation hinder model performance, especially in accurately predicting minority emotions. To address these issues, this study proposes a novel approach that combines stacked embeddings, meta-learning, and a hybrid loss function to enhance multi-label emotion classification for the Arabic language. The study extracts contextual embeddings from three fine-tuned language models-ArabicBERT, MarBERT, and AraBERT-which are then stacked to form enriched embeddings. A meta-learner is trained on these stacked embeddings, and the resulting concatenated representations are provided as input to a Bi-LSTM model, followed by a fully connected neural network for multi-label classification. To further improve performance, a hybrid loss function is introduced, incorporating class weighting, label correlation matrix, and contrastive learning, effectively addressing class imbalances and improving the handling of label correlations. Extensive experiments validate the proposed model's performance across key metrics such as Precision, Recall, F1-Score, Jaccard Accuracy, and Hamming Loss. The class-wise performance analysis demonstrates the hybrid loss function's ability to significantly reduce disparities between majority and minority classes, resulting in a more balanced emotion classification. An ablation study highlights the contribution of each component, showing the superiority of the model compared to baseline approaches and other loss functions. This study not only advances multi-label emotion classification for Arabic but also presents a generalizable framework that can be adapted to other languages and domains, providing a significant step forward in addressing the challenges of low-resource emotion classification tasks.<|reference_end|>
arxiv
@article{aslam2024improving, title={Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function}, author={Muhammad Azeem Aslam, Wang Jun, Nisar Ahmed, Muhammad Imran Zaman, Li Yanan, Hu Hongfei, Wang Shiyu, Xin Liu}, journal={arXiv preprint arXiv:2410.03979}, year={2024}, archivePrefix={arXiv}, eprint={2410.03979}, primaryClass={cs.CV cs.CL} }
aslam2024improving
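The entry above combines class weighting, a label correlation matrix, and contrastive learning into a hybrid loss. The snippet below sketches only the class-weighting component, a weighted binary cross-entropy with inverse-frequency weights for multi-label targets; the weighting scheme and toy labels are assumptions, and the correlation and contrastive terms are omitted.

```python
import numpy as np

def weighted_multilabel_bce(y_true, y_prob, class_weights, eps=1e-7):
    """Class-weighted binary cross-entropy for multi-label targets.
    Only the class-weighting component of the hybrid loss is sketched;
    the label-correlation and contrastive terms are not included."""
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    per_label = -(class_weights * y_true * np.log(y_prob)
                  + (1.0 - y_true) * np.log(1.0 - y_prob))
    return per_label.mean()

# inverse-frequency weights upweight minority emotions (illustrative)
y_true = np.array([[1, 0, 0, 1],
                   [0, 1, 1, 0],
                   [1, 0, 0, 0]], dtype=float)
freq = y_true.mean(axis=0) + 1e-3
class_weights = freq.max() / freq
y_prob = np.full_like(y_true, 0.5)
print(weighted_multilabel_bce(y_true, y_prob, class_weights))
```

Upweighting the positive term for rare labels pushes the model to pay more attention to minority emotions, which is the class-imbalance effect the abstract emphasizes.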
arxiv-665954
2410.03981
Survey on Code Generation for Low resource and Domain Specific Programming Languages
<|reference_start|>Survey on Code Generation for Low resource and Domain Specific Programming Languages: Large Language Models (LLMs) have shown impressive capabilities in code generation for popular programming languages. However, their performance on Low-Resource Programming Languages (LRPLs) and Domain-Specific Languages (DSLs) remains a significant challenge, affecting millions of developers-3.5 million users in Rust alone-who cannot fully utilize LLM capabilities. LRPLs and DSLs encounter unique obstacles, including data scarcity and, for DSLs, specialized syntax that is poorly represented in general-purpose datasets. Addressing these challenges is crucial, as LRPLs and DSLs enhance development efficiency in specialized domains, such as finance and science. While several surveys discuss LLMs in software engineering, none focus specifically on the challenges and opportunities associated with LRPLs and DSLs. Our survey fills this gap by systematically reviewing the current state, methodologies, and challenges in leveraging LLMs for code generation in these languages. We filtered 111 papers from over 27,000 published studies between 2020 and 2024 to evaluate the capabilities and limitations of LLMs in LRPLs and DSLs. We report the LLMs used, benchmarks, and metrics for evaluation, strategies for enhancing performance, and methods for dataset collection and curation. We identified four main evaluation techniques and several metrics for assessing code generation in LRPLs and DSLs. Our analysis categorizes improvement methods into six groups and summarizes novel architectures proposed by researchers. Despite various techniques and metrics, a standard approach and benchmark dataset for evaluating code generation in LRPLs and DSLs are lacking. This survey serves as a resource for researchers and practitioners at the intersection of LLMs, software engineering, and specialized programming languages, laying the groundwork for future advancements in code generation for LRPLs and DSLs.<|reference_end|>
arxiv
@article{joel2024a, title={A Survey on LLM-based Code Generation for Low-Resource and Domain-Specific Programming Languages}, author={Sathvik Joel, Jie JW Wu, Fatemeh H. Fard}, journal={arXiv preprint arXiv:2410.03981}, year={2024}, archivePrefix={arXiv}, eprint={2410.03981}, primaryClass={cs.SE cs.LG} }
joel2024a
arxiv-665955
2410.03982
Certified Randomness implies Secure Classical Position-Verification
<|reference_start|>Certified Randomness implies Secure Classical Position-Verification: Liu et al. (ITCS22) initiated the study of designing a secure position verification protocol based on a specific proof of quantumness protocol and classical communication. In this paper, we study this interesting topic further and answer some of the open questions that are left in that paper. We provide a new generic compiler that can convert any single round proof of quantumness-based certified randomness protocol to a secure classical communication-based position verification scheme. Later, we extend our compiler to different kinds of multi-round proof of quantumness-based certified randomness protocols. Moreover, we instantiate our compiler with a random circuit sampling (RCS)-based certified randomness protocol proposed by Aaronson and Hung (STOC 23). RCS-based techniques are within reach of today's NISQ devices; therefore, our design overcomes the limitation of the Liu et al. protocol that would require a fault-tolerant quantum computer to realize. Moreover, this is one of the first cryptographic applications of RCS-based techniques other than certified randomness.<|reference_end|>
arxiv
@article{amer2024certified, title={Certified Randomness implies Secure Classical Position-Verification}, author={Omar Amer, Kaushik Chakraborty, David Cui, Fatih Kaleoglu, Charles Lim, Minzhao Liu, and Marco Pistoia}, journal={arXiv preprint arXiv:2410.03982}, year={2024}, archivePrefix={arXiv}, eprint={2410.03982}, primaryClass={quant-ph cs.CR} }
amer2024certified
arxiv-665956
2410.03983
MetricX-24: The Google Submission to the WMT 2024 Metrics Shared Task
<|reference_start|>MetricX-24: The Google Submission to the WMT 2024 Metrics Shared Task: In this paper, we present the MetricX-24 submissions to the WMT24 Metrics Shared Task and provide details on the improvements we made over the previous version of MetricX. Our primary submission is a hybrid reference-based/-free metric, which can score a translation irrespective of whether it is given the source segment, the reference, or both. The metric is trained on previous WMT data in a two-stage fashion, first on the DA ratings only, then on a mixture of MQM and DA ratings. The training set in both stages is augmented with synthetic examples that we created to make the metric more robust to several common failure modes, such as fluent but unrelated translation, or undertranslation. We demonstrate the benefits of the individual modifications via an ablation study, and show a significant performance increase over MetricX-23 on the WMT23 MQM ratings, as well as our new synthetic challenge set.<|reference_end|>
arxiv
@article{juraska2024metricx-24:, title={MetricX-24: The Google Submission to the WMT 2024 Metrics Shared Task}, author={Juraj Juraska, Daniel Deutsch, Mara Finkelstein, Markus Freitag}, journal={arXiv preprint arXiv:2410.03983}, year={2024}, archivePrefix={arXiv}, eprint={2410.03983}, primaryClass={cs.CL} }
juraska2024metricx-24:
arxiv-665957
2410.03986
Smart Air Quality Monitoring for Automotive Workshop Environments
<|reference_start|>Smart Air Quality Monitoring for Automotive Workshop Environments: Air quality monitoring in automotive workshops is crucial for occupational health and regulatory compliance. This study presents the development of an environmental monitoring system based on Internet of Things (IoT) and Artificial Intelligence (AI) technologies. DHT-11 and MQ-135 sensors were employed to measure temperature, humidity, and toxic gas concentrations, with real-time data transmission to the ThingSpeak platform via the MQTT protocol. Machine learning algorithms, including Linear Regression, Decision Trees, and SVM, were applied to analyze the data and compute an air salubrity index based on Gaussian functions. The system proved effective in detecting pollutant peaks and issuing automatic alerts, significantly improving worker health and safety. Workshops that implemented the system reported greater regulatory compliance and reduced occupational risks. The study concludes that the combination of IoT and AI provides an efficient and replicable solution for environmental monitoring in industrial settings.<|reference_end|>
arxiv
@article{mariano2024smart, title={Smart Air Quality Monitoring for Automotive Workshop Environments}, author={Kauan Divino Pouso Mariano, Fabrycio Leite Nakano Almada, Maykon Adriell Dutra}, journal={arXiv preprint arXiv:2410.03986}, year={2024}, archivePrefix={arXiv}, eprint={2410.03986}, primaryClass={eess.SY cs.SY} }
mariano2024smart
arxiv-665958
2410.03987
Mamba Capsule Routing Towards Part-Whole Relational Camouflaged Object Detection
<|reference_start|>Mamba Capsule Routing Towards Part-Whole Relational Camouflaged Object Detection: The part-whole relational property endowed by Capsule Networks (CapsNets) is known to be successful for camouflaged object detection due to its segmentation integrity. However, the previous Expectation Maximization (EM) capsule routing algorithm, with its heavy computation and large number of parameters, obstructs this trend. The primary cause lies in the pixel-level capsule routing. Alternatively, in this paper, we propose a novel mamba capsule routing at the type level. Specifically, we first extract the implicit latent state in mamba as capsule vectors, which abstract type-level capsules from pixel-level versions. These type-level mamba capsules are fed into the EM routing algorithm to obtain the high-layer mamba capsules, which greatly reduces the computation and parameters incurred by pixel-level capsule routing for exploring part-whole relationships. On top of that, to retrieve the pixel-level capsule features for the final camouflaged prediction, we build on the low-layer pixel-level capsules under the guidance of the correlations from adjacent-layer type-level mamba capsules. Extensive experiments on three widely used COD benchmark datasets demonstrate that our method significantly outperforms state-of-the-art methods. Code is available at https://github.com/Liangbo-Cheng/mamba\_capsule.<|reference_end|>
arxiv
@article{zhang2024mamba, title={Mamba Capsule Routing Towards Part-Whole Relational Camouflaged Object Detection}, author={Dingwen Zhang, Liangbo Cheng, Yi Liu, Xinggang Wang, Junwei Han}, journal={arXiv preprint arXiv:2410.03987}, year={2024}, archivePrefix={arXiv}, eprint={2410.03987}, primaryClass={cs.CV} }
zhang2024mamba
arxiv-665959
2410.03988
Implicit Bias of Mirror Descent for Shallow Neural Networks in Univariate Regression
<|reference_start|>Implicit Bias of Mirror Descent for Shallow Neural Networks in Univariate Regression: We examine the implicit bias of mirror flow in univariate least squares error regression with wide and shallow neural networks. For a broad class of potential functions, we show that mirror flow exhibits lazy training and has the same implicit bias as ordinary gradient flow when the network width tends to infinity. For ReLU networks, we characterize this bias through a variational problem in function space. Our analysis includes prior results for ordinary gradient flow as a special case and lifts limitations which required either an intractable adjustment of the training data or networks with skip connections. We further introduce scaled potentials and show that for these, mirror flow still exhibits lazy training but is not in the kernel regime. For networks with absolute value activations, we show that mirror flow with scaled potentials induces a rich class of biases, which generally cannot be captured by an RKHS norm. A takeaway is that whereas the parameter initialization determines how strongly the curvature of the learned function is penalized at different locations of the input space, the scaled potential determines how the different magnitudes of the curvature are penalized.<|reference_end|>
arxiv
@article{liang2024implicit, title={Implicit Bias of Mirror Descent for Shallow Neural Networks in Univariate Regression}, author={Shuang Liang and Guido Mont'ufar}, journal={arXiv preprint arXiv:2410.03988}, year={2024}, archivePrefix={arXiv}, eprint={2410.03988}, primaryClass={stat.ML cs.LG} }
liang2024implicit
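The entry above studies the implicit bias of mirror flow under different potential functions. As a small concrete reference, the sketch below runs discrete mirror descent with a classical negative-entropy potential (exponentiated gradient) on a toy objective; this illustrates the general update only, not the scaled potentials or the shallow-network setting analyzed in the paper.

```python
import numpy as np

def mirror_descent(grad_f, x0, step, n_iter, grad_phi, grad_phi_inv):
    """Generic mirror descent: x_{k+1} = (grad_phi)^{-1}(grad_phi(x_k) - step * grad_f(x_k)).
    The potential used below is a classical example, not the scaled
    potentials studied in the paper."""
    x = x0.copy()
    for _ in range(n_iter):
        x = grad_phi_inv(grad_phi(x) - step * grad_f(x))
    return x

# negative-entropy potential on the positive orthant -> exponentiated gradient
grad_phi = lambda x: np.log(x) + 1.0
grad_phi_inv = lambda z: np.exp(z - 1.0)

# toy smooth objective: 0.5 * ||x - target||^2 with a positive target
target = np.array([0.5, 2.0, 1.0])
grad_f = lambda x: x - target
x_hat = mirror_descent(grad_f, np.ones(3), step=0.1, n_iter=500,
                       grad_phi=grad_phi, grad_phi_inv=grad_phi_inv)
print(np.round(x_hat, 3))  # approaches the (positive) target
```

Different choices of the potential change which solution the iterates are drawn toward when many minimizers exist, which is the implicit-bias question the paper formalizes in function space.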
arxiv-665960
2410.03989
Symmetry From Scratch: Group Equivariance as a Supervised Learning Task
<|reference_start|>Symmetry From Scratch: Group Equivariance as a Supervised Learning Task: In machine learning datasets with symmetries, the paradigm for backward compatibility with symmetry-breaking has been to relax equivariant architectural constraints, engineering extra weights to differentiate symmetries of interest. However, this process becomes increasingly over-engineered as models are geared towards specific symmetries/asymmetries hardwired into a particular set of equivariant basis functions. In this work, we introduce symmetry-cloning, a method for inducing equivariance in machine learning models. We show that general machine learning architectures (i.e., MLPs) can learn symmetries directly as a supervised learning task from group equivariant architectures and retain/break the learned symmetry for downstream tasks. This simple formulation enables machine learning models with group-agnostic architectures to capture the inductive bias of group-equivariant architectures.<|reference_end|>
arxiv
@article{huang2024symmetry, title={Symmetry From Scratch: Group Equivariance as a Supervised Learning Task}, author={Haozhe Huang, Leo Kaixuan Cheng, Kaiwen Chen, Al'an Aspuru-Guzik}, journal={arXiv preprint arXiv:2410.03989}, year={2024}, archivePrefix={arXiv}, eprint={2410.03989}, primaryClass={cs.LG} }
huang2024symmetry
arxiv-665961
2410.03992
UDE-III: An Enhanced Unified Differential Evolution Algorithm for Constrained Optimization Problems
<|reference_start|>UDE-III: An Enhanced Unified Differential Evolution Algorithm for Constrained Optimization Problems: In this paper, an enhanced unified differential evolution algorithm, named UDE-III, is presented for real parameter-constrained optimization problems (COPs). The proposed UDE-III is a significantly enhanced version of the Improved UDE (i.e., IUDE or UDE-II), which secured the 1st rank in the CEC 2018 competition on real parameter COPs. To design UDE-III, we extensively targeted the weaknesses of UDE-II. Specifically, UDE-III uses three trial vector generation strategies - DE/rand/1, DE/current-to-rand/1, and DE/current-to-pbest/1. It is based on a dual population approach, and for each generation, it divides the current population into two sub-populations. In the top sub-population, it employs all three trial vector generation strategies on each target vector. On the other hand, the bottom sub-population employs strategy adaptation and one trial vector generation strategy is implemented on each target vector. The mutation operation in UDE-III is based on ranking-based mutation. Further, it employs the parameter adaptation principle of SHADE. The constraint handling principle in UDE-III is based on a combination of the feasibility rule and epsilon-constraint handling technique. We observed that stagnation is a major weakness of UDE-II. To overcome this weakness, we took inspiration from the best-discarded vector selection (BDVS) strategy proposed in the literature and integrated a novel strategy in UDE-III to address stagnation. Additionally, unlike UDE-II, UDE-III considers the size of the two sub-populations to be a design element. Moreover, in comparison to UDE-II, UDE-III improves upon the strategy adaptation, ranking-based mutation, and the constraint handling technique. The proposed UDE-III algorithm is tested on the 28 benchmark 30D problems provided for the CEC 2024 competition on real parameter COPs. The experimental results demonstrate the superiority of UDE-III over UDE-II.<|reference_end|>
arxiv
@article{trivedi2024ude-iii:, title={UDE-III: An Enhanced Unified Differential Evolution Algorithm for Constrained Optimization Problems}, author={Anupam Trivedi and Dikshit Chauhan}, journal={arXiv preprint arXiv:2410.03992}, year={2024}, archivePrefix={arXiv}, eprint={2410.03992}, primaryClass={cs.NE} }
trivedi2024ude-iii:
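The UDE-III entry above lists DE/rand/1 among its trial vector generation strategies. The sketch below shows that single strategy with binomial crossover for one target vector; the population size, F, and CR values are illustrative, and the dual-population, strategy-adaptation, and constraint-handling machinery of UDE-III is not reproduced.

```python
import numpy as np

def de_rand_1(pop, i, F, CR, rng):
    """DE/rand/1 mutation with binomial crossover for target vector i.
    Sketches one of the three strategies listed in the abstract; the rest
    of the UDE-III algorithm is not reproduced here."""
    n, d = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                            size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True        # guarantee at least one component
    return np.where(cross, mutant, pop[i])

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 30))  # 20 candidates, 30-D problem
trial = de_rand_1(pop, i=0, F=0.5, CR=0.9, rng=rng)
print(trial.shape)  # (30,)
```

In a full DE loop, the trial vector would replace the target vector only if it is at least as good under the selection rule, which in constrained settings is where feasibility rules or the epsilon-constraint technique enter.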
arxiv-665962
2410.03993
TR-LLM: Integrating Trajectory Data for Scene-Aware LLM-Based Human Action Prediction
<|reference_start|>TR-LLM: Integrating Trajectory Data for Scene-Aware LLM-Based Human Action Prediction: Accurate prediction of human behavior is crucial for AI systems to effectively support real-world applications, such as autonomous robots anticipating and assisting with human tasks. Real-world scenarios frequently present challenges such as occlusions and incomplete scene observations, which can compromise predictive accuracy. Thus, traditional video-based methods often struggle due to limited temporal and spatial perspectives. Large Language Models (LLMs) offer a promising alternative. Having been trained on a large text corpus describing human behaviors, LLMs likely encode plausible sequences of human actions in a home environment. However, LLMs, trained primarily on text data, lack inherent spatial awareness and real-time environmental perception. They struggle with understanding physical constraints and spatial geometry. Therefore, to be effective in a real-world spatial scenario, we propose a multimodal prediction framework that enhances LLM-based action prediction by integrating physical constraints derived from human trajectories. Our experiments demonstrate that combining LLM predictions with trajectory data significantly improves overall prediction performance. This enhancement is particularly notable in situations where the LLM receives limited scene information, highlighting the complementary nature of linguistic knowledge and physical constraints in understanding and anticipating human behavior.<|reference_end|>
arxiv
@article{takeyama2024tr-llm:, title={TR-LLM: Integrating Trajectory Data for Scene-Aware LLM-Based Human Action Prediction}, author={Kojiro Takeyama, Yimeng Liu, Misha Sra}, journal={arXiv preprint arXiv:2410.03993}, year={2024}, archivePrefix={arXiv}, eprint={2410.03993}, primaryClass={cs.HC} }
takeyama2024tr-llm:
arxiv-665963
2410.03996
On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models
<|reference_start|>On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models: We study the presence of heteronormative biases and prejudice against interracial romantic relationships in large language models by performing controlled name-replacement experiments for the task of relationship prediction. We show that models are less likely to predict romantic relationships for (a) same-gender character pairs than different-gender pairs; and (b) intra/inter-racial character pairs involving Asian names as compared to Black, Hispanic, or White names. We examine the contextualized embeddings of first names and find that gender for Asian names is less discernible than non-Asian names. We discuss the social implications of our findings, underlining the need to prioritize the development of inclusive and equitable technology.<|reference_end|>
arxiv
@article{sancheti2024on, title={On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models}, author={Abhilasha Sancheti, Haozhe An, Rachel Rudinger}, journal={arXiv preprint arXiv:2410.03996}, year={2024}, archivePrefix={arXiv}, eprint={2410.03996}, primaryClass={cs.CL} }
sancheti2024on
arxiv-665964
2410.03997
YOLO-MARL: You Only LLM Once for Multi-agent Reinforcement Learning
<|reference_start|>YOLO-MARL: You Only LLM Once for Multi-agent Reinforcement Learning: Advancements in deep multi-agent reinforcement learning (MARL) have positioned it as a promising approach for decision-making in cooperative games. However, it remains challenging for MARL agents to learn cooperative strategies in some game environments. Recently, large language models (LLMs) have demonstrated emergent reasoning capabilities, making them promising candidates for enhancing coordination among the agents. However, due to the model size of LLMs, it can be expensive to frequently infer LLMs for actions that agents can take. In this work, we propose You Only LLM Once for MARL (YOLO-MARL), a novel framework that leverages the high-level task planning capabilities of LLMs to improve the policy learning process of multiple agents in cooperative games. Notably, for each game environment, YOLO-MARL only requires a one-time interaction with LLMs in the proposed strategy generation, state interpretation, and planning function generation modules, before the MARL policy training process. This avoids the ongoing costs and computational time associated with frequent LLM API calls during training. Moreover, the trained decentralized normal-sized neural network-based policies operate independently of the LLM. We evaluate our method across three different environments and demonstrate that YOLO-MARL outperforms traditional MARL algorithms.<|reference_end|>
arxiv
@article{zhuang2024yolo-marl:, title={YOLO-MARL: You Only LLM Once for Multi-agent Reinforcement Learning}, author={Yuan Zhuang, Yi Shen, Zhili Zhang, Yuxiao Chen, Fei Miao}, journal={arXiv preprint arXiv:2410.03997}, year={2024}, archivePrefix={arXiv}, eprint={2410.03997}, primaryClass={cs.MA} }
zhuang2024yolo-marl:
arxiv-665965
2410.03999
Impact of Regularization on Calibration and Robustness: from the Representation Space Perspective
<|reference_start|>Impact of Regularization on Calibration and Robustness: from the Representation Space Perspective: Recent studies have shown that regularization techniques using soft labels, e.g., label smoothing, Mixup, and CutMix, not only enhance image classification accuracy but also improve model calibration and robustness against adversarial attacks. However, the underlying mechanisms of such improvements remain underexplored. In this paper, we offer a novel explanation from the perspective of the representation space (i.e., the space of the features obtained at the penultimate layer). Our investigation first reveals that the decision regions in the representation space form cone-like shapes around the origin after training regardless of the presence of regularization. However, applying regularization causes changes in the distribution of features (or representation vectors). The magnitudes of the representation vectors are reduced and subsequently the cosine similarities between the representation vectors and the class centers (minimal loss points for each class) become higher, which acts as a central mechanism inducing improved calibration and robustness. Our findings provide new insights into the characteristics of the high-dimensional representation space in relation to training and regularization using soft labels.<|reference_end|>
arxiv
@article{park2024impact, title={Impact of Regularization on Calibration and Robustness: from the Representation Space Perspective}, author={Jonghyun Park, Juyeop Kim, Jong-Seok Lee}, journal={arXiv preprint arXiv:2410.03999}, year={2024}, archivePrefix={arXiv}, eprint={2410.03999}, primaryClass={cs.CV} }
park2024impact
arxiv-665966
2410.04000
Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images
<|reference_start|>Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images: Various imaging modalities are used in patient diagnosis, each offering unique advantages and valuable insights into anatomy and pathology. Computed Tomography (CT) is crucial in diagnostics, providing high-resolution images for precise internal organ visualization. CT's ability to detect subtle tissue variations is vital for diagnosing diseases like lung cancer, enabling early detection and accurate tumor assessment. However, variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features, even when imaging the same patient. This variability poses considerable challenges for downstream research and clinical analysis, which depend on consistent and reliable feature extraction. Current methods for medical image feature extraction, often based on supervised learning approaches, including GAN-based models, face limitations in generalizing across different imaging environments. In response to these challenges, we propose LTDiff++, a multiscale latent diffusion model designed to enhance feature extraction in medical imaging. The model addresses variability by standardizing non-uniform distributions in the latent space, improving feature consistency. LTDiff++ utilizes a UNet++ encoder-decoder architecture coupled with a conditional Denoising Diffusion Probabilistic Model (DDPM) at the latent bottleneck to achieve robust feature extraction and standardization. Extensive empirical evaluations on both patient and phantom CT datasets demonstrate significant improvements in image standardization, with higher Concordance Correlation Coefficients (CCC) across multiple radiomic feature categories. Through these advancements, LTDiff++ represents a promising solution for overcoming the inherent variability in medical imaging data, offering improved reliability and accuracy in feature extraction processes.<|reference_end|>
arxiv
@article{sadia2024multiscale, title={Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images}, author={Rabeya Tus Sadia, Jie Zhang, Jin Chen}, journal={arXiv preprint arXiv:2410.04000}, year={2024}, archivePrefix={arXiv}, eprint={2410.04000}, primaryClass={eess.IV cs.CV} }
sadia2024multiscale
arxiv-665967
2410.04001
FastLRNR and Sparse Physics Informed Backpropagation
<|reference_start|>FastLRNR and Sparse Physics Informed Backpropagation: We introduce Sparse Physics Informed Backpropagation (SPInProp), a new class of methods for accelerating backpropagation for a specialized neural network architecture called Low Rank Neural Representation (LRNR). The approach exploits the low rank structure within LRNR and constructs a reduced neural network approximation that is much smaller in size. We call the smaller network FastLRNR. We show that backpropagation of FastLRNR can be substituted for that of LRNR, enabling a significant reduction in complexity. We apply SPInProp to a physics informed neural networks framework and demonstrate how the solution of parametrized partial differential equations is accelerated.<|reference_end|>
arxiv
@article{cho2024fastlrnr, title={FastLRNR and Sparse Physics Informed Backpropagation}, author={Woojin Cho, Kookjin Lee, Noseong Park, Donsub Rim, Gerrit Welper}, journal={arXiv preprint arXiv:2410.04001}, year={2024}, archivePrefix={arXiv}, eprint={2410.04001}, primaryClass={cs.LG cs.AI cs.NA math.NA} }
cho2024fastlrnr
arxiv-665968
2410.04002
Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation
<|reference_start|>Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation: Computational methods to aid journalists in the fact-checking task often require adapting a model to specific domains and generating explanations. However, most automated fact-checking methods rely on three-class datasets, which do not accurately reflect real-world misinformation. Moreover, fact-checking explanations are often generated based on text summarization of evidence, failing to address the relationship between the claim and the evidence. To address these issues, we extend the self-rationalization method--typically used in natural language inference (NLI) tasks--to fact verification. We propose a label-adaptive learning approach: first, we fine-tune a model to learn veracity prediction with annotated labels (step-1 model). Then, we fine-tune the step-1 model again to learn self-rationalization, using the same data and additional annotated explanations. Our results show that our label-adaptive approach improves veracity prediction by more than ten percentage points (Macro F1) on both the PubHealth and AVeriTec datasets, outperforming the GPT-4 model. Furthermore, to address the high cost of explanation annotation, we generated 64 synthetic explanations from three large language models: GPT-4-turbo, GPT-3.5-turbo, and Llama-3-8B, and few-shot fine-tuned our step-1 model. The few-shot synthetic explanation fine-tuned model performed comparably to the fully fine-tuned self-rationalization model, demonstrating the potential of low-budget learning with synthetic data. Our label-adaptive self-rationalization approach presents a promising direction for future research on real-world explainable fact-checking with different labeling schemes.<|reference_end|>
arxiv
@article{yang2024take, title={Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation}, author={Jing Yang and Anderson Rocha}, journal={arXiv preprint arXiv:2410.04002}, year={2024}, archivePrefix={arXiv}, eprint={2410.04002}, primaryClass={cs.CL cs.AI} }
yang2024take
arxiv-665969
2410.04004
Compositional Planning for Logically Constrained Multi-Agent Markov Decision Processes
<|reference_start|>Compositional Planning for Logically Constrained Multi-Agent Markov Decision Processes: Designing control policies for large, distributed systems is challenging, especially in the context of critical, temporal logic based specifications (e.g., safety) that must be met with high probability. Compositional methods for such problems are needed for scalability, yet relying on worst-case assumptions for decomposition tends to be overly conservative. In this work, we use the framework of Constrained Markov Decision Processes (CMDPs) to provide an assume-guarantee based decomposition for synthesizing decentralized control policies, subject to logical constraints in a multi-agent setting. The returned policies are guaranteed to satisfy the constraints with high probability and provide a lower bound on the achieved objective reward. We empirically find the returned policies to achieve near-optimal rewards while enjoying an order of magnitude reduction in problem size and execution time.<|reference_end|>
arxiv
@article{kalagarla2024compositional, title={Compositional Planning for Logically Constrained Multi-Agent Markov Decision Processes}, author={Krishna C. Kalagarla (1 and 2), Matthew Low (1), Rahul Jain (1), Ashutosh Nayyar (1), Pierluigi Nuzzo (1 and 3) ((1) University of Southern California, Los Angeles, (2) University of New Mexico, Albuquerque, (3) University of California, Berkeley)}, journal={arXiv preprint arXiv:2410.04004}, year={2024}, archivePrefix={arXiv}, eprint={2410.04004}, primaryClass={eess.SY cs.SY} }
kalagarla2024compositional
arxiv-665970
2410.04005
Enhancing the Travel Experience for People with Visual Impairments through Multimodal Interaction: NaviGPT, A Real-Time AI-Driven Mobile Navigation System
<|reference_start|>Enhancing the Travel Experience for People with Visual Impairments through Multimodal Interaction: NaviGPT, A Real-Time AI-Driven Mobile Navigation System: Assistive technologies for people with visual impairments (PVI) have made significant advancements, particularly with the integration of artificial intelligence (AI) and real-time sensor technologies. However, current solutions often require PVI to switch between multiple apps and tools for tasks like image recognition, navigation, and obstacle detection, which can hinder a seamless and efficient user experience. In this paper, we present NaviGPT, a high-fidelity prototype that integrates LiDAR-based obstacle detection, vibration feedback, and large language model (LLM) responses to provide a comprehensive and real-time navigation aid for PVI. Unlike existing applications such as Be My AI and Seeing AI, NaviGPT combines image recognition and contextual navigation guidance into a single system, offering continuous feedback on the user's surroundings without the need for app-switching. Meanwhile, NaviGPT compensates for the response delays of LLM by using location and sensor data, aiming to provide practical and efficient navigation support for PVI in dynamic environments.<|reference_end|>
arxiv
@article{zhang2024enhancing, title={Enhancing the Travel Experience for People with Visual Impairments through Multimodal Interaction: NaviGPT, A Real-Time AI-Driven Mobile Navigation System}, author={He Zhang, Nicholas J. Falletta, Jingyi Xie, Rui Yu, Sooyeon Lee, Syed Masum Billah, John M. Carroll}, journal={arXiv preprint arXiv:2410.04005}, year={2024}, archivePrefix={arXiv}, eprint={2410.04005}, primaryClass={cs.HC} }
zhang2024enhancing
arxiv-665971
2410.04009
ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs
<|reference_start|>ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs: Large Language Models (LLMs) have become integral to many applications, with system prompts serving as a key mechanism to regulate model behavior and ensure ethical outputs. In this paper, we introduce a novel backdoor attack that systematically bypasses these system prompts, posing significant risks to the AI supply chain. Under normal conditions, the model adheres strictly to its system prompts. However, our backdoor allows malicious actors to circumvent these safeguards when triggered. Specifically, we explore a scenario where an LLM provider embeds a covert trigger within the base model. A downstream deployer, unaware of the hidden trigger, fine-tunes the model and offers it as a service to users. Malicious actors can purchase the trigger from the provider and use it to exploit the deployed model, disabling system prompts and achieving restricted outcomes. Our attack utilizes a permutation trigger, which activates only when its components are arranged in a precise order, making it computationally challenging to detect or reverse-engineer. We evaluate our approach on five state-of-the-art models, demonstrating that our method achieves an attack success rate (ASR) of up to 99.50% while maintaining a clean accuracy (CACC) of 98.58%, even after defensive fine-tuning. These findings highlight critical vulnerabilities in LLM deployment pipelines and underscore the need for stronger defenses.<|reference_end|>
arxiv
@article{yan2024aspirer:, title={ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs}, author={Lu Yan, Siyuan Cheng, Xuan Chen, Kaiyuan Zhang, Guangyu Shen, Zhuo Zhang, Xiangyu Zhang}, journal={arXiv preprint arXiv:2410.04009}, year={2024}, archivePrefix={arXiv}, eprint={2410.04009}, primaryClass={cs.CR} }
yan2024aspirer:
arxiv-665972
2410.04010
Hyperbolic Fine-tuning for Large Language Models
<|reference_start|>Hyperbolic Fine-tuning for Large Language Models: Large language models (LLMs) have demonstrated remarkable performance on various tasks. However, it remains an open question whether the default Euclidean space is the most suitable choice for embedding tokens in LLMs. In this study, we first investigate the non-Euclidean characteristics of LLMs. Our findings reveal that token frequency follows a power-law distribution, with high-frequency tokens clustering near the origin and low-frequency tokens positioned farther away. Additionally, token embeddings exhibit a high degree of hyperbolicity, indicating a latent tree-like structure in the embedding space. Building on this observation, we propose to efficiently fine-tune LLMs in hyperbolic space to better exploit the underlying complex structures. However, we found that this fine-tuning in hyperbolic space cannot be achieved with naive application of exponential and logarithmic maps, when the embedding and weight matrices both reside in Euclidean space. To address this technical issue, we introduce a new method called hyperbolic low-rank efficient fine-tuning, HypLoRA, that performs low-rank adaptation directly on the hyperbolic manifold, avoiding the cancellation effect caused by the exponential and logarithmic maps, thus preserving the hyperbolic modeling capabilities. Through extensive experiments, we demonstrate that HypLoRA significantly enhances the performance of LLMs on reasoning tasks, particularly for complex reasoning problems. In particular, HypLoRA improves performance on the complex AQuA dataset by up to 13.0%, showcasing its effectiveness in handling complex reasoning challenges.<|reference_end|>
arxiv
@article{yang2024hyperbolic, title={Hyperbolic Fine-tuning for Large Language Models}, author={Menglin Yang, Aosong Feng, Bo Xiong, Jihong Liu, Irwin King, Rex Ying}, journal={arXiv preprint arXiv:2410.04010}, year={2024}, archivePrefix={arXiv}, eprint={2410.04010}, primaryClass={cs.LG cs.AI cs.CL cs.NE} }
yang2024hyperbolic
arxiv-665973
2410.04011
Kalman Filter Applied To A Differential Robot
<|reference_start|>Kalman Filter Applied To A Differential Robot: This document presents a study of the localization problem and of the trajectory that a robot must follow. It focuses on applying the Kalman filter to estimate the location and trajectory of an autonomous mobile differential robot. The experimental data were obtained through tests using two incremental encoders that are part of the construction of the differential robot. Data transmission is carried out from a PC, where the control is implemented with the Matlab/Simulink software. The results are presented in graphs showing the path followed by the robot under PI control together with the Kalman filter estimator in a real system.<|reference_end|>
arxiv
@article{vera2024kalman, title={Kalman Filter Applied To A Differential Robot}, author={Sendey Vera, Luis Chuquimarca, Douglas Plaza}, journal={2023 1st International Conference on Circuits, Power and Intelligent Systems (CCPIS) (pp. 1-6). IEEE}, year={2024}, doi={10.1109/CCPIS59145.2023.10291441}, archivePrefix={arXiv}, eprint={2410.04011}, primaryClass={cs.RO cs.SY eess.SY} }
vera2024kalman
arxiv-665974
2410.04012
JAM: A Comprehensive Model for Age Estimation, Verification, and Comparability
<|reference_start|>JAM: A Comprehensive Model for Age Estimation, Verification, and Comparability: This paper introduces a comprehensive model for age estimation, verification, and comparability, offering a comprehensive solution for a wide range of applications. It employs advanced learning techniques to understand age distribution and uses confidence scores to create probabilistic age ranges, enhancing its ability to handle ambiguous cases. The model has been tested on both proprietary and public datasets and compared against one of the top-performing models in the field. Additionally, it has recently been evaluated by NIST as part of the FATE challenge, achieving top places in many categories.<|reference_end|>
arxiv
@article{david2024jam:, title={JAM: A Comprehensive Model for Age Estimation, Verification, and Comparability}, author={Fran\c{c}ois David, Alexey A. Novikov, Ruslan Parkhomenko, Artem Voronin, Alix Melchy}, journal={arXiv preprint arXiv:2410.04012}, year={2024}, archivePrefix={arXiv}, eprint={2410.04012}, primaryClass={cs.CV cs.AI} }
david2024jam:
arxiv-665975
2410.04013
Improving Temporal Link Prediction via Temporal Walk Matrix Projection
<|reference_start|>Improving Temporal Link Prediction via Temporal Walk Matrix Projection: Temporal link prediction, aiming at predicting future interactions among entities based on historical interactions, is crucial for a series of real-world applications. Although previous methods have demonstrated the importance of relative encodings for effective temporal link prediction, computational efficiency remains a major concern in constructing these encodings. Moreover, existing relative encodings are usually constructed based on structural connectivity, where temporal information is seldom considered. To address the aforementioned issues, we first analyze existing relative encodings and unify them as a function of temporal walk matrices. This unification establishes a connection between relative encodings and temporal walk matrices, providing a more principled way for analyzing and designing relative encodings. Based on this analysis, we propose a new temporal graph neural network called TPNet, which introduces a temporal walk matrix that incorporates the time decay effect to simultaneously consider both temporal and structural information. Moreover, TPNet designs a random feature propagation mechanism with theoretical guarantees to implicitly maintain the temporal walk matrices, which improves the computation and storage efficiency. Experimental results on 13 benchmark datasets verify the effectiveness and efficiency of TPNet, where TPNet outperforms other baselines on most datasets and achieves a maximum speedup of $33.3 \times$ compared to the SOTA baseline. Our code can be found at \url{https://github.com/lxd99/TPNet}.<|reference_end|>
arxiv
@article{lu2024improving, title={Improving Temporal Link Prediction via Temporal Walk Matrix Projection}, author={Xiaodong Lu, Leilei Sun, Tongyu Zhu, Weifeng Lv}, journal={arXiv preprint arXiv:2410.04013}, year={2024}, archivePrefix={arXiv}, eprint={2410.04013}, primaryClass={cs.LG} }
lu2024improving
arxiv-665976
2410.04016
Development of a Mouse for Individuals Without Upper Limbs Using Arduino Technology
<|reference_start|>Development of a Mouse for Individuals Without Upper Limbs Using Arduino Technology: This project focuses on the design and construction of a prototype mouse based on the Arduino platform, intended to enable individuals without upper limbs to use computers more effectively. The prototype comprises a microcontroller responsible for processing signals from the MPU-6050 sensor, used as a reference for cursor position, and foot-operated buttons for right- and left-click functions. Its design enables cursor control through head movements, providing users with an easy and intuitive way to interact with the computer's graphical interface. Feasibility testing was conducted through experimental trials, resulting in ideal accuracy and precision. These trials indicate that the device is viable for use by individuals without upper limbs.<|reference_end|>
arxiv
@article{gunsha2024development, title={Development of a Mouse for Individuals Without Upper Limbs Using Arduino Technology}, author={Alfonso Gunsha, Luis Chuquimarca, Pedro Pardo, David Herrera}, journal={2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE) (pp. 1-5). IEEE}, year={2024}, doi={10.1109/ic-ETITE58242.2024.10493246}, archivePrefix={arXiv}, eprint={2410.04016}, primaryClass={cs.HC cs.SY eess.SP eess.SY} }
gunsha2024development
arxiv-665977
2410.04018
High order ADER-DG method with local DG predictor for solutions of differential-algebraic systems of equations
<|reference_start|>High order ADER-DG method with local DG predictor for solutions of differential-algebraic systems of equations: A numerical method ADER-DG with a local DG predictor for solving DAE systems has been developed, based on the formulation of ADER-DG methods using a local DG predictor for solving ODE and PDE systems. The basis functions were chosen in the form of Lagrange interpolation polynomials with nodal points at the roots of the Radau polynomials, which differs from the classical formulations of the ADER-DG method, where it is customary to use the roots of Legendre polynomials. It was shown that the use of this basis leads to A-stability and L1-stability when the DAE solver is used as an ODE solver. The numerical method ADER-DG allows one to obtain a highly accurate numerical solution even on very coarse grids, with a step greater than the main characteristic scale of solution variation. The local discrete time solution can be used as a numerical solution of the DAE system between grid nodes, thereby providing subgrid resolution even in the case of very coarse grids. The classical test examples were solved with the developed ADER-DG numerical method. With increasing index of the DAE system, a decrease in the empirical convergence orders p is observed. An unexpected result was obtained in the numerical solution of the stiff DAE system -- the empirical convergence orders of the numerical solution obtained using the developed method turned out to be significantly higher than the values expected for this method in the case of stiff problems. It turns out that the use of Lagrange interpolation polynomials with nodal points at the roots of the Radau polynomials is much better suited for solving stiff problems. Estimates showed that the computational costs of the ADER-DG method are approximately comparable to those of implicit Runge-Kutta methods used to solve DAE systems. Methods were proposed to reduce the computational costs of the ADER-DG method.<|reference_end|>
arxiv
@article{popov2024high, title={High order ADER-DG method with local DG predictor for solutions of differential-algebraic systems of equations}, author={I.S. Popov}, journal={arXiv preprint arXiv:2410.04018}, year={2024}, archivePrefix={arXiv}, eprint={2410.04018}, primaryClass={math.NA cs.NA math.FA physics.app-ph physics.comp-ph} }
popov2024high
arxiv-665978
2410.04022
Efficient Large-Scale Urban Parking Prediction: Graph Coarsening Based on Real-Time Parking Service Capability
<|reference_start|>Efficient Large-Scale Urban Parking Prediction: Graph Coarsening Based on Real-Time Parking Service Capability: With the sharp increase in the number of vehicles, the issue of parking difficulties has emerged as an urgent challenge that many cities need to address promptly. In the task of predicting large-scale urban parking data, existing research often lacks effective deep learning models and strategies. To tackle this challenge, this paper proposes an innovative framework for predicting large-scale urban parking graphs leveraging real-time service capabilities, aimed at improving the accuracy and efficiency of parking predictions. Specifically, we introduce a graph attention mechanism that assesses the real-time service capabilities of parking lots to construct a dynamic parking graph that accurately reflects real preferences in parking behavior. To effectively handle large-scale parking data, this study combines graph coarsening techniques with temporal convolutional autoencoders to achieve unified dimension reduction of the complex urban parking graph structure and features. Subsequently, we use a spatio-temporal graph convolutional model to make predictions based on the coarsened graph, and a pre-trained autoencoder-decoder module restores the predicted results to their original data dimensions, completing the task. Our methodology has been rigorously tested on a real dataset from parking lots in Shenzhen. The experimental results indicate that compared to traditional parking prediction models, our framework achieves improvements of 46.8\% and 30.5\% in accuracy and efficiency, respectively. Remarkably, with the expansion of the graph's scale, our framework's advantages become even more apparent, showcasing its substantial potential for solving complex urban parking dilemmas in practical scenarios.<|reference_end|>
arxiv
@article{wang2024efficient, title={Efficient Large-Scale Urban Parking Prediction: Graph Coarsening Based on Real-Time Parking Service Capability}, author={Yixuan Wang, Zhenwu Chen, Kangshuai Zhang, Yunduan Cui, Lei Peng}, journal={arXiv preprint arXiv:2410.04022}, year={2024}, archivePrefix={arXiv}, eprint={2410.04022}, primaryClass={cs.LG cs.AI} }
wang2024efficient
arxiv-665979
2410.04025
IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback
<|reference_start|>IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback: Research ideation involves broadly exploring and deeply refining ideas. Both require deep engagement with literature. Existing tools focus primarily on broad idea generation, yet offer little support for the iterative specification, refinement, and evaluation needed to further develop initial ideas. To bridge this gap, we introduce IdeaSynth, a research idea development system that uses LLMs to provide literature-grounded feedback for articulating research problems, solutions, evaluations, and contributions. IdeaSynth represents these idea facets as nodes on a canvas, and allows researchers to iteratively refine them by creating and exploring variations and composing them. Our lab study (N=20) showed that participants, while using IdeaSynth, explored more alternative ideas and expanded initial ideas with more details compared to a strong LLM-based baseline. Our deployment study (N=7) demonstrated that participants effectively used IdeaSynth for real-world research projects at various ideation stages, from developing initial ideas to revising framings of mature manuscripts, highlighting the possibility of adopting IdeaSynth in researchers' workflows.<|reference_end|>
arxiv
@article{pu2024ideasynth:, title={IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback}, author={Kevin Pu, K. J. Kevin Feng, Tovi Grossman, Tom Hope, Bhavana Dalvi Mishra, Matt Latzke, Jonathan Bragg, Joseph Chee Chang, Pao Siangliulue}, journal={arXiv preprint arXiv:2410.04025}, year={2024}, archivePrefix={arXiv}, eprint={2410.04025}, primaryClass={cs.HC cs.AI} }
pu2024ideasynth:
arxiv-665980
2410.04027
A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models
<|reference_start|>A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models: This work proposes a simple training-free prompt-free approach to leverage large language models (LLMs) for the Chinese spelling correction (CSC) task, which is totally different from all previous CSC approaches. The key idea is to use an LLM as a pure language model in a conventional manner. The LLM goes through the input sentence from the beginning, and at each inference step, produces a distribution over its vocabulary for deciding the next token, given a partial sentence. To ensure that the output sentence remains faithful to the input sentence, we design a minimal distortion model that utilizes pronunciation or shape similarities between the original and replaced characters. Furthermore, we propose two useful reward strategies to address practical challenges specific to the CSC task. Experiments on five public datasets demonstrate that our approach significantly improves LLM performance, enabling them to compete with state-of-the-art domain-general CSC models.<|reference_end|>
arxiv
@article{zhou2024a, title={A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models}, author={Houquan Zhou, Zhenghua Li, Bo Zhang, Chen Li, Shaopeng Lai, Ji Zhang, Fei Huang, Min Zhang}, journal={arXiv preprint arXiv:2410.04027}, year={2024}, archivePrefix={arXiv}, eprint={2410.04027}, primaryClass={cs.CL} }
zhou2024a
arxiv-665981
2410.04029
SyllableLM: Learning Coarse Semantic Units for Speech Language Models
<|reference_start|>SyllableLM: Learning Coarse Semantic Units for Speech Language Models: Language models require tokenized inputs. However, tokenization strategies for continuous data like audio and vision are often based on simple heuristics such as fixed sized convolutions or discrete clustering, which do not necessarily align with the semantic structure of the data. For speech in particular, the high resolution of waveforms (16,000 samples/second or more) presents a significant challenge as speech-based language models have had to use several times more tokens per word than text-based language models. In this work, we introduce a controllable self-supervised technique to merge speech representations into coarser syllable-like units while still preserving semantic information. We do this by 1) extracting noisy boundaries through analyzing correlations in pretrained encoder losses and 2) iteratively improving model representations with a novel distillation technique. Our method produces controllable-rate semantic units at as low as 5Hz and 60bps and achieves SotA in syllabic segmentation and clustering. Using these coarse tokens, we successfully train SyllableLM, a Speech Language Model (SpeechLM) that matches or outperforms current SotA SpeechLMs on a range of spoken language modeling tasks. SyllableLM also achieves significant improvements in efficiency with a 30x reduction in training compute and a 4x wall-clock inference speedup.<|reference_end|>
arxiv
@article{baade2024syllablelm:, title={SyllableLM: Learning Coarse Semantic Units for Speech Language Models}, author={Alan Baade, Puyuan Peng, David Harwath}, journal={arXiv preprint arXiv:2410.04029}, year={2024}, archivePrefix={arXiv}, eprint={2410.04029}, primaryClass={cs.CL cs.AI eess.AS} }
baade2024syllablelm:
arxiv-665982
2410.04032
ForgeryTTT: Zero-Shot Image Manipulation Localization with Test-Time Training
<|reference_start|>ForgeryTTT: Zero-Shot Image Manipulation Localization with Test-Time Training: Social media is increasingly plagued by realistic fake images, making it hard to trust content. Previous algorithms to detect these fakes often fail in new, real-world scenarios because they are trained on specific datasets. To address the problem, we introduce ForgeryTTT, the first method leveraging test-time training (TTT) to identify manipulated regions in images. The proposed approach fine-tunes the model for each individual test sample, improving its performance. ForgeryTTT first employs vision transformers as a shared image encoder to learn both classification and localization tasks simultaneously during the training-time training using a large synthetic dataset. Precisely, the localization head predicts a mask to highlight manipulated areas. Given such a mask, the input tokens can be divided into manipulated and genuine groups, which are then fed into the classification head to distinguish between manipulated and genuine parts. During test-time training, the predicted mask from the localization head is used for the classification head to update the image encoder for better adaptation. Additionally, using the classical dropout strategy in each token group significantly improves performance and efficiency. We test ForgeryTTT on five standard benchmarks. Despite its simplicity, ForgeryTTT achieves a 20.1% improvement in localization accuracy compared to other zero-shot methods and a 4.3% improvement over non-zero-shot techniques. Our code and data will be released upon publication.<|reference_end|>
arxiv
@article{liu2024forgeryttt:, title={ForgeryTTT: Zero-Shot Image Manipulation Localization with Test-Time Training}, author={Weihuang Liu, Xi Shen, Chi-Man Pun, Xiaodong Cun}, journal={arXiv preprint arXiv:2410.04032}, year={2024}, archivePrefix={arXiv}, eprint={2410.04032}, primaryClass={cs.CV} }
liu2024forgeryttt:
arxiv-665983
2410.04034
GraHTP: A Provable Newton-like Algorithm for Sparse Phase Retrieval
<|reference_start|>GraHTP: A Provable Newton-like Algorithm for Sparse Phase Retrieval: This paper investigates the sparse phase retrieval problem, which aims to recover a sparse signal from a system of quadratic measurements. In this work, we propose a novel non-convex algorithm, termed Gradient Hard Thresholding Pursuit (GraHTP), for sparse phase retrieval with complex sensing vectors. GraHTP is theoretically provable and exhibits high efficiency, achieving a quadratic convergence rate after a finite number of iterations, while maintaining low computational complexity per iteration. Numerical experiments further demonstrate GraHTP's superior performance compared to state-of-the-art algorithms.<|reference_end|>
arxiv
@article{dai2024grahtp:, title={GraHTP: A Provable Newton-like Algorithm for Sparse Phase Retrieval}, author={Licheng Dai, Xiliang Lu, Juntao You}, journal={arXiv preprint arXiv:2410.04034}, year={2024}, archivePrefix={arXiv}, eprint={2410.04034}, primaryClass={math.NA cs.NA} }
dai2024grahtp:
arxiv-665984
2410.04035
Gamifying XAI: Enhancing AI Explainability for Non-technical Users through LLM-Powered Narrative Gamifications
<|reference_start|>Gamifying XAI: Enhancing AI Explainability for Non-technical Users through LLM-Powered Narrative Gamifications: Artificial intelligence (AI) has become tightly integrated into modern technology, yet existing exploratory visualizations for explainable AI (XAI) are primarily designed for users with technical expertise. This leaves everyday users, who also regularly interact with AI systems, with limited resources to explore or understand AI technologies they use. We propose a novel framework that enables non-technical users to collect insights by conversing directly with visualization elements via LLM-powered narrative gamifications. We implemented a prototype that utilizes such gamification to facilitate non-technical users' exploration of AI embedding projections. We conducted a comparative study with 10 participants to assess our prototype quantitatively and qualitatively. Our study results indicate that although our prototype effectively enhances non-technical users' AI/XAI knowledge, and users believe they learn more through the gamification feature, it remains inconclusive whether the gamification itself leads to further improvements in understanding. In addition, opinions among participants regarding the framework's engagement are mixed: some believe it enhances their exploration of the visualizations, while others feel it disrupts their workflow.<|reference_end|>
arxiv
@article{you2024gamifying, title={Gamifying XAI: Enhancing AI Explainability for Non-technical Users through LLM-Powered Narrative Gamifications}, author={Yuzhe You and Jian Zhao}, journal={arXiv preprint arXiv:2410.04035}, year={2024}, archivePrefix={arXiv}, eprint={2410.04035}, primaryClass={cs.HC} }
you2024gamifying
arxiv-665985
2410.04037
Is Score Matching Suitable for Estimating Point Processes?
<|reference_start|>Is Score Matching Suitable for Estimating Point Processes?: Score matching estimators have gained widespread attention in recent years partly because they are free from calculating the integral of the normalizing constant, thereby addressing the computational challenges in maximum likelihood estimation (MLE). Some existing works have proposed score matching estimators for point processes. However, this work demonstrates that the incompleteness of the estimators proposed in those works renders them applicable only to specific problems, and they fail for more general point processes. To address this issue, this work introduces the weighted score matching estimator for point processes. Theoretically, we prove the consistency of our estimator and establish its rate of convergence. Experimental results indicate that our estimator accurately estimates model parameters on synthetic data and yields results consistent with MLE on real data. In contrast, existing score matching estimators fail to perform effectively. Codes are publicly available at \url{https://github.com/KenCao2007/WSM_TPP}.<|reference_end|>
arxiv
@article{cao2024is, title={Is Score Matching Suitable for Estimating Point Processes?}, author={Haoqun Cao, Zizhuo Meng, Tianjun Ke, Feng Zhou}, journal={arXiv preprint arXiv:2410.04037}, year={2024}, archivePrefix={arXiv}, eprint={2410.04037}, primaryClass={stat.ML cs.LG} }
cao2024is
arxiv-665986
2410.04038
Gamified crowd-sourcing of high-quality data for visual fine-tuning
<|reference_start|>Gamified crowd-sourcing of high-quality data for visual fine-tuning: This paper introduces Gamified Adversarial Prompting (GAP), a framework that crowd-sources high-quality data for visual instruction tuning of large multimodal models. GAP transforms the data collection process into an engaging game, incentivizing players to provide fine-grained, challenging questions and answers that target gaps in the model's knowledge. Our contributions include (1) an approach to capture question-answer pairs from humans that directly address weaknesses in a model's knowledge, (2) a method for evaluating and rewarding players that successfully incentivizes them to provide high-quality submissions, and (3) a scalable, gamified platform that succeeds in collecting this data from over 50,000 participants in just a few weeks. Our implementation of GAP has significantly improved the accuracy of a small multimodal model, namely MiniCPM-Llama3-V-2.5-8B, increasing its GPT score from 0.147 to 0.477 on our dataset, approaching the benchmark set by the much larger GPT-4V. Moreover, we demonstrate that the data generated using MiniCPM-Llama3-V-2.5-8B also enhances its performance across other benchmarks, and exhibits cross-model benefits. Specifically, the same data improves the performance of QWEN2-VL-2B and QWEN2-VL-7B on the same multiple benchmarks.<|reference_end|>
arxiv
@article{yadav2024gamified, title={Gamified crowd-sourcing of high-quality data for visual fine-tuning}, author={Shashank Yadav, Rohan Tomar, Garvit Jain, Chirag Ahooja, Shubham Chaudhary, Charles Elkan}, journal={arXiv preprint arXiv:2410.04038}, year={2024}, archivePrefix={arXiv}, eprint={2410.04038}, primaryClass={cs.AI cs.CV} }
yadav2024gamified
arxiv-665987
2410.04039
BlockFound: Customized blockchain foundation model for anomaly detection
<|reference_start|>BlockFound: Customized blockchain foundation model for anomaly detection: We propose BlockFound, a customized foundation model for anomalous blockchain transaction detection. Unlike existing methods that rely on rule-based systems or directly apply off-the-shelf large language models, BlockFound introduces a series of customized designs to model the unique data structure of blockchain transactions. First, a blockchain transaction is multi-modal, containing blockchain-specific tokens, texts, and numbers. We design a modularized tokenizer to handle these multi-modal inputs, balancing the information across different modalities. Second, we design a customized masked language learning mechanism for pretraining, with RoPE embedding and FlashAttention for handling longer sequences. After training the foundation model, we further design a novel detection method for anomaly detection. Extensive evaluations on Ethereum and Solana transactions demonstrate BlockFound's exceptional capability in anomaly detection while maintaining a low false positive rate. Remarkably, BlockFound is the only method that successfully detects anomalous transactions on Solana with high accuracy, whereas all other approaches achieved very low or zero detection recall scores. This work not only provides new foundation models for blockchain but also sets a new benchmark for applying LLMs to blockchain data.<|reference_end|>
arxiv
@article{yu2024blockfound:, title={BlockFound: Customized blockchain foundation model for anomaly detection}, author={Jiahao Yu, Xian Wu, Hao Liu, Wenbo Guo, Xinyu Xing}, journal={arXiv preprint arXiv:2410.04039}, year={2024}, archivePrefix={arXiv}, eprint={2410.04039}, primaryClass={cs.CR cs.AI} }
yu2024blockfound:
arxiv-665988
2410.04041
Hybrid NeRF-Stereo Vision: Pioneering Depth Estimation and 3D Reconstruction in Endoscopy
<|reference_start|>Hybrid NeRF-Stereo Vision: Pioneering Depth Estimation and 3D Reconstruction in Endoscopy: The 3D reconstruction of the surgical field in minimally invasive endoscopic surgery has posed a formidable challenge when using conventional monocular endoscopes. Existing 3D reconstruction methodologies are frequently encumbered by suboptimal accuracy and limited generalization capabilities. In this study, we introduce an innovative pipeline using Neural Radiance Fields (NeRF) for 3D reconstruction. Our approach utilizes a preliminary NeRF reconstruction that yields a coarse model, then creates a binocular scene within the reconstructed environment, which derives an initial depth map via stereo vision. This initial depth map serves as depth supervision for subsequent NeRF iterations, progressively refining the 3D reconstruction with enhanced accuracy. The binocular depth is iteratively recalculated, with the refinement process continuing until the depth map converges and exhibits negligible variations. Through this recursive process, high-fidelity depth maps are generated from monocular endoscopic video of a realistic cranial phantom. Repeated measurements of the final 3D reconstruction, compared against X-ray computed tomography, show that all differences in relevant clinical distances are within sub-millimeter accuracy.<|reference_end|>
arxiv
@article{chen2024hybrid, title={Hybrid NeRF-Stereo Vision: Pioneering Depth Estimation and 3D Reconstruction in Endoscopy}, author={Pengcheng Chen, Wenhao Li, Nicole Gunderson, Jeremy Ruthberg, Randall Bly, Waleed M. Abuzeid, Zhenglong Sun, Eric J. Seibel}, journal={arXiv preprint arXiv:2410.04041}, year={2024}, archivePrefix={arXiv}, eprint={2410.04041}, primaryClass={eess.IV cs.CV} }
chen2024hybrid
arxiv-665989
2410.04045
Neuron-Level Sequential Editing for Large Language Models
<|reference_start|>Neuron-Level Sequential Editing for Large Language Models: This work explores sequential model editing in large language models (LLMs), a critical task that involves modifying internal knowledge within LLMs continuously through multi-round editing, with each round incorporating updates or corrections to adjust the model outputs without the need for costly retraining. Existing model editing methods, especially those that alter model parameters, typically focus on single-round editing and often face significant challenges in sequential model editing, most notably issues of model forgetting and failure. To address these challenges, we introduce a new model editing method, namely \textbf{N}euron-level \textbf{S}equential \textbf{E}diting (NSE), tailored for supporting sequential model editing. Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure. Furthermore, we iteratively select neurons in multiple layers for editing based on their activation values to mitigate model forgetting. Our empirical experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods, marking a substantial advancement in the field of sequential model editing. Our code is released at \url{https://github.com/jianghoucheng/NSE}.<|reference_end|>
arxiv
@article{jiang2024neuron-level, title={Neuron-Level Sequential Editing for Large Language Models}, author={Houcheng Jiang, Junfeng Fang, Tianyu Zhang, An Zhang, Ruipeng Wang, Tao Liang, Xiang Wang}, journal={arXiv preprint arXiv:2410.04045}, year={2024}, archivePrefix={arXiv}, eprint={2410.04045}, primaryClass={cs.CL} }
jiang2024neuron-level
arxiv-665990
2410.04046
Lane Detection System for Driver Assistance in Vehicles
<|reference_start|>Lane Detection System for Driver Assistance in Vehicles: This work presents the development of a lane detection system aimed at assisting the driving of conventional and autonomous vehicles. The system was implemented using traditional computer vision techniques, focusing on robustness and efficiency to operate in real-time, even under adverse conditions such as worn-out lanes and weather variations. The methodology employs an image processing pipeline that includes camera calibration, distortion correction, perspective transformation, and binary image generation. Lane detection is performed using sliding window techniques and segmentation based on gradients and color channels, enabling the precise identification of lanes in various road scenarios. The results indicate that the system can effectively detect and track lanes, performing well under different lighting conditions and road surfaces. However, challenges were identified in extreme situations, such as intense shadows and sharp curves. It is concluded that, despite its limitations, the traditional computer vision approach shows significant potential for application in driver assistance systems and autonomous navigation, with room for future improvements.<|reference_end|>
arxiv
@article{mariano2024lane, title={Lane Detection System for Driver Assistance in Vehicles}, author={Kauan Divino Pouso Mariano, Fernanda de Castro Fernandes, Luan Gabriel Silva Oliveira, Lyan Eduardo Sakuno Rodrigues, Matheus Andrade Brand\~ao}, journal={arXiv preprint arXiv:2410.04046}, year={2024}, archivePrefix={arXiv}, eprint={2410.04046}, primaryClass={cs.CV} }
mariano2024lane
arxiv-665991
2410.04047
Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution
<|reference_start|>Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution: In recent decades, there have been substantial advances in time series models and benchmarks across various individual tasks, such as time series forecasting, classification, and anomaly detection. Meanwhile, compositional reasoning in time series is prevalent in real-world applications (e.g., decision-making and compositional question answering) and is in great demand. Unlike simple tasks that primarily focus on predictive accuracy, compositional reasoning emphasizes the synthesis of diverse information from both time series data and various domain knowledge, making it distinct and considerably more challenging. In this paper, we introduce Compositional Time Series Reasoning, a new task of handling intricate multistep reasoning tasks from time series data. Specifically, this new task focuses on various question instances requiring structural and compositional reasoning abilities on time series data, such as decision-making and compositional question answering. As an initial attempt to tackle this novel task, we developed TS-Reasoner, a program-aided approach that utilizes a large language model (LLM) to decompose a complex task into steps of programs that leverage existing time series models and numerical subroutines. Unlike existing reasoning work, which only calls off-the-shelf modules, TS-Reasoner allows for the creation of custom modules and provides greater flexibility to incorporate domain knowledge as well as user-specified constraints. We demonstrate the effectiveness of our method through a comprehensive set of experiments. These promising results indicate potential opportunities in the new task of time series reasoning and highlight the need for further research.<|reference_end|>
arxiv
@article{ye2024beyond, title={Beyond Forecasting: Compositional Time Series Reasoning for End-to-End Task Execution}, author={Wen Ye, Yizhou Zhang, Wei Yang, Lumingyuan Tang, Defu Cao, Jie Cai, Yan Liu}, journal={arXiv preprint arXiv:2410.04047}, year={2024}, archivePrefix={arXiv}, eprint={2410.04047}, primaryClass={cs.LG cs.AI} }
ye2024beyond
arxiv-665992
2410.04050
Dispersion on Time-Varying Graphs
<|reference_start|>Dispersion on Time-Varying Graphs: The dispersion involves the coordination of $k \leq n$ agents on a graph of size $n$ to reach a configuration where at each node at most one agent can be present. It is a well-studied problem. Also, this problem is studied on dynamic graphs with $n$ nodes where at each discrete time step the graph is a connected sub-graph of the complete graph $K_n$. An optimal algorithm is provided assuming global communication and 1-hop visibility of the agents. How this problem pans out on Time-Varying Graphs (TVG) is an open question in the literature. In this work we study this problem on TVG where at each discrete time step the graph is a connected sub-graph of an underlying graph $G$ (known as a footprint) consisting of $n$ nodes. We have the following results even if only one edge from $G$ is missing in the connected sub-graph at any time step and all agents start from a rooted initial configuration. Even with unlimited memory at each agent and 1-hop visibility, it is impossible to solve dispersion for $n$ co-located agents on a TVG in the local communication model. Furthermore, even with unlimited memory at each agent but without 1-hop visibility, it is impossible to achieve dispersion for $n$ co-located agents in the global communication model. From the positive side, the existing algorithm for dispersion on dynamic graphs with the assumptions of global communication and 1-hop visibility works on TVGs as well. This fact and the impossibility results push us to come up with a modified definition of the dispersion problem on TVGs, as one needs to start with more than $n$ agents if the objective is to drop the strong assumptions of global communication and 1-hop visibility. Then, we provide an algorithm to solve the modified dispersion problem on TVG starting with $n+1$ agents with $O(\log n)$ memory per agent while dropping both the assumptions of global communication and 1-hop visibility.<|reference_end|>
arxiv
@article{saxena2024dispersion, title={Dispersion on Time-Varying Graphs}, author={Ashish Saxena, Tanvir Kaur, Kaushik Mondal}, journal={arXiv preprint arXiv:2410.04050}, year={2024}, archivePrefix={arXiv}, eprint={2410.04050}, primaryClass={cs.DC} }
saxena2024dispersion
arxiv-665993
2410.04052
Beyond Imperfections: A Conditional Inpainting Approach for End-to-End Artifact Removal in VTON and Pose Transfer
<|reference_start|>Beyond Imperfections: A Conditional Inpainting Approach for End-to-End Artifact Removal in VTON and Pose Transfer: Artifacts often degrade the visual quality of virtual try-on (VTON) and pose transfer applications, impacting user experience. This study introduces a novel conditional inpainting technique designed to detect and remove such distortions, improving image aesthetics. Our work is the first to present an end-to-end framework addressing this specific issue, and we developed a specialized dataset of artifacts in VTON and pose transfer tasks, complete with masks highlighting the affected areas. Experimental results show that our method not only effectively removes artifacts but also significantly enhances the visual quality of the final images, setting a new benchmark in computer vision and image processing.<|reference_end|>
arxiv
@article{tabatabaei2024beyond, title={Beyond Imperfections: A Conditional Inpainting Approach for End-to-End Artifact Removal in VTON and Pose Transfer}, author={Aref Tabatabaei, Zahra Dehghanian, and Maryam Amirmazlaghani}, journal={arXiv preprint arXiv:2410.04052}, year={2024}, archivePrefix={arXiv}, eprint={2410.04052}, primaryClass={cs.CV} }
tabatabaei2024beyond
arxiv-665994
2410.04054
Large Language Models can Achieve Social Balance
<|reference_start|>Large Language Models can Achieve Social Balance: Social balance is a concept in sociology which states that if every three individuals in a population achieve certain structures of positive or negative interactions, then the whole population ends up in one faction of positive interactions or divided between two or more antagonistic factions. In this paper, we consider a group of interacting large language models (LLMs) and study how, after continuous interactions, they can achieve social balance. Across three different LLM models, we found that social balance depends on (i) whether interactions are updated based on "relationships", "appraisals", or "opinions"; (ii) whether agents update their interactions based on homophily or influence from their peers; and (iii) the number of simultaneous interactions the LLMs consider. When social balance is achieved, its particular structure of positive or negative interactions depends on these three conditions and is different across LLM models and sizes. The stability of interactions and the justification for their update also vary across models. Thus, social balance is driven by the pre-training and alignment particular to each LLM model.<|reference_end|>
arxiv
@article{cisneros-velarde2024large, title={Large Language Models can Achieve Social Balance}, author={Pedro Cisneros-Velarde}, journal={arXiv preprint arXiv:2410.04054}, year={2024}, archivePrefix={arXiv}, eprint={2410.04054}, primaryClass={cs.CL cs.AI cs.MA cs.SI physics.soc-ph} }
cisneros-velarde2024large
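To make the notion of social balance in the record above concrete, here is a minimal Python check of the classical triad condition: a complete signed interaction graph is balanced when every triad has a positive product of signs. The `agents` and `sign` data structures are illustrative assumptions, not part of the paper's setup.

```python
from itertools import combinations

def is_socially_balanced(agents, sign):
    """Check structural balance of a complete signed interaction graph.

    `sign[(i, j)]` is +1 or -1 for the (symmetric) interaction between
    agents i and j. A triad is balanced when the product of its three
    signs is positive; the population is balanced when every triad is.
    """
    def s(i, j):
        return sign[(i, j)] if (i, j) in sign else sign[(j, i)]

    return all(
        s(a, b) * s(b, c) * s(a, c) > 0
        for a, b, c in combinations(agents, 3)
    )


# One positive faction {A, B} antagonistic to {C}: balanced.
agents = ["A", "B", "C"]
sign = {("A", "B"): +1, ("A", "C"): -1, ("B", "C"): -1}
print(is_socially_balanced(agents, sign))  # True
```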
arxiv-665995
2410.04055
Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks
<|reference_start|>Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks: While Vision-Language Models (VLMs) have shown remarkable abilities in visual and language reasoning tasks, they invariably generate flawed responses. Self-correction that instructs models to refine their outputs presents a promising solution to this issue. Previous studies have mainly concentrated on Large Language Models (LLMs), while the self-correction abilities of VLMs, particularly concerning both visual and linguistic information, remain largely unexamined. This study investigates the self-correction capabilities of VLMs during both inference and fine-tuning stages. We introduce a Self-Correction Learning (SCL) approach that enables VLMs to learn from their self-generated self-correction data through Direct Preference Optimization (DPO) without relying on external feedback, facilitating self-improvement. Specifically, we collect preferred and disfavored samples based on the correctness of initial and refined responses, which are obtained by two-turn self-correction with VLMs during the inference stage. Experimental results demonstrate that although VLMs struggle to self-correct effectively during iterative inference without additional fine-tuning and external feedback, they can enhance their performance and avoid previous mistakes through preference fine-tuning when their self-generated self-correction data are categorized into preferred and disfavored samples. This study emphasizes that self-correction is not merely a refinement process; rather, it should enhance the reasoning abilities of models through additional training, enabling them to generate high-quality responses directly without further refinement.<|reference_end|>
arxiv
@article{he2024self-correction, title={Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks}, author={Jiayi He, Hehai Lin, Qingyun Wang, Yi Fung, Heng Ji}, journal={arXiv preprint arXiv:2410.04055}, year={2024}, archivePrefix={arXiv}, eprint={2410.04055}, primaryClass={cs.CL} }
he2024self-correction
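The Self-Correction Learning record above builds preference data from two-turn self-correction. Below is a hedged sketch of one plausible pairing rule, applied to a hypothetical `samples` list: whenever exactly one of the initial or refined responses is correct, it becomes the preferred output and the other the disfavored one. The paper's exact data-construction details may differ.

```python
def build_preference_pairs(samples):
    """Turn two-turn self-correction traces into DPO-style preference pairs.

    Each sample is a dict with a prompt, an initial response, a refined
    (self-corrected) response, and correctness flags for both. Whenever
    exactly one of the two responses is correct, it becomes the preferred
    ("chosen") output and the other the disfavored ("rejected") one.
    """
    pairs = []
    for s in samples:
        if s["initial_correct"] == s["refined_correct"]:
            continue  # no preference signal when both succeed or both fail
        if s["initial_correct"]:
            chosen, rejected = s["initial"], s["refined"]
        else:
            chosen, rejected = s["refined"], s["initial"]
        pairs.append({"prompt": s["prompt"], "chosen": chosen, "rejected": rejected})
    return pairs
```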
arxiv-665996
2410.04056
RetCompletion: High-Speed Inference Image Completion with Retentive Network
<|reference_start|>RetCompletion: High-Speed Inference Image Completion with Retentive Network: Time cost is a major challenge in achieving high-quality pluralistic image completion. Recently, the Retentive Network (RetNet) in natural language processing has offered a novel approach to this problem with its low-cost inference capabilities. Inspired by this, we apply RetNet to the pluralistic image completion task in computer vision. We present RetCompletion, a two-stage framework. In the first stage, we introduce Bi-RetNet, a bidirectional sequence information fusion model that integrates contextual information from images. During inference, we employ a unidirectional pixel-wise update strategy to restore consistent image structures, achieving both high reconstruction quality and fast inference speed. In the second stage, we use a CNN for low-resolution upsampling to enhance texture details. Experiments on ImageNet and CelebA-HQ demonstrate that our inference speed is 10$\times$ faster than ICT and 15$\times$ faster than RePaint. The proposed RetCompletion significantly improves inference speed and delivers strong performance, especially when masks cover large areas of the image.<|reference_end|>
arxiv
@article{cang2024retcompletion:high-speed, title={RetCompletion: High-Speed Inference Image Completion with Retentive Network}, author={Yueyang Cang, Pingge Hu, Xiaoteng Zhang, Xingtong Wang, Yuhang Liu}, journal={arXiv preprint arXiv:2410.04056}, year={2024}, archivePrefix={arXiv}, eprint={2410.04056}, primaryClass={cs.CV} }
cang2024retcompletion:high-speed
arxiv-665997
2410.04058
pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology
<|reference_start|>pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology: Conventional federated learning frameworks suffer from several challenges, including performance bottlenecks at the central aggregation server, data bias, poor model convergence, exposure to model poisoning attacks, and limited trust in the centralized infrastructure. In this paper, a novel game theory-based approach called pFedGame is proposed for decentralized federated learning, best suited for temporally dynamic networks. The proposed algorithm works without any centralized server for aggregation and addresses the problems of vanishing gradients and poor convergence over temporally dynamic topologies among federated learning participants. The solution comprises two sequential steps in every federated learning round, for every participant. First, it selects suitable peers for collaboration in federated learning. Second, it executes a two-player constant-sum cooperative game to reach convergence by applying an optimal federated learning aggregation strategy. Experiments performed to assess the performance of pFedGame in comparison to existing methods in decentralized federated learning have shown promising results, with accuracy higher than 70% for heterogeneous data.<|reference_end|>
arxiv
@article{behera2024pfedgame, title={pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology}, author={Monik Raj Behera, Suchetana Chakraborty}, journal={16th International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 2024, pp. 651-655}, year={2024}, doi={10.1109/COMSNETS59351.2024.10427470}, archivePrefix={arXiv}, eprint={2410.04058}, primaryClass={stat.ML cs.CR cs.GT cs.LG} }
behera2024pfedgame
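As a rough illustration of the two-step round described in the pFedGame record above (peer selection, then game-based aggregation), the sketch below uses a simple distance-based peer choice and a convex combination whose weight `alpha` stands in for the outcome of the two-player constant-sum game. Both choices are assumptions for illustration, since the abstract does not specify the selection criterion or the game's payoff structure.

```python
import numpy as np

def select_peers(own_vector, candidate_vectors, k):
    """Pick the k candidates whose flattened model vectors are closest to ours.

    A simple distance-based stand-in for the paper's peer-selection step.
    """
    distances = {pid: float(np.linalg.norm(own_vector - vec))
                 for pid, vec in candidate_vectors.items()}
    return sorted(distances, key=distances.get)[:k]


def aggregate(own_vector, peer_vector, alpha):
    """Convex combination of own and peer models.

    `alpha` in [0, 1] stands in for the weight produced by the two-player
    constant-sum cooperative game; the game itself is not reproduced here.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * own_vector + (1.0 - alpha) * peer_vector
```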
arxiv-665998
2410.04060
LoRTA: Low Rank Tensor Adaptation of Large Language Models
<|reference_start|>LoRTA: Low Rank Tensor Adaptation of Large Language Models: Low Rank Adaptation (LoRA) is a popular Parameter Efficient Fine Tuning (PEFT) method that effectively adapts large pre-trained models for downstream tasks. LoRA parameterizes model updates using low-rank matrices at each layer, significantly reducing the number of trainable parameters and, consequently, resource requirements during fine-tuning. However, the lower bound on the number of trainable parameters remains high due to the use of the low-rank matrix model. In this paper, we address this limitation by proposing a novel approach that employs a low rank tensor parametrization for model updates. The proposed low rank tensor model can significantly reduce the number of trainable parameters, while also allowing for finer-grained control over adapter size. Our experiments on Natural Language Understanding, Instruction Tuning, Preference Optimization and Protein Folding benchmarks demonstrate that our method is both efficient and effective for fine-tuning large language models, achieving a substantial reduction in the number of parameters while maintaining comparable performance.<|reference_end|>
arxiv
@article{hounie2024lorta:, title={LoRTA: Low Rank Tensor Adaptation of Large Language Models}, author={Ignacio Hounie, Charilaos Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro}, journal={arXiv preprint arXiv:2410.04060}, year={2024}, archivePrefix={arXiv}, eprint={2410.04060}, primaryClass={cs.CL cs.AI} }
hounie2024lorta:
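To illustrate the low-rank tensor idea in the LoRTA record above, here is a minimal PyTorch sketch that parameterizes the stacked per-layer updates as a CP-style rank-r tensor with shared factors `U`, `V` and per-layer coefficients `C`. The exact factorization, initialization, and layer coverage used by LoRTA may differ; this is only an assumed instantiation of the general idea, and all names are illustrative.

```python
import torch
import torch.nn as nn

class LowRankTensorAdapter(nn.Module):
    """CP-style low-rank tensor parametrization of per-layer weight updates.

    The stacked updates of shape (num_layers, d_out, d_in) are modeled as a
    rank-r CP tensor: Delta W_l = U @ diag(C[l]) @ V.T, so U and V are shared
    across layers and C holds per-layer coefficients. Compared with per-layer
    LoRA matrices, the shared factors cut the trainable parameter count.
    """

    def __init__(self, num_layers, d_out, d_in, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.02)
        self.V = nn.Parameter(torch.zeros(d_in, rank))  # zero init: Delta W = 0 at start
        self.C = nn.Parameter(torch.ones(num_layers, rank))

    def delta(self, layer):
        # Scale the columns of U by the layer's coefficients, then project to (d_out, d_in).
        return (self.U * self.C[layer]) @ self.V.T


adapter = LowRankTensorAdapter(num_layers=12, d_out=768, d_in=768, rank=4)
print(adapter.delta(0).shape)  # torch.Size([768, 768])
```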
arxiv-665999
2410.04061
Enhancing Graph Self-Supervised Learning with Graph Interplay
<|reference_start|>Enhancing Graph Self-Supervised Learning with Graph Interplay: Graph self-supervised learning (GSSL) has emerged as a compelling framework for extracting informative representations from graph-structured data without extensive reliance on labeled inputs. In this study, we introduce Graph Interplay (GIP), an innovative and versatile approach that significantly enhances the performance of various existing GSSL methods. To this end, GIP advocates direct graph-level communication by introducing random inter-graph edges within standard batches. Despite GIP's simplicity, we further show theoretically that GIP essentially performs a principled manifold separation by combining inter-graph message passing and GSSL, bringing about more structured embedding manifolds and thus benefiting a series of downstream tasks. Our empirical study demonstrates that GIP surpasses the performance of prevailing GSSL methods across multiple benchmarks by significant margins, highlighting its potential as a breakthrough approach. Besides, GIP can be readily integrated into a series of GSSL methods and consistently offers additional performance gains. This advancement not only amplifies the capability of GSSL but also potentially sets the stage for a novel graph learning paradigm in a broader sense.<|reference_end|>
arxiv
@article{zhao2024enhancing, title={Enhancing Graph Self-Supervised Learning with Graph Interplay}, author={Xinjian Zhao, Wei Pang, Xiangru Jian, Yaoyao Xu, Chaolong Ying, Tianshu Yu}, journal={arXiv preprint arXiv:2410.04061}, year={2024}, archivePrefix={arXiv}, eprint={2410.04061}, primaryClass={cs.LG cs.AI stat.ML} }
zhao2024enhancing
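The GIP record above hinges on adding random inter-graph edges within a batch. The sketch below shows one assumed way to sample such edges for graphs stacked into a single batched graph with consecutive node indexing (as in common GNN batching); how GIP actually samples, weights, or prunes these edges is not specified in the abstract.

```python
import random

def add_inter_graph_edges(graph_sizes, num_random_edges):
    """Sample random edges connecting nodes of different graphs in a batch.

    Graphs are assumed to be stacked into one big graph with consecutive node
    indexing: graph g occupies indices [offsets[g], offsets[g] + graph_sizes[g]).
    Returns (u, v) pairs that always cross graph boundaries.
    """
    offsets = [0]
    for size in graph_sizes[:-1]:
        offsets.append(offsets[-1] + size)

    edges = []
    for _ in range(num_random_edges):
        g1, g2 = random.sample(range(len(graph_sizes)), 2)  # two distinct graphs
        u = offsets[g1] + random.randrange(graph_sizes[g1])
        v = offsets[g2] + random.randrange(graph_sizes[g2])
        edges.append((u, v))
    return edges


print(add_inter_graph_edges([5, 3, 4], num_random_edges=4))
```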
arxiv-666000
2410.04063
Unique ID based Trust Scheme for Improved IoV Wireless Sensor Network Security Against Power Controlled Sybil Attacks
<|reference_start|>Unique ID based Trust Scheme for Improved IoV Wireless Sensor Network Security Against Power Controlled Sybil Attacks: Wireless sensor networks (WSN) are widely used in vehicular networks to support Vehicle-to-Everything (V2X) communications. Wireless sensors in vehicular networks support sensing and monitoring of various environmental factors and vehicle movement, which can help to enhance traffic management, road safety, and transportation efficiency. However, WSNs face security challenges due to their distributed nature and resource-limited modules. In Sybil attacks, attackers create multiple fake identities to disrupt network operations (e.g., denial-of-service (DoS)), which is one of the major security concerns in WSNs. Defensive techniques have been proposed, most recently a received signal strength indicator (RSSI) profiling scheme that improves detection performance and is not affected by internal forgeable information. However, even this robust RSSI-based detection scheme was found to be vulnerable when Sybil attackers are mobile or intentionally manipulate their radio transmission power in addition to their device address. In this paper, a unique identification based trust path routing scheme (UITrust) is proposed, which uses the device's physically invariable unique identifiers and routing path trust level estimations to avoid power-controlled Sybil attacks. Simulation results show that the proposed scheme provides a significant improvement compared to existing schemes.<|reference_end|>
arxiv
@article{kim2024unique, title={Unique ID based Trust Scheme for Improved IoV Wireless Sensor Network Security Against Power Controlled Sybil Attacks}, author={Jae-Dong Kim, Dabin Kim, Minseok Ko, and Jong-Moon Chung}, journal={arXiv preprint arXiv:2410.04063}, year={2024}, archivePrefix={arXiv}, eprint={2410.04063}, primaryClass={cs.CR} }
kim2024unique
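As a toy illustration of trust-path routing in the UITrust record above, the sketch below scores candidate paths by a product of per-device trust values keyed on unique IDs and picks the most trusted one. The multiplicative trust rule and the data structures are assumptions for illustration, not the scheme's actual estimator.

```python
def select_most_trusted_path(paths, node_trust):
    """Pick the candidate routing path with the highest aggregate trust.

    `paths` maps a path label to the ordered list of device unique IDs along
    it, and `node_trust` maps each unique ID to a trust level in (0, 1].
    Path trust is taken here as the product of per-node trust values.
    """
    def path_trust(device_ids):
        trust = 1.0
        for uid in device_ids:
            trust *= node_trust.get(uid, 0.0)  # unknown IDs contribute zero trust
        return trust

    return max(paths.items(), key=lambda item: path_trust(item[1]))[0]


paths = {"A": ["id1", "id2", "id5"], "B": ["id1", "id3", "id5"]}
node_trust = {"id1": 0.9, "id2": 0.4, "id3": 0.8, "id5": 0.95}
print(select_most_trusted_path(paths, node_trust))  # "B"
```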