Dataset columns (name: type, value range):
corpus_id: string, length 7–12
paper_id: string, length 9–16
title: string, length 1–261
abstract: string, length 70–4.02k
source: string, 1 class
bibtex: string, length 208–20.9k
citation_key: string, length 6–100
arxiv-667201
2410.06285
Monocular Visual Place Recognition in LiDAR Maps via Cross-Modal State Space Model and Multi-View Matching
<|reference_start|>Monocular Visual Place Recognition in LiDAR Maps via Cross-Modal State Space Model and Multi-View Matching: Achieving monocular camera localization within pre-built LiDAR maps can bypass the simultaneous mapping process of visual SLAM systems, potentially reducing the computational overhead of autonomous localization. To this end, one of the key challenges is cross-modal place recognition, which involves retrieving 3D scenes (point clouds) from a LiDAR map according to online RGB images. In this paper, we introduce an efficient framework to learn descriptors for both RGB images and point clouds. It takes the visual state space model (VMamba) as the backbone and employs a pixel-view-scene joint training strategy for cross-modal contrastive learning. To address the field-of-view differences, independent descriptors are generated from multiple evenly distributed viewpoints for point clouds. A visible 3D points overlap strategy is then designed to quantify the similarity between point cloud views and RGB images for multi-view supervision. Additionally, when generating descriptors from pixel-level features using NetVLAD, we compensate for the loss of geometric information and introduce an efficient scheme for multi-view generation. Experimental results on the KITTI and KITTI-360 datasets demonstrate the effectiveness and generalization of our method. The code will be released upon acceptance.<|reference_end|>
arxiv
@article{yao2024monocular, title={Monocular Visual Place Recognition in LiDAR Maps via Cross-Modal State Space Model and Multi-View Matching}, author={Gongxin Yao, Xinyang Li, Luowei Fu and Yu Pan}, journal={arXiv preprint arXiv:2410.06285}, year={2024}, archivePrefix={arXiv}, eprint={2410.06285}, primaryClass={cs.CV cs.RO} }
yao2024monocular
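The abstract above mentions cross-modal contrastive learning between image and point-cloud descriptors. Below is a minimal, hypothetical sketch of a symmetric InfoNCE-style contrastive loss over paired descriptors; it is not the authors' loss, and the batch size, temperature, and descriptor dimension are made up.

```python
import numpy as np

def cross_modal_infonce(img_desc, pcd_desc, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalized image and point-cloud
    descriptors; row i of each matrix is assumed to be a matching pair."""
    img = img_desc / np.linalg.norm(img_desc, axis=1, keepdims=True)
    pcd = pcd_desc / np.linalg.norm(pcd_desc, axis=1, keepdims=True)
    logits = img @ pcd.T / temperature            # (B, B) similarity matrix

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.diag(logp).mean()              # positives sit on the diagonal

    # cross-entropy in both retrieval directions (image->cloud and cloud->image)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
loss = cross_modal_infonce(rng.normal(size=(8, 256)), rng.normal(size=(8, 256)))
print(f"contrastive loss on random descriptors: {loss:.3f}")
```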
arxiv-667202
2410.06287
Non-Halting Queries: Exploiting Fixed Points in LLMs
<|reference_start|>Non-Halting Queries: Exploiting Fixed Points in LLMs: We introduce a new vulnerability that exploits fixed points in autoregressive models and use it to craft queries that never halt, i.e. an LLM output that does not terminate. More precisely, for what we call non-halting queries, the LLM never samples the end-of-string token (<eos>). We rigorously analyze the conditions under which the non-halting anomaly presents itself. In particular, at temperature zero, we prove that if a repeating (cyclic) sequence of tokens is observed at the output beyond the context size, then the LLM does not halt. We demonstrate the non-halting anomaly in a number of experiments performed in base (unaligned) models where repeating tokens immediately lead to a non-halting cyclic behavior as predicted by the analysis. Further, we develop a simple recipe that takes the same fixed points observed in the base model and creates a prompt structure to target aligned models. We study the recipe behavior in bypassing alignment in a number of LLMs including GPT-4o, llama-3-8b-instruct, and gemma-2-9b-it where all models are forced into a non-halting state. Further, we demonstrate the recipe's success in sending most major models released over the past year into a non-halting state with the same simple prompt even at higher temperatures. Further, we study direct inversion based techniques to craft new short prompts to induce the non-halting state. Our experiments with the gradient search based inversion technique ARCA show that non-halting is prevalent across models and may be easily induced with a few input tokens. While its impact on the reliability of hosted systems can be mitigated by configuring a hard maximum token limit in the sampler, the non-halting anomaly still manages to break alignment. This underlines the need for further studies and stronger forms of alignment against non-halting anomalies.<|reference_end|>
arxiv
@article{hammouri2024non-halting, title={Non-Halting Queries: Exploiting Fixed Points in LLMs}, author={Ghaith Hammouri, Kemal Derya and Berk Sunar}, journal={arXiv preprint arXiv:2410.06287}, year={2024}, archivePrefix={arXiv}, eprint={2410.06287}, primaryClass={cs.LG cs.AI cs.CL} }
hammouri2024non-halting
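The abstract states that, at temperature zero, a repeating token cycle that extends beyond the context window forces the model to never emit <eos>. The toy simulation below only illustrates that mechanism with a made-up deterministic "next token" rule depending solely on the last few tokens; it is not an attack on, or a model of, any real LLM.

```python
def toy_greedy_lm(context_window):
    """Stand-in for temperature-0 decoding: the next token is a deterministic
    function of the current context window only."""
    def next_token(ctx):
        # hypothetical rule: echo the token seen `context_window` positions ago
        return ctx[-context_window]
    return next_token

cycle = ["A", "B", "C"]
context_window = 6                                   # cycle length divides the window
tokens = cycle * (context_window // len(cycle) + 1)  # the cycle already fills the window
next_token = toy_greedy_lm(context_window)

for _ in range(30):                                  # no end-of-string token is ever produced
    tokens.append(next_token(tokens))
print("".join(tokens[-12:]), "... repeats indefinitely")
```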
arxiv-667203
2410.06290
Score Design for Multi-Criteria Incentivization
<|reference_start|>Score Design for Multi-Criteria Incentivization: We present a framework for designing scores to summarize performance metrics. Our design has two multi-criteria objectives: (1) improving on scores should improve all performance metrics, and (2) achieving Pareto-optimal scores should achieve Pareto-optimal metrics. We formulate our design to minimize the dimensionality of scores while satisfying the objectives. We give algorithms to design scores, which are provably minimal under mild assumptions on the structure of performance metrics. This framework draws motivation from real-world practices in hospital rating systems, where misaligned scores and performance metrics lead to unintended consequences.<|reference_end|>
arxiv
@article{kabra2024score, title={Score Design for Multi-Criteria Incentivization}, author={Anmol Kabra, Mina Karzand, Tosca Lechner, Nathan Srebro, Serena Wang}, journal={arXiv preprint arXiv:2410.06290}, year={2024}, doi={10.4230/LIPIcs.FORC.2024.8}, archivePrefix={arXiv}, eprint={2410.06290}, primaryClass={cs.CY cs.CG cs.LG} }
kabra2024score
arxiv-667204
2410.06293
Accelerated Preference Optimization for Large Language Model Alignment
<|reference_start|>Accelerated Preference Optimization for Large Language Model Alignment: Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal tool for aligning large language models (LLMs) with human preferences. Direct Preference Optimization (DPO), one of the most popular approaches, formulates RLHF as a policy optimization problem without explicitly estimating the reward function. It overcomes the stability and efficiency issues of two-step approaches, which typically involve first estimating the reward function and then optimizing the policy via proximal policy optimization (PPO). Since RLHF is essentially an optimization problem, and it is well-known that momentum techniques can accelerate optimization both theoretically and empirically, a natural question arises: Can RLHF be accelerated by momentum? This paper answers this question in the affirmative. In detail, we first show that the iterative preference optimization method can be viewed as a proximal point method. Based on this observation, we propose a general Accelerated Preference Optimization (APO) framework, which unifies many existing preference optimization algorithms and employs Nesterov's momentum technique to speed up the alignment of LLMs. Theoretically, we demonstrate that APO can achieve a faster convergence rate than the standard iterative preference optimization methods, including DPO and Self-Play Preference Optimization (SPPO). Empirically, we show the superiority of APO over DPO, iterative DPO, and other strong baselines for RLHF on the AlpacaEval 2.0 benchmark.<|reference_end|>
arxiv
@article{he2024accelerated, title={Accelerated Preference Optimization for Large Language Model Alignment}, author={Jiafan He, Huizhuo Yuan, Quanquan Gu}, journal={arXiv preprint arXiv:2410.06293}, year={2024}, archivePrefix={arXiv}, eprint={2410.06293}, primaryClass={cs.LG cs.AI cs.CL} }
he2024accelerated
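APO, as described above, applies Nesterov's momentum to iterative preference optimization viewed as a proximal point method. The sketch below is not DPO or APO; it only illustrates the acceleration pattern (extrapolate with momentum, then take the base update) on a toy quadratic objective, with all names and constants invented.

```python
import numpy as np

def base_step(theta, grad, lr=0.1):
    """Stand-in for one round of iterative preference optimization."""
    return theta - lr * grad(theta)

def accelerated_iterates(grad, theta0, beta=0.7, iters=20):
    """Nesterov-style acceleration: momentum extrapolation, then the base step."""
    prev, curr = theta0.copy(), theta0.copy()
    for _ in range(iters):
        lookahead = curr + beta * (curr - prev)   # momentum extrapolation
        prev, curr = curr, base_step(lookahead, grad)
    return curr

# toy strongly convex objective 0.5 * ||theta - target||^2
target = np.array([1.0, -2.0, 0.5])
grad = lambda th: th - target

plain = accelerated_iterates(grad, np.zeros(3), beta=0.0)   # no momentum
fast = accelerated_iterates(grad, np.zeros(3), beta=0.7)    # with momentum
print("error without momentum:", np.linalg.norm(plain - target))
print("error with momentum:   ", np.linalg.norm(fast - target))
```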
arxiv-667205
2410.06294
A New Architecture for Neural Enhanced Multiobject Tracking
<|reference_start|>A New Architecture for Neural Enhanced Multiobject Tracking: Multiobject tracking (MOT) is an important task in robotics, autonomous driving, and maritime surveillance. Traditional work on MOT is model-based and aims to establish algorithms in the framework of sequential Bayesian estimation. More recent methods are fully data-driven and rely on the training of neural networks. The two approaches have demonstrated advantages in certain scenarios. In particular, in problems where plenty of labeled data for the training of neural networks is available, data-driven MOT tends to have advantages compared to traditional methods. A natural thought is whether a general and efficient framework can integrate the two approaches. This paper advances a recently introduced hybrid model-based and data-driven method called neural-enhanced belief propagation (NEBP). Compared to existing work on NEBP for MOT, it introduces a novel neural architecture that can improve data association and new object initialization, two critical aspects of MOT. The proposed tracking method is leading the nuScenes LiDAR-only tracking challenge at the time of submission of this paper.<|reference_end|>
arxiv
@article{wei2024a, title={A New Architecture for Neural Enhanced Multiobject Tracking}, author={Shaoxiu Wei and Mingchao Liang and Florian Meyer}, journal={arXiv preprint arXiv:2410.06294}, year={2024}, archivePrefix={arXiv}, eprint={2410.06294}, primaryClass={eess.SP cs.LG cs.RO} }
wei2024a
arxiv-667206
2410.06295
A General Formulation for Path Constrained Time-Optimized Trajectory Planning with Environmental and Object Contacts
<|reference_start|>A General Formulation for Path Constrained Time-Optimized Trajectory Planning with Environmental and Object Contacts: A typical manipulation task consists of a manipulator equipped with a gripper to grasp and move an object with constraints on the motion of the hand-held object, which may be due to the nature of the task itself or to object-environment contacts. In this paper, we study the problem of computing joint torques and grasping forces for time-optimal motion of an object, while ensuring that the grasp is not lost and any constraints on the motion of the object, either due to dynamics, environment contact, or no-slip requirements, are also satisfied. We present a second-order cone program (SOCP) formulation of the time-optimal trajectory planning problem that considers nonlinear friction cone constraints at the hand-object and object-environment contacts. Since SOCPs are convex optimization problems that can be solved optimally in polynomial time using interior point methods, we can solve the trajectory optimization problem efficiently. We present simulation results on three examples, including a non-prehensile manipulation task, which show the generality and effectiveness of our approach.<|reference_end|>
arxiv
@article{mahalingam2024a, title={A General Formulation for Path Constrained Time-Optimized Trajectory Planning with Environmental and Object Contacts}, author={Dasharadhan Mahalingam, Aditya Patankar, Riddhiman Laha, Srinivasan Lakshminarayanan, Sami Haddadin, Nilanjan Chakraborty}, journal={arXiv preprint arXiv:2410.06295}, year={2024}, archivePrefix={arXiv}, eprint={2410.06295}, primaryClass={cs.RO} }
mahalingam2024a
arxiv-667207
2410.06296
Conformal Structured Prediction
<|reference_start|>Conformal Structured Prediction: Conformal prediction has recently emerged as a promising strategy for quantifying the uncertainty of a predictive model; these algorithms modify the model to output sets of labels that are guaranteed to contain the true label with high probability. However, existing conformal prediction algorithms have largely targeted classification and regression settings, where the structure of the prediction set has a simple form as a level set of the scoring function. Moreover, for complex structured outputs such as text generation, these prediction sets might include a large number of labels and therefore be hard for users to interpret. In this paper, we propose a general framework for conformal prediction in the structured prediction setting that modifies existing conformal prediction algorithms to output structured prediction sets that implicitly represent sets of labels. In addition, we demonstrate how our approach can be applied in domains where the prediction sets can be represented as a set of nodes in a directed acyclic graph; for instance, for hierarchical labels such as image classification, a prediction set might be a small subset of coarse labels implicitly representing the prediction set of all of their finer-grained descendants. We demonstrate how our algorithm can be used to construct prediction sets that satisfy a desired coverage guarantee in several domains.<|reference_end|>
arxiv
@article{zhang2024conformal, title={Conformal Structured Prediction}, author={Botong Zhang, Shuo Li, Osbert Bastani}, journal={arXiv preprint arXiv:2410.06296}, year={2024}, archivePrefix={arXiv}, eprint={2410.06296}, primaryClass={cs.LG} }
zhang2024conformal
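For background on the level-set construction the abstract contrasts with: a minimal split conformal classifier that returns, for each test point, the set of labels whose nonconformity score falls below a calibrated quantile. This is standard split conformal prediction, not the paper's structured-set algorithm; the toy model and data are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))          # fixed random logits: stand-in for a trained 3-class model

def predict_proba(X):
    z = X @ W
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# held-out calibration data
X_cal = rng.normal(size=(200, 2))
y_cal = rng.integers(0, 3, size=200)
alpha = 0.1                          # target miscoverage

# nonconformity score: 1 - predicted probability of the true label
scores = 1.0 - predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))            # finite-sample quantile index
qhat = np.sort(scores)[k - 1]

# prediction set: every label whose score is below the calibrated threshold
X_test = rng.normal(size=(5, 2))
pred_sets = [np.where(1.0 - p <= qhat)[0].tolist() for p in predict_proba(X_test)]
print(pred_sets)
```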
arxiv-667208
2410.06299
A Taxonomy of Collectible Card Games from a Game-Playing AI Perspective
<|reference_start|>A Taxonomy of Collectible Card Games from a Game-Playing AI Perspective: Collectible card games are challenging, widely played games that have received increasing attention from the AI research community in recent years. Despite important breakthroughs, the field still poses many unresolved challenges. This work aims to help further research on the genre by proposing a taxonomy of collectible card games by analyzing their rules, mechanics, and game modes from the perspective of game-playing AI research. To achieve this, we studied a set of popular games and provided a thorough discussion about their characteristics.<|reference_end|>
arxiv
@article{vieira2024a, title={A Taxonomy of Collectible Card Games from a Game-Playing AI Perspective}, author={Ronaldo e Silva Vieira, Anderson Rocha Tavares, Luiz Chaimowicz}, journal={arXiv preprint arXiv:2410.06299}, year={2024}, archivePrefix={arXiv}, eprint={2410.06299}, primaryClass={cs.AI} }
vieira2024a
arxiv-667209
2410.06300
Amortized SHAP values via sparse Fourier function approximation
<|reference_start|>Amortized SHAP values via sparse Fourier function approximation: SHAP values are a popular local feature-attribution method widely used in interpretable and explainable AI. We tackle the problem of efficiently computing these values. We cover both the model-agnostic (black-box) setting, where one only has query access to the model, and the case of (ensembles of) trees, where one has access to the structure of the tree. For both the black-box and the tree setting we propose a two-stage approach for estimating SHAP values. Our algorithm's first step harnesses recent results showing that many real-world predictors have a spectral bias that allows us to either exactly represent them (in the case of ensembles of decision trees) or efficiently approximate them (in the case of neural networks) using a compact Fourier representation. In the second step of the algorithm, we use the Fourier representation to exactly compute SHAP values. The second step is computationally very cheap because firstly, the representation is compact and secondly, we prove that there exists a closed-form expression for SHAP values for the Fourier basis functions. Furthermore, the expression we derive effectively linearizes the computation into a simple summation and is amenable to parallelization on multiple cores or a GPU. Since the function approximation (first step) is only done once, it allows us to produce Shapley values in an amortized way. We show speedups compared to relevant baseline methods at equal levels of accuracy for both the tree and black-box settings. Moreover, this approach introduces a reliable and fine-grained continuous trade-off between computation and accuracy through the sparsity of the Fourier approximation, a feature previously unavailable in all black-box methods.<|reference_end|>
arxiv
@article{gorji2024amortized, title={Amortized SHAP values via sparse Fourier function approximation}, author={Ali Gorji, Andisheh Amrollahi, Andreas Krause}, journal={arXiv preprint arXiv:2410.06300}, year={2024}, archivePrefix={arXiv}, eprint={2410.06300}, primaryClass={cs.LG} }
gorji2024amortized
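For context on what the amortization targets: below is a plain permutation-sampling estimator of SHAP values for a black-box model, i.e. the kind of baseline the two-stage Fourier approach is designed to speed up. This is not the paper's algorithm; the toy model, reference point, and sample counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def shap_permutation(model, x, background, n_perm=200):
    """Monte Carlo SHAP: average marginal contribution of each feature over
    random feature orderings, with absent features imputed from `background`."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background.copy()                  # start from the reference point
        prev = model(z)
        for j in order:                        # reveal features one at a time
            z[j] = x[j]
            curr = model(z)
            phi[j] += curr - prev              # marginal contribution of feature j
            prev = curr
    return phi / n_perm

model = lambda v: 3.0 * v[0] + v[1] * v[2]     # toy black-box predictor
x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)
print(shap_permutation(model, x, baseline))    # sums to model(x) - model(baseline)
```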
arxiv-667210
2410.06303
Compositional Risk Minimization
<|reference_start|>Compositional Risk Minimization: In this work, we tackle a challenging and extreme form of subpopulation shift, which is termed compositional shift. Under compositional shifts, some combinations of attributes are totally absent from the training distribution but present in the test distribution. We model the data with flexible additive energy distributions, where each energy term represents an attribute, and derive a simple alternative to empirical risk minimization termed compositional risk minimization (CRM). We first train an additive energy classifier to predict the multiple attributes and then adjust this classifier to tackle compositional shifts. We provide an extensive theoretical analysis of CRM, where we show that our proposal extrapolates to special affine hulls of seen attribute combinations. Empirical evaluations on benchmark datasets confirm the improved robustness of CRM compared to other methods from the literature designed to tackle various forms of subpopulation shifts.<|reference_end|>
arxiv
@article{mahajan2024compositional, title={Compositional Risk Minimization}, author={Divyat Mahajan, Mohammad Pezeshki, Ioannis Mitliagkas, Kartik Ahuja, Pascal Vincent}, journal={arXiv preprint arXiv:2410.06303}, year={2024}, archivePrefix={arXiv}, eprint={2410.06303}, primaryClass={cs.LG cs.AI} }
mahajan2024compositional
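A minimal sketch of the additive-energy idea mentioned in the abstract: the joint score of an attribute combination is a sum of per-attribute energy terms, so every combination can be scored even if it was never observed jointly. The linear "heads", attribute cardinalities, and normalization below are assumptions, not the authors' architecture, and the compositional-shift adjustment is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

d, n_shape, n_color = 8, 3, 4             # feature dimension and attribute cardinalities
W_shape = rng.normal(size=(d, n_shape))   # energy head for attribute 1
W_color = rng.normal(size=(d, n_color))   # energy head for attribute 2

def joint_log_probs(x):
    """Additive energies: score(shape=s, color=c | x) = E_shape(x)[s] + E_color(x)[c].
    Every (s, c) combination gets a score, seen together in training or not."""
    e_shape = x @ W_shape                               # (n_shape,)
    e_color = x @ W_color                               # (n_color,)
    scores = e_shape[:, None] + e_color[None, :]        # (n_shape, n_color) grid
    return scores - np.log(np.exp(scores).sum())        # normalize to log-probabilities

x = rng.normal(size=d)
log_p = joint_log_probs(x)
s, c = np.unravel_index(log_p.argmax(), log_p.shape)
print(f"predicted combination: shape={s}, color={c}, log-prob={log_p[s, c]:.3f}")
```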
arxiv-667211
2410.06304
Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning
<|reference_start|>Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning: Hallucinations in large language models (LLMs) pose significant challenges in tasks requiring complex multi-step reasoning, such as mathematical problem-solving. Existing approaches primarily detect the presence of hallucinations but lack a nuanced understanding of their types and manifestations. In this paper, we first introduce a comprehensive taxonomy that categorizes the common hallucinations in mathematical reasoning tasks into six types: fabrication, factual inconsistency, context inconsistency, instruction inconsistency, logical inconsistency, and logical error. We then propose FG-PRM (Fine-Grained Process Reward Model), an augmented model designed to detect and mitigate hallucinations in a fine-grained, step-level manner. To address the limitations of manually labeling training data, we propose an automated method for generating fine-grained hallucination data using LLMs. By injecting hallucinations into reasoning steps of correct solutions, we create a diverse and balanced synthetic dataset for training FG-PRM, which consists of six specialized Process Reward Models (PRMs), each tailored to detect a specific hallucination type. Our FG-PRM demonstrates superior performance across two key tasks: 1) Fine-grained hallucination detection: classifying hallucination types for each reasoning step; and 2) Verification: ranking multiple LLM-generated outputs to select the most accurate solution, mitigating reasoning hallucinations. Our experiments show that FG-PRM outperforms ChatGPT-3.5 and Claude-3 on fine-grained hallucination detection and substantially boosts the performance of LLMs on GSM8K and MATH benchmarks.<|reference_end|>
arxiv
@article{li2024fine-grained, title={Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning}, author={Ruosen Li, Ziming Luo, Xinya Du}, journal={arXiv preprint arXiv:2410.06304}, year={2024}, archivePrefix={arXiv}, eprint={2410.06304}, primaryClass={cs.CL} }
li2024fine-grained
arxiv-667212
2410.06306
Benchmarking of a new data splitting method on volcanic eruption data
<|reference_start|>Benchmarking of a new data splitting method on volcanic eruption data: In this paper, a novel method for data splitting is presented: an iterative procedure divides the input dataset of volcanic eruption data, chosen as the proposed use case, into two parts using a dissimilarity index calculated on the cumulative histograms of these two parts. The Cumulative Histogram Dissimilarity (CHD) index is introduced as part of the design. Based on the obtained results, the proposed method, compared to both random splitting and K-means implemented over different configurations, achieves the best performance, albeit with a slightly higher number of epochs. However, this demonstrates that the model can learn more deeply from the input dataset, which is attributable to the quality of the splitting. In fact, each model was trained with early stopping, which is suitable in case of overfitting, and the higher number of epochs in the proposed method demonstrates that early stopping did not detect overfitting, and consequently, the learning was optimal.<|reference_end|>
arxiv
@article{reale2024benchmarking, title={Benchmarking of a new data splitting method on volcanic eruption data}, author={Simona Reale, Pietro Di Stasio, Francesco Mauro, Alessandro Sebastianelli, Paolo Gamba, Silvia Liberata Ullo}, journal={arXiv preprint arXiv:2410.06306}, year={2024}, archivePrefix={arXiv}, eprint={2410.06306}, primaryClass={cs.CV} }
reale2024benchmarking
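The abstract names a Cumulative Histogram Dissimilarity (CHD) index but does not give its formula here. The sketch below assumes one plausible definition (mean absolute difference between normalized cumulative histograms on shared bins) purely to illustrate how such an index could score candidate splits; the paper's actual index and iterative procedure may differ.

```python
import numpy as np

def chd(a, b, bins=32):
    """Assumed Cumulative Histogram Dissimilarity: compare normalized cumulative
    histograms of two samples on a common binning."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    edges = np.linspace(lo, hi, bins + 1)
    ca = np.cumsum(np.histogram(a, bins=edges)[0]) / len(a)
    cb = np.cumsum(np.histogram(b, bins=edges)[0]) / len(b)
    return np.abs(ca - cb).mean()

rng = np.random.default_rng(4)
data = rng.gamma(shape=2.0, size=1000)           # stand-in for eruption measurements

# toy use: pick the random 80/20 split with the lowest CHD
best = None
for _ in range(50):
    idx = rng.permutation(len(data))
    train, test = data[idx[:800]], data[idx[800:]]
    score = chd(train, test)
    if best is None or score < best[0]:
        best = (score, train, test)
print(f"best CHD over 50 random splits: {best[0]:.4f}")
```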
arxiv-667213
2410.06307
Model Predictive Control is Almost Optimal for Restless Bandit
<|reference_start|>Model Predictive Control is Almost Optimal for Restless Bandit: We consider the discrete time infinite horizon average reward restless Markovian bandit (RMAB) problem. We propose a \emph{model predictive control} based non-stationary policy with a rolling computational horizon $\tau$. At each time-slot, this policy solves a $\tau$ horizon linear program whose first control value is kept as a control for the RMAB. Our solution requires minimal assumptions and quantifies the loss in optimality in terms of $\tau$ and the number of arms, $N$. We show that its sub-optimality gap is $O(1/\sqrt{N})$ in general, and $\exp(-\Omega(N))$ under a local-stability condition. Our proof is based on a framework from dynamic control known as \emph{dissipativity}. Our solution is easy to implement and performs very well in practice when compared to the state of the art. Further, both our solution and our proof methodology can easily be generalized to more general constrained MDP settings and should thus be of great interest to the burgeoning RMAB community.<|reference_end|>
arxiv
@article{gast2024model, title={Model Predictive Control is Almost Optimal for Restless Bandit}, author={Nicolas Gast, Dheeraj Narasimha}, journal={arXiv preprint arXiv:2410.06307}, year={2024}, archivePrefix={arXiv}, eprint={2410.06307}, primaryClass={math.OC cs.LG math.PR stat.ML} }
gast2024model
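A generic illustration of the rolling-horizon pattern described above: at each time step a $\tau$-horizon problem is solved and only the first control is applied. The system here is a trivial scalar integrator with a quadratic tracking cost (solved in closed form by least squares), not the restless-bandit linear program analysed in the paper.

```python
import numpy as np

def solve_horizon(x0, target, tau, control_weight=0.1):
    """Plan tau controls for x_{t+1} = x_t + u_t, minimizing
    sum_t (x_t - target)^2 + control_weight * u_t^2, via linear least squares."""
    # state after k steps is x0 + sum_{i<k} u_i: stack tracking and control-penalty rows
    A_track = np.tril(np.ones((tau, tau)))                 # cumulative-sum matrix
    b_track = np.full(tau, target - x0)
    A = np.vstack([A_track, np.sqrt(control_weight) * np.eye(tau)])
    b = np.concatenate([b_track, np.zeros(tau)])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

x, target, tau = 0.0, 5.0, 8
trajectory = [x]
for _ in range(20):
    u = solve_horizon(x, target, tau)
    x = x + u[0]                      # apply only the first planned control, then re-plan
    trajectory.append(x)
print(np.round(trajectory, 2))
```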
arxiv-667214
2410.06308
Quantifying Training Difficulty and Accelerating Convergence in Neural Network-Based PDE Solvers
<|reference_start|>Quantifying Training Difficulty and Accelerating Convergence in Neural Network-Based PDE Solvers: Neural network-based methods have emerged as powerful tools for solving partial differential equations (PDEs) in scientific and engineering applications, particularly when handling complex domains or incorporating empirical data. These methods leverage neural networks as basis functions to approximate PDE solutions. However, training such networks can be challenging, often resulting in limited accuracy. In this paper, we investigate the training dynamics of neural network-based PDE solvers with a focus on the impact of initialization techniques. We assess training difficulty by analyzing the eigenvalue distribution of the kernel and apply the concept of effective rank to quantify this difficulty, where a larger effective rank correlates with faster convergence of the training error. Building upon this, we discover through theoretical analysis and numerical experiments that two initialization techniques, partition of unity (PoU) and variance scaling (VS), enhance the effective rank, thereby accelerating the convergence of training error. Furthermore, comprehensive experiments using popular PDE-solving frameworks, such as PINN, Deep Ritz, and the operator learning framework DeepOnet, confirm that these initialization techniques consistently speed up convergence, in line with our theoretical findings.<|reference_end|>
arxiv
@article{chen2024quantifying, title={Quantifying Training Difficulty and Accelerating Convergence in Neural Network-Based PDE Solvers}, author={Chuqi Chen, Qixuan Zhou, Yahong Yang, Yang Xiang, Tao Luo}, journal={arXiv preprint arXiv:2410.06308}, year={2024}, archivePrefix={arXiv}, eprint={2410.06308}, primaryClass={math.NA cs.LG cs.NA} }
chen2024quantifying
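The abstract uses the effective rank of a kernel's eigenvalue distribution as a proxy for training difficulty. A common definition (the exponential of the Shannon entropy of the normalized spectrum, following Roy and Vetterli) is sketched below; the paper's exact definition and kernel may differ.

```python
import numpy as np

def effective_rank(matrix, eps=1e-12):
    """Effective rank = exp(entropy of the normalized singular-value distribution)."""
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / (s.sum() + eps)
    entropy = -(p * np.log(p + eps)).sum()
    return float(np.exp(entropy))

rng = np.random.default_rng(5)
well_spread = rng.normal(size=(100, 100))                   # spectrum fairly spread out
nearly_rank1 = np.outer(rng.normal(size=100), rng.normal(size=100)) \
               + 1e-3 * rng.normal(size=(100, 100))         # dominated by one direction
print("effective rank (well spread):  ", round(effective_rank(well_spread), 1))
print("effective rank (nearly rank-1):", round(effective_rank(nearly_rank1), 1))
```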
arxiv-667215
2410.06311
A Comparative Study of Hybrid Models in Health Misinformation Text Classification
<|reference_start|>A Comparative Study of Hybrid Models in Health Misinformation Text Classification: This study evaluates the effectiveness of machine learning (ML) and deep learning (DL) models in detecting COVID-19-related misinformation on online social networks (OSNs), aiming to develop more effective tools for countering the spread of health misinformation during the pandemic. The study trained and tested various ML classifiers (Naive Bayes, SVM, Random Forest, etc.), DL models (CNN, LSTM, hybrid CNN+LSTM), and pretrained language models (DistilBERT, RoBERTa) on the "COVID19-FNIR DATASET". These models were evaluated for accuracy, F1 score, recall, precision, and ROC, and used preprocessing techniques like stemming and lemmatization. The results showed SVM performed well, achieving a 94.41% F1-score. DL models with Word2Vec embeddings exceeded 98% in all performance metrics (accuracy, F1 score, recall, precision & ROC). The CNN+LSTM hybrid models also exceeded 98% across performance metrics, outperforming pretrained models like DistilBERT and RoBERTa. Our study concludes that DL and hybrid DL models are more effective than conventional ML algorithms for detecting COVID-19 misinformation on OSNs. The findings highlight the importance of advanced neural network approaches and large-scale pretraining in misinformation detection. Future research should optimize these models for various misinformation types and adapt to changing OSNs, aiding in combating health misinformation.<|reference_end|>
arxiv
@article{sikosana2024a, title={A Comparative Study of Hybrid Models in Health Misinformation Text Classification}, author={Mkululi Sikosana, Oluwaseun Ajao and Sean Maudsley-Barton}, journal={In Proceedings of the 4th International Workshop on Open Challenges in Online Social Networks (pp. 18-25) 2024}, year={2024}, doi={10.1145/3677117.3685007}, archivePrefix={arXiv}, eprint={2410.06311}, primaryClass={cs.IR cs.AI cs.LG} }
sikosana2024a
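A minimal Keras definition of the kind of hybrid CNN+LSTM text classifier the study benchmarks. The vocabulary size, sequence length, and other hyperparameters below are placeholders, not the values used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMB_DIM = 20000, 128, 100     # placeholder hyperparameters

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,), dtype="int32"),                 # token-id sequences
    layers.Embedding(VOCAB_SIZE, EMB_DIM),                         # could be Word2Vec-initialized
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),  # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                               # sequential context
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),                         # misinformation vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```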
arxiv-667216
2410.06314
Temporal Image Caption Retrieval Competition -- Description and Results
<|reference_start|>Temporal Image Caption Retrieval Competition -- Description and Results: Multimodal models, which combine visual and textual information, have recently gained significant recognition. This paper addresses the multimodal challenge of Text-Image retrieval and introduces a novel task that extends the modalities to include temporal data. The Temporal Image Caption Retrieval Competition (TICRC) presented in this paper is based on the Chronicling America and Challenging America projects, which offer access to an extensive collection of digitized historic American newspapers spanning 274 years. In addition to the competition results, we provide an analysis of the delivered dataset and the process of its creation.<|reference_end|>
arxiv
@article{pokrywka2024temporal, title={Temporal Image Caption Retrieval Competition -- Description and Results}, author={Jakub Pokrywka, Piotr Wierzcho\'n, Kornel Weryszko, Krzysztof Jassem}, journal={Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. \'Sl\k{e}zak (eds). ACSIS, Vol. 35, pages 1331-1336 (2023)}, year={2024}, doi={10.15439/2023F7280}, archivePrefix={arXiv}, eprint={2410.06314}, primaryClass={cs.CV cs.CL} }
pokrywka2024temporal
arxiv-667217
2410.06315
Incremental Learning for Robot Shared Autonomy
<|reference_start|>Incremental Learning for Robot Shared Autonomy: Shared autonomy holds promise for improving the usability and accessibility of assistive robotic arms, but current methods often rely on costly expert demonstrations and lack the ability to adapt post-deployment. This paper introduces ILSA, an Incrementally Learned Shared Autonomy framework that continually improves its assistive control policy through repeated user interactions. ILSA leverages synthetic kinematic trajectories for initial pretraining, reducing the need for expert demonstrations, and then incrementally finetunes its policy after each manipulation interaction, with mechanisms to balance new knowledge acquisition with existing knowledge retention during incremental learning. We validate ILSA for complex long-horizon tasks through a comprehensive ablation study and a user study with 20 participants, demonstrating its effectiveness and robustness in both quantitative performance and user-reported qualitative metrics. Code and videos are available at https://ilsa-robo.github.io/.<|reference_end|>
arxiv
@article{tao2024incremental, title={Incremental Learning for Robot Shared Autonomy}, author={Yiran Tao, Guixiu Qiao, Dan Ding, Zackory Erickson}, journal={arXiv preprint arXiv:2410.06315}, year={2024}, archivePrefix={arXiv}, eprint={2410.06315}, primaryClass={cs.RO} }
tao2024incremental
arxiv-667218
2410.06317
Learning in complex action spaces without policy gradients
<|reference_start|>Learning in complex action spaces without policy gradients: Conventional wisdom suggests that policy gradient methods are better suited to complex action spaces than action-value methods. However, foundational studies have shown equivalences between these paradigms in small and finite action spaces (O'Donoghue et al., 2017; Schulman et al., 2017a). This raises the question of why their computational applicability and performance diverge as the complexity of the action space increases. We hypothesize that the apparent superiority of policy gradients in such settings stems not from intrinsic qualities of the paradigm, but from universal principles that can also be applied to action-value methods to serve similar functionality. We identify three such principles and provide a framework for incorporating them into action-value methods. To support our hypothesis, we instantiate this framework in what we term QMLE, for Q-learning with maximum likelihood estimation. Our results show that QMLE can be applied to complex action spaces with a controllable computational cost that is comparable to that of policy gradient methods, all without using policy gradients. Furthermore, QMLE demonstrates strong performance on the DeepMind Control Suite, even when compared to the state-of-the-art methods such as DMPO and D4PG.<|reference_end|>
arxiv
@article{tavakoli2024learning, title={Learning in complex action spaces without policy gradients}, author={Arash Tavakoli, Sina Ghiassian, Nemanja Raki\'cevi\'c}, journal={arXiv preprint arXiv:2410.06317}, year={2024}, archivePrefix={arXiv}, eprint={2410.06317}, primaryClass={cs.LG cs.AI stat.ML} }
tavakoli2024learning
arxiv-667219
2410.06319
Mixed precision sketching for least-squares problems and its application in GMRES-based iterative refinement
<|reference_start|>Mixed precision sketching for least-squares problems and its application in GMRES-based iterative refinement: Sketching-based preconditioners have been shown to accelerate the solution of dense least-squares problems with coefficient matrices having substantially more rows than columns. The cost of generating these preconditioners can be reduced by employing low precision floating-point formats for all or part of the computations. We perform a finite precision analysis of a mixed precision algorithm that computes the $R$-factor of a QR factorization of the sketched coefficient matrix. Two precisions can be chosen and the analysis allows understanding how to set these precisions to exploit the potential benefits of low precision formats and still guarantee an effective preconditioner. If the nature of the least-squares problem requires a solution with a small forward error, then mixed precision iterative refinement (IR) may be needed. For ill-conditioned problems the GMRES-based IR approach can be used, but a good preconditioner is crucial to ensure convergence. We theoretically show when the sketching-based preconditioner can guarantee that the GMRES-based IR reduces the relative forward error of the least-squares solution and the residual to the level of the working precision unit roundoff. Small numerical examples illustrate the analysis.<|reference_end|>
arxiv
@article{carson2024mixed, title={Mixed precision sketching for least-squares problems and its application in GMRES-based iterative refinement}, author={Erin Carson and Ieva Dau\v{z}ickait\.e}, journal={arXiv preprint arXiv:2410.06319}, year={2024}, archivePrefix={arXiv}, eprint={2410.06319}, primaryClass={math.NA cs.NA} }
carson2024mixed
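A numpy/scipy sketch of the general sketch-and-precondition pattern the paper analyses in mixed precision: take a Gaussian sketch of the tall matrix, compute a QR factorization of the sketched matrix in lower precision, and use the resulting R as a right preconditioner for an iterative least-squares solver (LSQR here, rather than the GMRES-based refinement studied in the paper). The precision choices (float32 for the sketched QR, float64 elsewhere) are illustrative only.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(6)
m, n = 5000, 50
A = rng.normal(size=(m, n)) @ np.diag(np.logspace(0, 4, n))   # ill-conditioned tall matrix
b = rng.normal(size=m)

# sketch and factor in low precision (illustrative choice of float32)
s = 4 * n
S = rng.normal(size=(s, m)).astype(np.float32) / np.sqrt(s)
_, R = np.linalg.qr(S @ A.astype(np.float32))
R = R.astype(np.float64)

# right-preconditioned operator: solve min ||A R^{-1} y - b||, then recover x = R^{-1} y
def matvec(y):
    return A @ np.linalg.solve(R, y)

def rmatvec(z):
    return np.linalg.solve(R.T, A.T @ z)

M = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
result = lsqr(M, b, atol=1e-12, btol=1e-12)
x = np.linalg.solve(R, result[0])
print("LSQR iterations:", result[2], " residual norm:", np.linalg.norm(A @ x - b))
```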
arxiv-667220
2410.06321
An Algorithm for Distributed Computation of Reachable Sets for Multi-Agent Systems
<|reference_start|>An Algorithm for Distributed Computation of Reachable Sets for Multi-Agent Systems: In this paper, we consider the problem of distributed reachable set computation for multi-agent systems (MASs) interacting over an undirected, stationary graph. A full state-feedback control input for such MASs depends not only on the current agent's state, but also on those of its neighbors. However, in most MAS applications, the dynamics are obscured by individual agents. This makes reachable set computation, in a fully distributed manner, a challenging problem. We utilize the idea of polytopic reachable set approximation and generalize it to a MAS setup. We formulate the resulting sub-problems in a fully distributed manner and provide convergence guarantees for the associated computations. The proposed algorithm's convergence is proved for two cases: static MAS graphs, and time-varying graphs under certain restrictions.<|reference_end|>
arxiv
@article{thapliyal2024an, title={An Algorithm for Distributed Computation of Reachable Sets for Multi-Agent Systems}, author={Omanshu Thapliyal, Shanelle Clarke, Inseok Hwang}, journal={arXiv preprint arXiv:2410.06321}, year={2024}, archivePrefix={arXiv}, eprint={2410.06321}, primaryClass={eess.SY cs.RO cs.SY} }
thapliyal2024an
arxiv-667221
2410.06322
A Banach space formulation for the fully dynamic Navier-Stokes-Biot coupled problem
<|reference_start|>A Banach space formulation for the fully dynamic Navier-Stokes-Biot coupled problem: We introduce and analyse a fully-mixed formulation for the coupled problem arising in the interaction between a free fluid and a poroelastic medium. The flows in the free fluid and poroelastic regions are governed by the Navier-Stokes and Biot equations, respectively, and the transmission conditions are given by mass conservation, balance of stresses, and the Beavers-Joseph-Saffman law. We apply dual-mixed formulations in both the Navier-Stokes and Darcy equations, where the symmetry of the Navier-Stokes pseudostress tensor is imposed in a weak sense, and a displacement-based formulation for the elasticity equation. In turn, since the transmission conditions are essential in the fully mixed formulation, they are imposed weakly by introducing the traces of the fluid velocity and the poroelastic medium pressure on the interface as the associated Lagrange multipliers. Existence and uniqueness of a solution are established for the continuous weak formulation, as well as for a semidiscrete continuous-in-time formulation with nonmatching grids, in a Banach space setting, employing classical results on monotone and nonlinear operators and a regularization technique together with the Banach fixed point approach. We then present an error analysis with corresponding rates of convergence for the semidiscrete continuous-in-time formulation. Numerical experiments are presented to verify the theoretical rates of convergence and illustrate the performance of the method for application to flow through a filter.<|reference_end|>
arxiv
@article{caucao2024a, title={A Banach space formulation for the fully dynamic Navier-Stokes-Biot coupled problem}, author={Sergio Caucao, Aashi Dalal, and Ivan Yotov}, journal={arXiv preprint arXiv:2410.06322}, year={2024}, archivePrefix={arXiv}, eprint={2410.06322}, primaryClass={math.NA cs.NA} }
caucao2024a
arxiv-667222
2410.06324
Differentiation Through Black-Box Quadratic Programming Solvers
<|reference_start|>Differentiation Through Black-Box Quadratic Programming Solvers: In recent years, many deep learning approaches have incorporated layers that solve optimization problems (e.g., linear, quadratic, and semidefinite programs). Integrating these optimization problems as differentiable layers requires computing the derivatives of the optimization problem's solution with respect to its objective and constraints. This has so far prevented the use of state-of-the-art black-box numerical solvers within neural networks, as they lack a differentiable interface. To address this issue for one of the most common convex optimization problems -- quadratic programming (QP) -- we introduce dQP, a modular framework that enables plug-and-play differentiation for any QP solver, allowing seamless integration into neural networks and bi-level optimization tasks. Our solution is based on the core theoretical insight that knowledge of the active constraint set at the QP optimum allows for explicit differentiation. This insight reveals a unique relationship between the computation of the solution and its derivative, enabling efficient differentiation of any solver, that only requires the primal solution. Our implementation, which will be made publicly available, interfaces with an existing framework that supports over 15 state-of-the-art QP solvers, providing each with a fully differentiable backbone for immediate use as a differentiable layer in learning setups. To demonstrate the scalability and effectiveness of dQP, we evaluate it on a large benchmark dataset of QPs with varying structures. We compare dQP with existing differentiable QP methods, demonstrating its advantages across a range of problems, from challenging small and dense problems to large-scale sparse ones, including a novel bi-level geometry optimization problem.<|reference_end|>
arxiv
@article{magoon2024differentiation, title={Differentiation Through Black-Box Quadratic Programming Solvers}, author={Connor W. Magoon, Fengyu Yang, Noam Aigerman, Shahar Z. Kovalsky}, journal={arXiv preprint arXiv:2410.06324}, year={2024}, archivePrefix={arXiv}, eprint={2410.06324}, primaryClass={cs.LG math.OC} }
magoon2024differentiation
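The core insight stated in the abstract (once the active constraint set at the QP optimum is known, the solution becomes an explicit, differentiable function) can be illustrated on a small equality-constrained case: with the active constraints fixed, the KKT conditions are linear, so the Jacobian of the solution with respect to the linear term q follows from one extra solve of the same KKT system. This standalone numpy sketch is not the dQP implementation.

```python
import numpy as np

# QP: minimize 0.5 x^T P x + q^T x  subject to the active constraints A x = b
P = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])        # active constraint: x0 + x1 = 1
b = np.array([1.0])

n, m = P.shape[0], A.shape[0]
KKT = np.block([[P, A.T], [A, np.zeros((m, m))]])

# primal/dual solution from the KKT system
sol = np.linalg.solve(KKT, np.concatenate([-q, b]))
x_star = sol[:n]

# dx*/dq: differentiate the KKT equations; the right-hand side becomes [-I; 0]
rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
dxdq = np.linalg.solve(KKT, rhs)[:n, :]

# finite-difference check of the analytic Jacobian
eps = 1e-6
fd = np.zeros((n, n))
for j in range(n):
    q_p = q.copy()
    q_p[j] += eps
    x_p = np.linalg.solve(KKT, np.concatenate([-q_p, b]))[:n]
    fd[:, j] = (x_p - x_star) / eps
print("analytic Jacobian:\n", dxdq, "\nfinite-difference Jacobian:\n", fd)
```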
arxiv-667223
2410.06325
Meta-Learning Augmented MPC for Disturbance-Aware Motion Planning and Control of Quadrotors
<|reference_start|>Meta-Learning Augmented MPC for Disturbance-Aware Motion Planning and Control of Quadrotors: A major challenge in autonomous flights is unknown disturbances, which can jeopardize safety and lead to collisions, especially in obstacle-rich environments. This paper presents a disturbance-aware motion planning and control framework designed for autonomous aerial flights. The framework is composed of two key components: a disturbance-aware motion planner and a tracking controller. The disturbance-aware motion planner consists of a predictive control scheme and a learned model of disturbances that is adapted online. The tracking controller is designed using contraction control methods to provide safety bounds on the quadrotor behaviour in the vicinity of the obstacles with respect to the disturbance-aware motion plan. Finally, the algorithm is tested in simulation scenarios with a quadrotor facing strong crosswind and ground-induced disturbances.<|reference_end|>
arxiv
@article{lapandić2024meta-learning, title={Meta-Learning Augmented MPC for Disturbance-Aware Motion Planning and Control of Quadrotors}, author={D\v{z}enan Lapandi\'c, Fengze Xie, Christos K. Verginis, Soon-Jo Chung, Dimos V. Dimarogonas, Bo Wahlberg}, journal={arXiv preprint arXiv:2410.06325}, year={2024}, archivePrefix={arXiv}, eprint={2410.06325}, primaryClass={cs.RO cs.SY eess.SY} }
lapandić2024meta-learning
arxiv-667224
2410.06327
Towards a GENEA Leaderboard -- an Extended, Living Benchmark for Evaluating and Advancing Conversational Motion Synthesis
<|reference_start|>Towards a GENEA Leaderboard -- an Extended, Living Benchmark for Evaluating and Advancing Conversational Motion Synthesis: Current evaluation practices in speech-driven gesture generation lack standardisation and focus on aspects that are easy to measure over aspects that actually matter. This leads to a situation where it is impossible to know what is the state of the art, or to know which method works better for which purpose when comparing two publications. In this position paper, we review and give details on issues with existing gesture-generation evaluation, and present a novel proposal for remedying them. Specifically, we announce an upcoming living leaderboard to benchmark progress in conversational motion synthesis. Unlike earlier gesture-generation challenges, the leaderboard will be updated with large-scale user studies of new gesture-generation systems multiple times per year, and systems on the leaderboard can be submitted to any publication venue that their authors prefer. By evolving the leaderboard evaluation data and tasks over time, the effort can keep driving progress towards the most important end goals identified by the community. We actively seek community involvement across the entire evaluation pipeline: from data and tasks for the evaluation, via tooling, to the systems evaluated. In other words, our proposal will not only make it easier for researchers to perform good evaluations, but their collective input and contributions will also help drive the future of gesture-generation research.<|reference_end|>
arxiv
@article{nagy2024towards, title={Towards a GENEA Leaderboard -- an Extended, Living Benchmark for Evaluating and Advancing Conversational Motion Synthesis}, author={Rajmund Nagy, Hendric Voss, Youngwoo Yoon, Taras Kucherenko, Teodor Nikolov, Thanh Hoang-Minh, Rachel McDonnell, Stefan Kopp, Michael Neff, Gustav Eje Henter}, journal={arXiv preprint arXiv:2410.06327}, year={2024}, archivePrefix={arXiv}, eprint={2410.06327}, primaryClass={cs.HC cs.CV cs.GR cs.LG} }
nagy2024towards
arxiv-667225
2410.06328
Auto-Evolve: Enhancing Large Language Model's Performance via Self-Reasoning Framework
<|reference_start|>Auto-Evolve: Enhancing Large Language Model's Performance via Self-Reasoning Framework: Recent advancements in prompt engineering strategies, such as Chain-of-Thought (CoT) and Self-Discover, have demonstrated significant potential in improving the reasoning abilities of Large Language Models (LLMs). However, these state-of-the-art (SOTA) prompting strategies rely on a single or fixed set of static seed reasoning modules like \emph{"think step by step"} or \emph{"break down this problem"} intended to simulate the human approach to problem-solving. This constraint limits the flexibility of models in tackling diverse problems effectively. In this paper, we introduce Auto-Evolve, a novel framework that enables LLMs to self-create dynamic reasoning modules and a downstream action plan, resulting in significant improvements over current SOTA methods. We evaluate Auto-Evolve on the challenging BigBench-Hard (BBH) dataset with Claude 2.0, Claude 3 Sonnet, Mistral Large, and GPT 4, where it consistently outperforms the SOTA prompt strategies. Auto-Evolve outperforms CoT by up to 10.4\% and on average by 7\% across these four models. Our framework introduces two innovations: a) Auto-Evolve dynamically generates reasoning modules for each task while aligning with the human reasoning paradigm, thus eliminating the need for predefined templates. b) We introduce an iterative refinement component that incrementally refines instruction guidance for LLMs and helps boost performance by an average of 2.8\% compared to doing it in a single step.<|reference_end|>
arxiv
@article{aswani2024auto-evolve:, title={Auto-Evolve: Enhancing Large Language Model's Performance via Self-Reasoning Framework}, author={Krishna Aswani, Huilin Lu, Pranav Patankar, Priya Dhalwani, Iris Tan, Jayant Ganeshmohan, Simon Lacasse}, journal={arXiv preprint arXiv:2410.06328}, year={2024}, archivePrefix={arXiv}, eprint={2410.06328}, primaryClass={cs.CL cs.AI cs.LG} }
aswani2024auto-evolve:
arxiv-667226
2410.06329
Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors
<|reference_start|>Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors: Obtaining a reliable estimate of the joint probability mass function (PMF) of a set of random variables from observed data is a significant objective in statistical signal processing and machine learning. Modelling the joint PMF as a tensor that admits a low-rank canonical polyadic decomposition (CPD) has enabled the development of efficient PMF estimation algorithms. However, these algorithms require the rank (model order) of the tensor to be specified beforehand. In real-world applications, the true rank is unknown. Therefore, an appropriate rank is usually selected from a candidate set either by observing validation errors or by computing various likelihood-based information criteria, a procedure which is computationally expensive for large datasets. This paper presents a novel Bayesian framework for estimating the joint PMF and automatically inferring its rank from observed data. We specify a Bayesian PMF estimation model and employ appropriate prior distributions for the model parameters, allowing for tuning-free rank inference via a single training run. We then derive a deterministic solution based on variational inference (VI) to approximate the posterior distributions of various model parameters. Additionally, we develop a scalable version of the VI-based approach by leveraging stochastic variational inference (SVI) to arrive at an efficient algorithm whose complexity scales sublinearly with the size of the dataset. Numerical experiments involving both synthetic data and real movie recommendation data illustrate the advantages of our VI and SVI-based methods in terms of estimation accuracy, automatic rank detection, and computational efficiency.<|reference_end|>
arxiv
@article{chege2024bayesian, title={Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors}, author={Joseph K. Chege, Arie Yeredor, Martin Haardt}, journal={arXiv preprint arXiv:2410.06329}, year={2024}, archivePrefix={arXiv}, eprint={2410.06329}, primaryClass={stat.ML cs.LG eess.SP} }
chege2024bayesian
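A small illustration of the low-rank CPD model for a joint PMF that the abstract builds on: the joint probability tensor is a mixture over R latent components, each a product of per-variable conditional PMFs. The sizes, rank, and sampling step below are arbitrary, and the paper's VI/SVI inference machinery is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_stochastic(rows, cols):
    M = rng.random((rows, cols))
    return M / M.sum(axis=0, keepdims=True)       # each column is a valid PMF

R = 3                                             # CPD rank (number of latent components)
card = [4, 5, 6]                                  # cardinalities of the three variables
lam = rng.dirichlet(np.ones(R))                   # mixture weights
factors = [random_stochastic(I, R) for I in card]   # A_k[i, r] = P(X_k = i | H = r)

# joint PMF tensor: P(x1, x2, x3) = sum_r lam_r * prod_k A_k[x_k, r]
P = np.einsum("r,ir,jr,kr->ijk", lam, *factors)
assert np.isclose(P.sum(), 1.0)

# draw samples from the model: latent component first, then each variable
h = rng.choice(R, size=1000, p=lam)
samples = np.stack([
    np.array([rng.choice(card[k], p=factors[k][:, r]) for r in h]) for k in range(3)
], axis=1)
print("tensor shape:", P.shape, " first samples:\n", samples[:5])
```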
arxiv-667227
2410.06330
Local Surface Parameterizations via Geodesic Splines
<|reference_start|>Local Surface Parameterizations via Geodesic Splines: We present a general method for computing local parameterizations rooted at a point on a surface, where the surface is described only through a signed implicit function and a corresponding projection function. Using a two-stage process, we compute several points radially emanating from the map origin, and interpolate between them with a spline surface. The narrow interface of our method allows it to support several kinds of geometry such as signed distance functions, general analytic implicit functions, triangle meshes, neural implicits, and point clouds. We demonstrate the high quality of our generated parameterizations on a variety of examples, and show applications in local texturing and surface curve drawing.<|reference_end|>
arxiv
@article{madan2024local, title={Local Surface Parameterizations via Geodesic Splines}, author={Abhishek Madan, David I.W. Levin}, journal={arXiv preprint arXiv:2410.06330}, year={2024}, archivePrefix={arXiv}, eprint={2410.06330}, primaryClass={cs.GR} }
madan2024local
arxiv-667228
2410.06331
Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing
<|reference_start|>Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing: The locate-then-edit paradigm has shown significant promise for knowledge editing (KE) in Large Language Models (LLMs). While previous methods perform well on single-hop fact recall tasks, they consistently struggle with multi-hop factual recall tasks involving newly edited knowledge. In this paper, leveraging tools in mechanistic interpretability, we first identify that in multi-hop tasks, LLMs tend to retrieve implicit subject knowledge from deeper MLP layers, unlike single-hop tasks, which rely on earlier layers. This distinction explains the poor performance of current methods in multi-hop queries, as they primarily focus on editing shallow layers, leaving deeper layers unchanged. To address this, we propose IFMET, a novel locate-then-edit KE approach designed to edit both shallow and deep MLP layers. IFMET employs multi-hop editing prompts and supplementary sets to locate and modify knowledge across different reasoning stages. Experimental results demonstrate that IFMET significantly improves performance on multi-hop factual recall tasks, effectively overcoming the limitations of previous locate-then-edit methods.<|reference_end|>
arxiv
@article{zhang2024locate-then-edit, title={Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing}, author={Zhuoran Zhang, Yongxiang Li, Zijian Kan, Keyuan Cheng, Lijie Hu, and Di Wang}, journal={arXiv preprint arXiv:2410.06331}, year={2024}, archivePrefix={arXiv}, eprint={2410.06331}, primaryClass={cs.CL cs.AI cs.LG} }
zhang2024locate-then-edit
arxiv-667229
2410.06332
Boolean Nearest Neighbor Language in the Knowledge Compilation Map
<|reference_start|>Boolean Nearest Neighbor Language in the Knowledge Compilation Map: The Boolean Nearest Neighbor (BNN) representation of Boolean functions was recently introduced by Hajnal, Liu and Turan. A BNN representation of $f$ is a pair $(P,N)$ of sets of Boolean vectors (called positive and negative prototypes) where $f(x)=1$ for every positive prototype $x \in P$, $f(x)=0$ for every negative prototype $x \in N$, and the value $f(x)$ for $x \not\in P \cup N$ is determined by the type of the closest prototype. The main aim of this paper is to determine the position of the BNN language in the Knowledge Compilation Map (KCM). To this end, we derive results which compare the succinctness of the BNN language to several standard languages from KCM, and determine the complexity status of most standard queries and transformations for BNN inputs.<|reference_end|>
arxiv
@article{čepek2024boolean, title={Boolean Nearest Neighbor Language in the Knowledge Compilation Map}, author={Ond\v{r}ej \v{C}epek, Jelena Gli\v{s}i\'c}, journal={arXiv preprint arXiv:2410.06332}, year={2024}, archivePrefix={arXiv}, eprint={2410.06332}, primaryClass={cs.AI} }
čepek2024boolean
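The definition in the abstract translates directly into code: a BNN pair (P, N) classifies a vector by membership, or else by the type of the nearest prototype. The evaluator below uses Hamming distance (the abstract does not fix the metric; Hamming is a natural choice for Boolean vectors) and breaks ties toward 1, which is an arbitrary choice.

```python
import numpy as np

def bnn_evaluate(x, P, N):
    """Evaluate a Boolean Nearest Neighbor representation (P, N) at point x."""
    x = np.asarray(x)
    P, N = np.asarray(P), np.asarray(N)
    if any((x == p).all() for p in P):
        return 1
    if any((x == n).all() for n in N):
        return 0
    d_pos = min(int((x != p).sum()) for p in P)   # Hamming distance to positives
    d_neg = min(int((x != n).sum()) for n in N)   # Hamming distance to negatives
    return 1 if d_pos <= d_neg else 0             # arbitrary tie-breaking toward 1

# prototypes for the 2-variable OR function: (1,1) positive, (0,0) negative
P = [[1, 1]]
N = [[0, 0]]
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", bnn_evaluate(x, P, N))
```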
arxiv-667230
2410.06333
Batched Bayesian optimization with correlated candidate uncertainties
<|reference_start|>Batched Bayesian optimization with correlated candidate uncertainties: Batched Bayesian optimization (BO) can accelerate molecular design by efficiently identifying top-performing compounds from a large chemical library. Existing acquisition strategies for batch design in BO aim to balance exploration and exploitation. This often involves optimizing non-additive batch acquisition functions, necessitating approximation via myopic construction and/or diversity heuristics. In this work, we propose an acquisition strategy for discrete optimization that is motivated by pure exploitation, qPO (multipoint Probability of Optimality). qPO maximizes the probability that the batch includes the true optimum, which is expressible as the sum over individual acquisition scores and thereby circumvents the combinatorial challenge of optimizing a batch acquisition function. We differentiate the proposed strategy from parallel Thompson sampling and discuss how it implicitly captures diversity. Finally, we apply our method to the model-guided exploration of large chemical libraries and provide empirical evidence that it performs better than or on par with state-of-the-art methods in batched Bayesian optimization.<|reference_end|>
arxiv
@article{fromer2024batched, title={Batched Bayesian optimization with correlated candidate uncertainties}, author={Jenna Fromer, Runzhong Wang, Mrunali Manjrekar, Austin Tripp, Jos\'e Miguel Hern\'andez-Lobato, Connor W. Coley}, journal={arXiv preprint arXiv:2410.06333}, year={2024}, archivePrefix={arXiv}, eprint={2410.06333}, primaryClass={cs.LG stat.ML} }
fromer2024batched
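The acquisition idea described above (score each candidate by its probability of being the optimum, then take the top k) can be approximated directly from joint posterior samples, which keeps the batch score a sum of per-candidate terms. The Gaussian posterior over a 1-D grid below is a stand-in for illustration, not the paper's molecular setting or its exact estimator.

```python
import numpy as np

rng = np.random.default_rng(8)

# stand-in posterior over 200 candidates: correlated Gaussian with an RBF kernel
X = np.linspace(0, 10, 200)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2) + 1e-8 * np.eye(200)
mean = np.sin(X)
samples = rng.multivariate_normal(mean, K, size=4000)   # joint posterior draws

# qPO-style score: estimated probability that each candidate is the maximizer
is_best = samples.argmax(axis=1)
p_opt = np.bincount(is_best, minlength=len(X)) / len(samples)

batch = np.argsort(p_opt)[::-1][:5]                     # pick the top-5 batch
print("selected candidates:", sorted(batch.tolist()))
print("their probabilities of optimality:", np.round(p_opt[batch], 3))
```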
arxiv-667231
2410.06336
Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns
<|reference_start|>Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns: Despite the increasing use of large language models (LLMs) in everyday life among neurodivergent individuals, our knowledge of how they engage with, and perceive LLMs remains limited. In this study, we investigate how neurodivergent individuals interact with LLMs by qualitatively analyzing topically related discussions from 61 neurodivergent communities on Reddit. Our findings reveal 20 specific LLM use cases across five core thematic areas of use among neurodivergent users: emotional well-being, mental health support, interpersonal communication, learning, and professional development and productivity. We also identified key challenges, including overly neurotypical LLM responses and the limitations of text-based interactions. In response to such challenges, some users actively seek advice by sharing input prompts and corresponding LLM responses. Others develop workarounds by experimenting and modifying prompts to be more neurodivergent-friendly. Despite these efforts, users have significant concerns around LLM use, including potential overreliance and fear of replacing human connections. Our analysis highlights the need to make LLMs more inclusive for neurodivergent users and implications around how LLM technologies can reinforce unintended consequences and behaviors.<|reference_end|>
arxiv
@article{carik2024exploring, title={Exploring Large Language Models Through a Neurodivergent Lens: Use, Challenges, Community-Driven Workarounds, and Concerns}, author={Buse Carik, Kaike Ping, Xiaohan Ding, Eugenia H. Rho}, journal={arXiv preprint arXiv:2410.06336}, year={2024}, archivePrefix={arXiv}, eprint={2410.06336}, primaryClass={cs.HC} }
carik2024exploring
arxiv-667232
2410.06337
Faster Algorithms for Graph Monopolarity
<|reference_start|>Faster Algorithms for Graph Monopolarity: A graph $G = (V,E)$ is said to be monopolar if its vertex set admits a partition $V = (C \uplus{} I)$ where $G[C]$ is a cluster graph and $I$ is an independent set in $G$. Monopolar graphs generalize both bipartite graphs and split graphs, and they have been extensively studied from both graph-theoretic and algorithmic points of view. In this work we focus on the problem MONOPOLAR RECOGNITION (MR) of deciding whether a given graph is monopolar. MR is known to be solvable in polynomial time in certain classes of graphs such as cographs and claw-free graphs, and to be NP-Hard in various restricted classes such as subcubic planar graphs. We initiate the study of exact exponential-time algorithms for MR and allied problems. We design an algorithm that solves MR in $O^{*}(1.3734^{n})$ time on input graphs with $n$ vertices. In fact we solve the more general problems MONOPOLAR EXTENSION (ME) and LIST MONOPOLAR PARTITION (LMP), which were introduced in the literature as part of the study of graph monopolarity, in $O^{*}(1.3734^{n})$ time. We also design fast parameterized algorithms for MR using two notions of distance from triviality as the parameters. Our FPT algorithms solve MR in $O^{*}(3.076^{k_{v}})$ and $O^{*}(2.253^{k_{e}})$ time, where $k_{v}$ and $k_{e}$ are, respectively, the sizes of the smallest claw-free vertex and edge deletion sets of the input graph. These results are a significant addition to the small number of FPT algorithms currently known for MR. Le and Nevries have shown that if a graph $G$ is chair-free, then an instance $(G,C')$ of ME can be solved in polynomial time for any subset $C'$ of its vertices. We significantly generalize this result; we show that we can solve instances $(G,C')$ of ME in polynomial time for arbitrary graphs $G$ and any chair-free vertex deletion set $C'$ of $G$. We believe this result could be of independent interest.<|reference_end|>
arxiv
@article{philip2024faster, title={Faster Algorithms for Graph Monopolarity}, author={Geevarghese Philip and Shrinidhi Teganahally Sridhara}, journal={arXiv preprint arXiv:2410.06337}, year={2024}, archivePrefix={arXiv}, eprint={2410.06337}, primaryClass={cs.DS} }
philip2024faster
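A brute-force check of the monopolarity definition quoted in the abstract above (arXiv:2410.06337) is sketched below as a minimal illustration; it enumerates all vertex bipartitions, is exponential in the number of vertices, and is unrelated to the paper's $O^{*}(1.3734^{n})$ algorithm. The two example graphs are our own.

```python
from itertools import combinations

def is_monopolar(n, edges):
    """Brute-force check: does V admit a partition (C, I) with I an independent
    set and G[C] a cluster graph (every connected component a clique)?
    Exponential in n -- for tiny illustrative graphs only."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for mask in range(1 << n):
        I = [v for v in range(n) if (mask >> v) & 1]
        C = {v for v in range(n) if not (mask >> v) & 1}
        if any(u in adj[v] for u, v in combinations(I, 2)):
            continue  # I is not independent
        # G[C] is a cluster graph iff it contains no induced P3, i.e. the two
        # endpoints of every edge inside C have identical neighbourhoods in C.
        if all(w not in adj[u] or (adj[u] & C) - {w} == (adj[w] & C) - {u}
               for u, w in combinations(sorted(C), 2)):
            return True
    return False

print(is_monopolar(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # 5-cycle: True
octahedron = [(u, v) for u, v in combinations(range(6), 2) if v - u != 3]
print(is_monopolar(6, octahedron))  # K_{2,2,2} (octahedron): False
```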
arxiv-667233
2410.06338
Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content?
<|reference_start|>Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content?: This paper investigates whether large language models (LLMs) are state-of-the-art quality estimators for machine translation of user-generated content (UGC) that contains emotional expressions, without the use of reference translations. To achieve this, we employ an existing emotion-related dataset with human-annotated errors and calculate quality evaluation scores based on the Multi-dimensional Quality Metrics. We compare the accuracy of several LLMs with that of our fine-tuned baseline models, under in-context learning and parameter-efficient fine-tuning (PEFT) scenarios. We find that PEFT of LLMs leads to better performance in score prediction with human interpretable explanations than fine-tuned models. However, a manual analysis of LLM outputs reveals that they still have problems such as refusal to reply to a prompt and unstable output while evaluating machine translation of UGC.<|reference_end|>
arxiv
@article{qian2024are, title={Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content?}, author={Shenbin Qian, Constantin Orăsan, Diptesh Kanojia, Félix do Carmo}, journal={arXiv preprint arXiv:2410.06338}, year={2024}, archivePrefix={arXiv}, eprint={2410.06338}, primaryClass={cs.CL} }
qian2024are
arxiv-667234
2410.06339
Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification
<|reference_start|>Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification: Deep Neural Network (DNN) based classifiers have recently been used for the modulation classification of RF signals. These classifiers have shown impressive performance gains relative to conventional methods, however, they are vulnerable to imperceptible (low-power) adversarial attacks. Some of the prominent defense approaches include adversarial training (AT) and randomized smoothing (RS). While AT increases robustness in general, it fails to provide resilience against previously unseen adaptive attacks. Other approaches, such as Randomized Smoothing (RS), which injects noise into the input, address this shortcoming by providing provable certified guarantees against arbitrary attacks, however, they tend to sacrifice accuracy. In this paper, we study the problem of designing robust DNN-based modulation classifiers that can provide provable defense against arbitrary attacks without significantly sacrificing accuracy. To this end, we first analyze the spectral content of commonly studied attacks on modulation classifiers for the benchmark RadioML dataset. We observe that spectral signatures of un-perturbed RF signals are highly localized, whereas attack signals tend to be spread out in frequency. To exploit this spectral heterogeneity, we propose Filtered Randomized Smoothing (FRS), a novel defense which combines spectral filtering together with randomized smoothing. FRS can be viewed as a strengthening of RS by leveraging the specificity (spectral Heterogeneity) inherent to the modulation classification problem. In addition to providing an approach to compute the certified accuracy of FRS, we also provide a comprehensive set of simulations on the RadioML dataset to show the effectiveness of FRS and show that it significantly outperforms existing defenses including AT and RS in terms of accuracy on both attacked and benign signals.<|reference_end|>
arxiv
@article{zhang2024filtered, title={Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification}, author={Wenhan Zhang, Meiyu Zhong, Ravi Tandon, Marwan Krunz}, journal={arXiv preprint arXiv:2410.06339}, year={2024}, archivePrefix={arXiv}, eprint={2410.06339}, primaryClass={cs.LG cs.CR cs.IT cs.NI eess.SP math.IT} }
zhang2024filtered
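A minimal sketch of the general recipe described in the Filtered Randomized Smoothing abstract above (spectral low-pass filtering followed by a randomized-smoothing majority vote). The classifier, filter cutoff, and noise level below are placeholders of our own, not the paper's implementation or its certification procedure.

```python
import numpy as np

def low_pass_filter(x, keep_fraction=0.2):
    """Keep only the lowest-frequency FFT components of a 1-D signal."""
    X = np.fft.fft(x)
    k = max(1, int(keep_fraction * len(x) / 2))
    mask = np.zeros(len(x), dtype=bool)
    mask[:k] = True    # DC and low positive frequencies
    mask[-k:] = True   # low negative frequencies
    return np.real(np.fft.ifft(np.where(mask, X, 0)))

def filtered_smoothed_predict(classify, x, sigma=0.1, n_samples=100, seed=0):
    """Spectral filtering + randomized smoothing: filter the (possibly attacked)
    input, then take a majority vote over Gaussian perturbations of it."""
    rng = np.random.default_rng(seed)
    x_f = low_pass_filter(x)
    votes = {}
    for _ in range(n_samples):
        label = classify(x_f + sigma * rng.standard_normal(x_f.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```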
arxiv-667235
2410.06340
FedGraph: A Research Library and Benchmark for Federated Graph Learning
<|reference_start|>FedGraph: A Research Library and Benchmark for Federated Graph Learning: Federated graph learning is an emerging field with significant practical challenges. While many algorithms have been proposed to enhance model accuracy, their system performance, crucial for real-world deployment, is often overlooked. To address this gap, we present FedGraph, a research library designed for practical distributed deployment and benchmarking in federated graph learning. FedGraph supports a range of state-of-the-art methods and includes profiling tools for system performance evaluation, focusing on communication and computation costs during training. FedGraph can then facilitate the development of practical applications and guide the design of future algorithms.<|reference_end|>
arxiv
@article{yao2024fedgraph:, title={FedGraph: A Research Library and Benchmark for Federated Graph Learning}, author={Yuhang Yao, Yuan Li, Xinyi Fan, Junhao Li, Kay Liu, Weizhao Jin, Srivatsan Ravi, Philip S. Yu, Carlee Joe-Wong}, journal={arXiv preprint arXiv:2410.06340}, year={2024}, archivePrefix={arXiv}, eprint={2410.06340}, primaryClass={cs.LG} }
yao2024fedgraph:
arxiv-667236
2410.06343
Losing Treewidth In The Presence Of Weights
<|reference_start|>Losing Treewidth In The Presence Of Weights: In the Weighted Treewidth-$\eta$ Deletion problem we are given a node-weighted graph $G$ and we look for a vertex subset $X$ of minimum weight such that the treewidth of $G-X$ is at most $\eta$. We show that Weighted Treewidth-$\eta$ Deletion admits a randomized polynomial-time constant-factor approximation algorithm for every fixed $\eta$. Our algorithm also works for the more general Weighted Planar $F$-M-Deletion problem. This work extends the results for unweighted graphs by [Fomin, Lokshtanov, Misra, Saurabh; FOCS '12] and answers a question posed by [Agrawal, Lokshtanov, Misra, Saurabh, Zehavi; APPROX/RANDOM '18] and [Kim, Lee, Thilikos; APPROX/RANDOM '21]. The presented algorithm is based on a novel technique of random sampling of so-called protrusions.<|reference_end|>
arxiv
@article{włodarczyk2024losing, title={Losing Treewidth In The Presence Of Weights}, author={Michał Włodarczyk}, journal={arXiv preprint arXiv:2410.06343}, year={2024}, archivePrefix={arXiv}, eprint={2410.06343}, primaryClass={cs.DS} }
włodarczyk2024losing
arxiv-667237
2410.06345
Work-in-Progress: Traded Control Transfer for Managing Real-Time Sensor Uncertainties in Autonomous Vehicle
<|reference_start|>Work-in-Progress: Traded Control Transfer for Managing Real-Time Sensor Uncertainties in Autonomous Vehicle: At Levels 2 and 3 of autonomous driving defined by the Society of Automotive Engineers, drivers must take on certain driving responsibilities, and automated driving must sometimes yield to human control. This situation can occur in real time due to uncertainties in sensor measurements caused by environmental factors like fog or smoke. To address this challenge, we propose a method to manage real-time sensor uncertainties in autonomous vehicles by monitoring sensor conflicts and dynamically adjusting control authority to maintain safe operation. However, to achieve this, we have introduced a novel metric called the Degree of Conflicts (DoC), which quantifies the conflict between real-time sensor data by measuring the differences between data from multiple sensors. Our approach aims to demonstrate the importance of selecting an appropriate DoC threshold for transferring control between the automation agent and the human driver. The results have shown that choosing the correct DoC threshold can enhance safety by promptly handing over the driving control from the automation system to the human driver in challenging conditions.<|reference_end|>
arxiv
@article{sourav2024work-in-progress:, title={Work-in-Progress: Traded Control Transfer for Managing Real-Time Sensor Uncertainties in Autonomous Vehicle}, author={Md Sakib Galib Sourav (1), Liang Cheng (1) ((1) University of Toledo, Toledo, USA)}, journal={arXiv preprint arXiv:2410.06345}, year={2024}, archivePrefix={arXiv}, eprint={2410.06345}, primaryClass={eess.SY cs.SY} }
sourav2024work-in-progress:
arxiv-667238
2410.06347
Solving Multi-Goal Robotic Tasks with Decision Transformer
<|reference_start|>Solving Multi-Goal Robotic Tasks with Decision Transformer: Artificial intelligence plays a crucial role in robotics, with reinforcement learning (RL) emerging as one of the most promising approaches for robot control. However, several key challenges hinder its broader application. First, many RL methods rely on online learning, which requires either real-world hardware or advanced simulation environments--both of which can be costly, time-consuming, and impractical. Offline reinforcement learning offers a solution, enabling models to be trained without ongoing access to physical robots or simulations. A second challenge is learning multi-goal tasks, where robots must achieve multiple objectives simultaneously. This adds complexity to the training process, as the model must generalize across different goals. At the same time, transformer architectures have gained significant popularity across various domains, including reinforcement learning. Yet, no existing methods effectively combine offline training, multi-goal learning, and transformer-based architectures. In this paper, we address these challenges by introducing a novel adaptation of the decision transformer architecture for offline multi-goal reinforcement learning in robotics. Our approach integrates goal-specific information into the decision transformer, allowing it to handle complex tasks in an offline setting. To validate our method, we developed a new offline reinforcement learning dataset using the Panda robotic platform in simulation. Our extensive experiments demonstrate that the decision transformer can outperform state-of-the-art online reinforcement learning methods.<|reference_end|>
arxiv
@article{gajewski2024solving, title={Solving Multi-Goal Robotic Tasks with Decision Transformer}, author={Paul Gajewski, Dominik Żurek, Marcin Pietroń, Kamil Faber}, journal={arXiv preprint arXiv:2410.06347}, year={2024}, archivePrefix={arXiv}, eprint={2410.06347}, primaryClass={cs.RO cs.AI} }
gajewski2024solving
arxiv-667239
2410.06348
Harnessing the Power of Noise: A Survey of Techniques and Applications
<|reference_start|>Harnessing the Power of Noise: A Survey of Techniques and Applications: Noise, traditionally considered a nuisance in computational systems, is reconsidered for its unexpected and counter-intuitive benefits across a wide spectrum of domains, including nonlinear information processing, signal processing, image processing, machine learning, network science, and natural language processing. Through a comprehensive review of both historical and contemporary research, this survey presents a dual perspective on noise, acknowledging its potential to both disrupt and enhance performance. Particularly, we highlight how noise-enhanced training strategies can lead to models that better generalize from noisy data, positioning noise not just as a challenge to overcome but as a strategic tool for improvement. This work calls for a shift in how we perceive noise, proposing that it can be a spark for innovation and advancement in the information era.<|reference_end|>
arxiv
@article{abdolazimi2024harnessing, title={Harnessing the Power of Noise: A Survey of Techniques and Applications}, author={Reyhaneh Abdolazimi, Shengmin Jin, Pramod K. Varshney, Reza Zafarani}, journal={arXiv preprint arXiv:2410.06348}, year={2024}, archivePrefix={arXiv}, eprint={2410.06348}, primaryClass={cs.LG} }
abdolazimi2024harnessing
arxiv-667240
2410.06349
Robust Domain Generalisation with Causal Invariant Bayesian Neural Networks
<|reference_start|>Robust Domain Generalisation with Causal Invariant Bayesian Neural Networks: Deep neural networks can obtain impressive performance on various tasks under the assumption that their training domain is identical to their target domain. Performance can drop dramatically when this assumption does not hold. One explanation for this discrepancy is the presence of spurious domain-specific correlations in the training data that the network exploits. Causal mechanisms, on the other hand, can be made invariant under distribution changes as they allow disentangling the factors of distribution underlying the data generation. Yet, learning causal mechanisms to improve out-of-distribution generalisation remains an under-explored area. We propose a Bayesian neural architecture that disentangles the learning of the data distribution from the inference process mechanisms. We show theoretically and experimentally that our model approximates reasoning under causal interventions. We demonstrate the performance of our method, outperforming point-estimate counterparts, on out-of-distribution image recognition tasks where the data distribution acts as strong adversarial confounders.<|reference_end|>
arxiv
@article{gendron2024robust, title={Robust Domain Generalisation with Causal Invariant Bayesian Neural Networks}, author={Gaël Gendron, Michael Witbrock, Gillian Dobbie}, journal={arXiv preprint arXiv:2410.06349}, year={2024}, archivePrefix={arXiv}, eprint={2410.06349}, primaryClass={cs.LG stat.ME} }
gendron2024robust
arxiv-667241
2410.06351
Moving Faster and Reducing Risk: Using LLMs in Release Deployment
<|reference_start|>Moving Faster and Reducing Risk: Using LLMs in Release Deployment: Release engineering has traditionally focused on continuously delivering features and bug fixes to users, but at a certain scale, it becomes impossible for a release engineering team to determine what should be released. At Meta's scale, the responsibility appropriately and necessarily falls back on the engineer writing and reviewing the code. To address this challenge, we developed models of diff risk scores (DRS) to determine how likely a diff is to cause a SEV, i.e., a severe fault that impacts end-users. Assuming that SEVs are only caused by diffs, a naive model could randomly gate X% of diffs from landing, which would automatically catch X% of SEVs on average. However, we aimed to build a model that can capture Y% of SEVs by gating X% of diffs, where Y >> X. By training the model on historical data on diffs that have caused SEVs in the past, we can predict the riskiness of an outgoing diff to cause a SEV. Diffs that are beyond a particular threshold of risk can then be gated. We have four types of gating: no gating (green), weekend gating (weekend), medium impact on end-users (yellow), and high impact on end-users (red). The input parameter for our models is the level of gating, and the outcome measure is the number of captured SEVs. Our research approaches include a logistic regression model, a BERT-based model, and generative LLMs. Our baseline regression model captures 18.7%, 27.9%, and 84.6% of SEVs while respectively gating the top 5% (weekend), 10% (yellow), and 50% (red) of risky diffs. The BERT-based model, StarBERT, only captures 0.61x, 0.85x, and 0.81x as many SEVs as the logistic regression for the weekend, yellow, and red gating zones, respectively. The generative LLMs, iCodeLlama-34B and iDiffLlama-13B, when risk-aligned, capture more SEVs than the logistic regression model in production: 1.40x, 1.52x, 1.05x, respectively.<|reference_end|>
arxiv
@article{abreu2024moving, title={Moving Faster and Reducing Risk: Using LLMs in Release Deployment}, author={Rui Abreu, Vijayaraghavan Murali, Peter C Rigby, Chandra Maddila, Weiyan Sun, Jun Ge, Kaavya Chinniah, Audris Mockus, Megh Mehta, Nachiappan Nagappan}, journal={arXiv preprint arXiv:2410.06351}, year={2024}, archivePrefix={arXiv}, eprint={2410.06351}, primaryClass={cs.SE} }
abreu2024moving
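The gating evaluation described above boils down to asking what fraction of SEV-causing diffs fall in the top X% of predicted risk. The sketch below illustrates that metric only, with synthetic stand-in scores; it is not Meta's data, models, or gating pipeline.

```python
import numpy as np

def sev_capture_rate(risk_scores, caused_sev, gate_fraction):
    """Share of SEV-causing diffs whose risk score lies in the top
    `gate_fraction` of all diffs, i.e. the SEVs that gating would catch."""
    risk_scores = np.asarray(risk_scores, dtype=float)
    caused_sev = np.asarray(caused_sev, dtype=bool)
    threshold = np.quantile(risk_scores, 1.0 - gate_fraction)
    gated = risk_scores >= threshold
    return float((gated & caused_sev).sum() / max(caused_sev.sum(), 1))

# Synthetic example: riskier diffs are somewhat more likely to cause a SEV.
rng = np.random.default_rng(0)
scores = rng.random(1000)
sevs = rng.random(1000) < 0.02 + 0.06 * scores
print(sev_capture_rate(scores, sevs, gate_fraction=0.10))
```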
arxiv-667242
2410.06352
Tree-Based Leakage Inspection and Control in Concept Bottleneck Models
<|reference_start|>Tree-Based Leakage Inspection and Control in Concept Bottleneck Models: As AI models grow larger, the demand for accountability and interpretability has become increasingly critical for understanding their decision-making processes. Concept Bottleneck Models (CBMs) have gained attention for enhancing interpretability by mapping inputs to intermediate concepts before making final predictions. However, CBMs often suffer from information leakage, where additional input data, not captured by the concepts, is used to improve task performance, complicating the interpretation of downstream predictions. In this paper, we introduce a novel approach for training both joint and sequential CBMs that allows us to identify and control leakage using decision trees. Our method quantifies leakage by comparing the decision paths of hard CBMs with their soft, leaky counterparts. Specifically, we show that soft leaky CBMs extend the decision paths of hard CBMs, particularly in cases where concept information is incomplete. Using this insight, we develop a technique to better inspect and manage leakage, isolating the subsets of data most affected by this. Through synthetic and real-world experiments, we demonstrate that controlling leakage in this way not only improves task accuracy but also yields more informative and transparent explanations.<|reference_end|>
arxiv
@article{ragkousis2024tree-based, title={Tree-Based Leakage Inspection and Control in Concept Bottleneck Models}, author={Angelos Ragkousis, Sonali Parbhoo}, journal={arXiv preprint arXiv:2410.06352}, year={2024}, archivePrefix={arXiv}, eprint={2410.06352}, primaryClass={cs.LG} }
ragkousis2024tree-based
arxiv-667243
2410.06353
Language-Assisted Human Part Motion Learning for Skeleton-Based Temporal Action Segmentation
<|reference_start|>Language-Assisted Human Part Motion Learning for Skeleton-Based Temporal Action Segmentation: Skeleton-based Temporal Action Segmentation involves the dense action classification of variable-length skeleton sequences. Current approaches primarily apply graph-based networks to extract framewise, whole-body-level motion representations, and use one-hot encoded labels for model optimization. However, whole-body motion representations do not capture fine-grained part-level motion representations and the one-hot encoded labels neglect the intrinsic semantic relationships within the language-based action definitions. To address these limitations, we propose a novel method named Language-assisted Human Part Motion Representation Learning (LPL), which contains a Disentangled Part Motion Encoder (DPE) to extract dual-level (i.e., part and whole-body) motion representations and a Language-assisted Distribution Alignment (LDA) strategy for optimizing spatial relations within representations. Specifically, after part-aware skeleton encoding via DPE, LDA generates dual-level action descriptions to construct a textual embedding space with the help of a large-scale language model. Then, LDA motivates the alignment of the embedding space between text descriptions and motions. This alignment allows LDA not only to enhance intra-class compactness but also to transfer the language-encoded semantic correlations among actions to skeleton-based motion learning. Moreover, we propose a simple yet efficient Semantic Offset Adapter to smooth the cross-domain misalignment. Our experiments indicate that LPL achieves state-of-the-art performance across various datasets (e.g., +4.4\% Accuracy, +5.6\% F1 on the PKU-MMD dataset). Moreover, LDA is compatible with existing methods and improves their performance (e.g., +4.8\% Accuracy, +4.3\% F1 on the LARa dataset) without additional inference costs.<|reference_end|>
arxiv
@article{chen2024language-assisted, title={Language-Assisted Human Part Motion Learning for Skeleton-Based Temporal Action Segmentation}, author={Bowen Chen, Haoyu Ji, Zhiyong Wang, Benjamin Filtjens, Chunzhuo Wang, Weihong Ren, Bart Vanrumste, Honghai Liu}, journal={arXiv preprint arXiv:2410.06353}, year={2024}, archivePrefix={arXiv}, eprint={2410.06353}, primaryClass={cs.CV} }
chen2024language-assisted
arxiv-667244
2410.06355
Context-Aware Command Understanding for Tabletop Scenarios
<|reference_start|>Context-Aware Command Understanding for Tabletop Scenarios: This paper presents a novel hybrid algorithm designed to interpret natural human commands in tabletop scenarios. By integrating multiple sources of information, including speech, gestures, and scene context, the system extracts actionable instructions for a robot, identifying relevant objects and actions. The system operates in a zero-shot fashion, without reliance on predefined object models, enabling flexible and adaptive use in various environments. We assess the integration of multiple deep learning models, evaluating their suitability for deployment in real-world robotic setups. Our algorithm performs robustly across different tasks, combining language processing with visual grounding. In addition, we release a small dataset of video recordings used to evaluate the system. This dataset captures real-world interactions in which a human provides instructions in natural language to a robot, a contribution to future research on human-robot interaction. We discuss the strengths and limitations of the system, with particular focus on how it handles multimodal command interpretation, and its ability to be integrated into symbolic robotic frameworks for safe and explainable decision-making.<|reference_end|>
arxiv
@article{gajewski2024context-aware, title={Context-Aware Command Understanding for Tabletop Scenarios}, author={Paul Gajewski, Antonio Galiza Cerdeira Gonzalez, Bipin Indurkhya}, journal={arXiv preprint arXiv:2410.06355}, year={2024}, archivePrefix={arXiv}, eprint={2410.06355}, primaryClass={cs.RO cs.AI} }
gajewski2024context-aware
arxiv-667245
2410.06361
Hierarchy of chaotic dynamics in random modular networks
<|reference_start|>Hierarchy of chaotic dynamics in random modular networks: We introduce a model of randomly connected neural populations and study its dynamics by means of the dynamical mean-field theory and simulations. Our analysis uncovers a rich phase diagram, featuring high- and low-dimensional chaotic phases, separated by a crossover region characterized by low values of the maximal Lyapunov exponent and participation ratio dimension, but with high and rapidly changing values of the Lyapunov dimension. Counterintuitively, chaos can be attenuated by either adding noise to strongly modular connectivity or by introducing modularity into random connectivity. Extending the model to include a multilevel, hierarchical connectivity reveals that a loose balance between activities across levels drives the system towards the edge of chaos.<|reference_end|>
arxiv
@article{kuśmierz2024hierarchy, title={Hierarchy of chaotic dynamics in random modular networks}, author={Łukasz Kuśmierz, Ulises Pereira-Obilinovic, Zhixin Lu, Dana Mastrovito, Stefan Mihalas}, journal={arXiv preprint arXiv:2410.06361}, year={2024}, archivePrefix={arXiv}, eprint={2410.06361}, primaryClass={physics.bio-ph cond-mat.dis-nn cs.NE nlin.CD q-bio.NC} }
kuśmierz2024hierarchy
arxiv-667246
2410.06362
Long-time stable SAV-BDF2 numerical schemes for the forced Navier-Stokes equations
<|reference_start|>Long-time stable SAV-BDF2 numerical schemes for the forced Navier-Stokes equations: We propose a novel second-order accurate, long-time unconditionally stable time-marching scheme for the forced Navier-Stokes equations. A new Forced Scalar Auxiliary Variable approach (FSAV) is introduced to preserve the underlying dissipative structure of the forced system that yields a uniform-in-time estimate of the numerical solution. In addition, the numerical scheme is autonomous if the underlying model is, laying the foundation for studying long-time dynamics of the numerical solution via dynamical system approach. As an example we apply the new algorithm to the two-dimensional incompressible Navier-Stokes equations. In the case with no-penetration and free-slip boundary condition on a simply connected domain, we are also able to derive a uniform-in-time estimate of the vorticity in $H^1$ norm in addition to the $L^2$ norm guaranteed by the general framework. Numerical results demonstrate superior performance of the new algorithm in terms of accuracy, efficiency, stability and robustness.<|reference_end|>
arxiv
@article{han2024long-time, title={Long-time stable SAV-BDF2 numerical schemes for the forced Navier-Stokes equations}, author={Daozhi Han and Xiaoming Wang}, journal={arXiv preprint arXiv:2410.06362}, year={2024}, archivePrefix={arXiv}, eprint={2410.06362}, primaryClass={math.NA cs.NA} }
han2024long-time
arxiv-667247
2410.06364
SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching
<|reference_start|>SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching: Compressive adaptation approaches, such as QLoRA, are widely popular alternatives for reducing memory requirements during fine-tuning of large language models (LLMs) while producing models capable of handling various downstream tasks. The key idea is to employ a "two-tower" architecture: compressing pre-trained LLM parameters into compact representations and fine-tuning the additive full-precision adapter, which typically has few tunable parameters in low-rank format. However, the strict algebraic assumptions, such as low-rank assumption, and the complexity of composing two-tower architectures are some of the known shortcomings, resulting in a poor accuracy-efficiency trade-off. In response to these known limitations, we propose SpaLLM (Sketched Parameter Adaptation of LLMs), a novel compressive adaptation approach for LLMs. This method is also the first to illustrate parameter-sharing compression methods for LLM fine-tuning, which, unlike QLoRA, are free from strict low-rank algebraic assumptions on adapters. Furthermore, our proposal unifies model compression and adaptation into a single, streamlined process, eliminating the need for two-tower architectures. SpaLLM sketches pre-trained LLM weights into lookup tables and directly fine-tunes the values in these tables. This approach simplifies LLMs' compressive adaptation workflow, potentially improves multi-user serving efficiency, and delivers significantly better accuracy for both natural language understanding and generation tasks. Moreover, by avoiding the "two-tower" architecture, our framework only requires one compressed matrix multiplication per layer during inference, demonstrating superior inference efficiency compared to previous methods.<|reference_end|>
arxiv
@article{zhang2024spallm:, title={SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching}, author={Tianyi Zhang, Junda Su, Oscar Wu, Zhaozhuo Xu, Anshumali Shrivastava}, journal={arXiv preprint arXiv:2410.06364}, year={2024}, archivePrefix={arXiv}, eprint={2410.06364}, primaryClass={cs.LG} }
zhang2024spallm:
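Below is a generic parameter-sharing sketch in the spirit of "sketching pre-trained weights into lookup tables". The random hashing, table size, and mean initialisation are our own assumptions; the abstract does not specify SpaLLM's actual sketching scheme.

```python
import numpy as np

def sketch_weights(W, table_size, seed=0):
    """Map each weight position to a slot of a small lookup table (random hashing)
    and initialise every slot with the mean of the weights mapped to it.
    Fine-tuning would then update the table entries rather than W itself."""
    rng = np.random.default_rng(seed)
    assignment = rng.integers(0, table_size, size=W.size)
    table = np.zeros(table_size)
    counts = np.zeros(table_size)
    np.add.at(table, assignment, W.ravel())
    np.add.at(counts, assignment, 1.0)
    return table / np.maximum(counts, 1.0), assignment

def reconstruct(table, assignment, shape):
    """Rebuild a (lossy) weight matrix from the shared lookup table."""
    return table[assignment].reshape(shape)

W = np.random.default_rng(1).standard_normal((256, 256))
table, assignment = sketch_weights(W, table_size=4096)
print(np.mean((W - reconstruct(table, assignment, W.shape)) ** 2))  # sketch error
```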
arxiv-667248
2410.06365
Network-level ISAC: Performance Analysis and Optimal Antenna-to-BS Allocation
<|reference_start|>Network-level ISAC: Performance Analysis and Optimal Antenna-to-BS Allocation: A cooperative architecture is proposed for integrated sensing and communication (ISAC) networks, incorporating coordinated multi-point (CoMP) transmission along with multi-static sensing. We investigate how the allocation of antennas-to-base stations (BSs) affects cooperative sensing and cooperative communication performance. More explicitly, we balance the benefits of geographically concentrated antennas, which enhance beamforming and coherent processing, against those of geographically distributed antennas, which improve diversity and reduce service distances. Regarding sensing performance, we investigate three localization methods: angle-of-arrival (AOA)-based, time-of-flight (TOF)-based, and a hybrid approach combining both AOA and TOF measurements, for critically appraising their effects on ISAC network performance. Our analysis shows that in networks having $N$ ISAC nodes following a Poisson point process, the localization accuracy of TOF-based methods follows a $\ln^2 N$ scaling law (explicitly, the Cramér-Rao lower bound (CRLB) reduces with $\ln^2 N$). The AOA-based methods follow a $\ln N$ scaling law, while the hybrid methods scale as $a\ln^2 N + b\ln N$, where $a$ and $b$ represent parameters related to TOF and AOA measurements, respectively. The difference between these scaling laws arises from the distinct ways in which measurement results are converted into the target location. In terms of communication performance, we derive a tractable expression for the communication data rate, considering various cooperative region sizes and antenna-to-BS allocation strategy. It is proved that higher path loss exponents favor distributed antenna allocation to reduce access distances, while lower exponents favor centralized antenna allocation to maximize beamforming gain.<|reference_end|>
arxiv
@article{meng2024network-level, title={Network-level ISAC: Performance Analysis and Optimal Antenna-to-BS Allocation}, author={Kaitao Meng and Kawon Han and Christos Masouros and Lajos Hanzo}, journal={arXiv preprint arXiv:2410.06365}, year={2024}, archivePrefix={arXiv}, eprint={2410.06365}, primaryClass={cs.IT eess.SP math.IT} }
meng2024network-level
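For quick reference, one plausible way to write the scaling statements from the abstract above as CRLB proportionalities is given below; this is our paraphrase of the prose (with $a$, $b$ the TOF- and AOA-related constants mentioned there), not a formula quoted from the paper.

```latex
\mathrm{CRLB}_{\mathrm{TOF}} \propto \frac{1}{\ln^{2} N}, \qquad
\mathrm{CRLB}_{\mathrm{AOA}} \propto \frac{1}{\ln N}, \qquad
\mathrm{CRLB}_{\mathrm{hybrid}} \propto \frac{1}{a\,\ln^{2} N + b\,\ln N}.
```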
arxiv-667249
2410.06366
Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling
<|reference_start|>Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling: Learning complex physical dynamics purely from data is challenging due to the intrinsic properties of systems to be satisfied. Incorporating physics-informed priors, such as in Hamiltonian Neural Networks (HNNs), achieves high-precision modeling for energy-conservative systems. However, real-world systems often deviate from strict energy conservation and follow different physical priors. To address this, we present a framework that achieves high-precision modeling for a wide range of dynamical systems from the numerical aspect, by enforcing Time-Reversal Symmetry (TRS) via a novel regularization term. It helps preserve energies for conservative systems while serving as a strong inductive bias for non-conservative, reversible systems. While TRS is a domain-specific physical prior, we present the first theoretical proof that TRS loss can universally improve modeling accuracy by minimizing higher-order Taylor terms in ODE integration, which is numerically beneficial to various systems regardless of their properties, even for irreversible systems. By integrating the TRS loss within neural ordinary differential equation models, the proposed model TREAT demonstrates superior performance on diverse physical systems. It achieves a significant 11.5% MSE improvement in a challenging chaotic triple-pendulum scenario, underscoring TREAT's broad applicability and effectiveness.<|reference_end|>
arxiv
@article{huang2024physics-informed, title={Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling}, author={Zijie Huang, Wanjia Zhao, Jingdong Gao, Ziniu Hu, Xiao Luo, Yadi Cao, Yuanzhou Chen, Yizhou Sun, Wei Wang}, journal={arXiv preprint arXiv:2410.06366}, year={2024}, archivePrefix={arXiv}, eprint={2410.06366}, primaryClass={cs.LG cs.AI} }
huang2024physics-informed
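A generic forward-backward integration penalty in the spirit of a time-reversal-symmetry regulariser is sketched below. The dynamics function, integrator, and exact loss form are our assumptions and may differ from the paper's TRS term; the sketch is only meant to make the idea concrete.

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical Runge-Kutta 4 step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def trs_penalty(f, x0, dt=0.01, steps=200):
    """Integrate forward in time, then backward, and penalise failure to return
    to the initial state -- a simple reversibility (TRS-style) regulariser."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for _ in range(steps):
        x = rk4_step(f, x, dt)
    for _ in range(steps):
        x = rk4_step(f, x, -dt)
    return float(np.sum((x - x0) ** 2))

# A frictionless pendulum (time-reversible) incurs only a tiny numerical penalty.
pendulum = lambda s: np.array([s[1], -np.sin(s[0])])
print(trs_penalty(pendulum, [1.0, 0.0]))
```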
arxiv-667250
2410.06369
Communication-Efficient Federated Group Distributionally Robust Optimization
<|reference_start|>Communication-Efficient Federated Group Distributionally Robust Optimization: Federated learning faces challenges due to the heterogeneity in data volumes and distributions at different clients, which can compromise model generalization ability to various distributions. Existing approaches to address this issue based on group distributionally robust optimization (GDRO) often lead to high communication and sample complexity. To this end, this work introduces algorithms tailored for communication-efficient Federated Group Distributionally Robust Optimization (FGDRO). Our contributions are threefold: Firstly, we introduce the FGDRO-CVaR algorithm, which optimizes the average top-K losses while reducing communication complexity to $O(1/\epsilon^4)$, where $\epsilon$ denotes the desired precision level. Secondly, our FGDRO-KL algorithm is crafted to optimize KL regularized FGDRO, cutting communication complexity to $O(1/\epsilon^3)$. Lastly, we propose FGDRO-KL-Adam to utilize Adam-type local updates in FGDRO-KL, which not only maintains a communication cost of $O(1/\epsilon^3)$ but also shows potential to surpass SGD-type local steps in practical applications. The effectiveness of our algorithms has been demonstrated on a variety of real-world tasks, including natural language processing and computer vision.<|reference_end|>
arxiv
@article{guo2024communication-efficient, title={Communication-Efficient Federated Group Distributionally Robust Optimization}, author={Zhishuai Guo, Tianbao Yang}, journal={arXiv preprint arXiv:2410.06369}, year={2024}, archivePrefix={arXiv}, eprint={2410.06369}, primaryClass={cs.LG cs.DC stat.ML} }
guo2024communication-efficient
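As a pointer to what "optimizing the average top-K losses" refers to, here is a tiny illustration of the CVaR-style objective over per-group losses; it shows the objective only, not the federated algorithm or its communication scheme.

```python
import numpy as np

def average_top_k_loss(group_losses, k):
    """CVaR-style group objective: the mean of the k largest per-group losses."""
    losses = np.sort(np.asarray(group_losses, dtype=float))
    return float(losses[-k:].mean())

print(average_top_k_loss([0.2, 1.5, 0.7, 3.0, 0.1], k=2))  # (3.0 + 1.5) / 2 = 2.25
```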
arxiv-667251
2410.06370
HumVI: A Multilingual Dataset for Detecting Violent Incidents Impacting Humanitarian Aid
<|reference_start|>HumVI: A Multilingual Dataset for Detecting Violent Incidents Impacting Humanitarian Aid: Humanitarian organizations can enhance their effectiveness by analyzing data to discover trends, gather aggregated insights, manage their security risks, support decision-making, and inform advocacy and funding proposals. However, data about violent incidents with direct impact and relevance for humanitarian aid operations is not readily available. An automatic data collection and NLP-backed classification framework aligned with humanitarian perspectives can help bridge this gap. In this paper, we present HumVI - a dataset comprising news articles in three languages (English, French, Arabic) containing instances of different types of violent incidents categorized by the humanitarian sector they impact, e.g., aid security, education, food security, health, and protection. Reliable labels were obtained for the dataset by partnering with a data-backed humanitarian organization, Insecurity Insight. We provide multiple benchmarks for the dataset, employing various deep learning architectures and techniques, including data augmentation and mask loss, to address different task-related challenges, e.g., domain expansion. The dataset is publicly available at https://github.com/dataminr-ai/humvi-dataset.<|reference_end|>
arxiv
@article{lamba2024humvi:, title={HumVI: A Multilingual Dataset for Detecting Violent Incidents Impacting Humanitarian Aid}, author={Hemank Lamba, Anton Abilov, Ke Zhang, Elizabeth M. Olson, Henry K. Dambanemuya, João C. Bárcia, David S. Batista, Christina Wille, Aoife Cahill, Joel Tetreault, Alex Jaimes}, journal={arXiv preprint arXiv:2410.06370}, year={2024}, archivePrefix={arXiv}, eprint={2410.06370}, primaryClass={cs.CL cs.AI cs.LG cs.SI} }
lamba2024humvi:
arxiv-667252
2410.06371
Improved Estimation of Ranks for Learning Item Recommenders with Negative Sampling
<|reference_start|>Improved Estimation of Ranks for Learning Item Recommenders with Negative Sampling: In recommendation systems, there has been a growth in the number of recommendable items (# of movies, music, products). When the set of recommendable items is large, training and evaluation of item recommendation models becomes computationally expensive. To lower this cost, it has become common to sample negative items. However, the recommendation quality can suffer from biases introduced by traditional negative sampling mechanisms. In this work, we demonstrate the benefits from correcting the bias introduced by sampling of negatives. We first provide sampled batch versions of the well-studied WARP and LambdaRank methods. Then, we present how these methods can benefit from improved ranking estimates. Finally, we evaluate the recommendation quality as a result of correcting rank estimates and demonstrate that WARP and LambdaRank can be learned efficiently with negative sampling and our proposed correction technique.<|reference_end|>
arxiv
@article{subbiah2024improved, title={Improved Estimation of Ranks for Learning Item Recommenders with Negative Sampling}, author={Anushya Subbiah, Steffen Rendle, Vikram Aggarwal}, journal={arXiv preprint arXiv:2410.06371}, year={2024}, doi={10.1145/3627673.3679943}, archivePrefix={arXiv}, eprint={2410.06371}, primaryClass={cs.IR} }
subbiah2024improved
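For context on the rank estimates being corrected, the sketch below shows the classical WARP-style sampled estimator (rank roughly (N - 1) // trials, where trials is the number of negatives drawn before a margin violation). The abstract does not spell out the paper's improved correction, so only this standard, biased baseline is illustrated, with synthetic scores.

```python
import numpy as np

def warp_rank_estimate(score_pos, all_scores, max_negatives, margin=1.0, seed=0):
    """Classical sampled rank estimate: draw negatives until one violates the
    margin; rank ~ (N - 1) // trials. This is the biased baseline that improved
    rank estimators aim to correct."""
    rng = np.random.default_rng(seed)
    n_items = len(all_scores)
    for trials in range(1, max_negatives + 1):
        neg = all_scores[rng.integers(0, n_items)]
        if neg + margin > score_pos:
            return (n_items - 1) // trials
    return 0  # no violating negative found: the positive item is likely near the top

scores = np.random.default_rng(1).standard_normal(10_000)
print(warp_rank_estimate(score_pos=scores.max(), all_scores=scores, max_negatives=200))
```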
arxiv-667253
2410.06372
Cooperative and Asynchronous Transformer-based Mission Planning for Heterogeneous Teams of Mobile Robots
<|reference_start|>Cooperative and Asynchronous Transformer-based Mission Planning for Heterogeneous Teams of Mobile Robots: Coordinating heterogeneous teams of mobile robots for tasks such as search and rescue is highly challenging. This is due to the complexities of perception, decision making and planning in such environments, with agents' non-synchronous operation, constrained communication, and limited computational resources. This paper presents the Cooperative and Asynchronous Transformer-based Mission Planning (CATMiP) framework, which leverages multi-agent reinforcement learning (MARL) to effectively coordinate agents with heterogeneous sensing, motion, and actuation capabilities. The framework introduces a Class-based Macro-Action Decentralized Partially Observable Markov Decision Process (CMD-POMDP) model to handle asynchronous decision-making among different agent classes via macro-actions. It also extends the Multi-Agent Transformer (MAT) architecture to facilitate distributed, ad hoc communication among the agents. CATMiP easily adapts to mission complexities and communication constraints, and scales to varying environment sizes and team compositions. Simulations demonstrate its scalability and ability to achieve cooperative mission objectives with two classes of explorer and rescuer agents, even under severe communication constraints. The code is available at https://github.com/mylad13/CATMiP.<|reference_end|>
arxiv
@article{farjadnasab2024cooperative, title={Cooperative and Asynchronous Transformer-based Mission Planning for Heterogeneous Teams of Mobile Robots}, author={Milad Farjadnasab, Shahin Sirouspour}, journal={arXiv preprint arXiv:2410.06372}, year={2024}, archivePrefix={arXiv}, eprint={2410.06372}, primaryClass={cs.RO cs.AI} }
farjadnasab2024cooperative
arxiv-667254
2410.06373
Unveiling the Backbone-Optimizer Coupling Bias in Visual Representation Learning
<|reference_start|>Unveiling the Backbone-Optimizer Coupling Bias in Visual Representation Learning: This paper delves into the interplay between vision backbones and optimizers, unveiling an inter-dependent phenomenon termed \textit{\textbf{b}ackbone-\textbf{o}ptimizer \textbf{c}oupling \textbf{b}ias} (BOCB). We observe that canonical CNNs, such as VGG and ResNet, exhibit a marked co-dependency with SGD families, while recent architectures like ViTs and ConvNeXt share a tight coupling with the adaptive learning rate ones. We further show that BOCB can be introduced by both optimizers and certain backbone designs and may significantly impact the pre-training and downstream fine-tuning of vision models. Through in-depth empirical analysis, we summarize takeaways on recommended optimizers and insights into robust vision backbone architectures. We hope this work can inspire the community to question long-held assumptions on backbones and optimizers, stimulate further explorations, and thereby contribute to more robust vision systems. The source code and models are publicly available at https://bocb-ai.github.io/.<|reference_end|>
arxiv
@article{li2024unveiling, title={Unveiling the Backbone-Optimizer Coupling Bias in Visual Representation Learning}, author={Siyuan Li, Juanxi Tian, Zedong Wang, Luyuan Zhang, Zicheng Liu, Weiyang Jin, Yang Liu, Baigui Sun, Stan Z. Li}, journal={arXiv preprint arXiv:2410.06373}, year={2024}, archivePrefix={arXiv}, eprint={2410.06373}, primaryClass={cs.CV cs.LG} }
li2024unveiling
arxiv-667255
2410.06376
Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees
<|reference_start|>Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees: The problem of determining the configuration of points from partial distance information, known as the Euclidean Distance Geometry (EDG) problem, is fundamental to many tasks in the applied sciences. In this paper, we propose two algorithms grounded in the Riemannian optimization framework to address the EDG problem. Our approach formulates the problem as a low-rank matrix completion task over the Gram matrix, using partial measurements represented as expansion coefficients of the Gram matrix in a non-orthogonal basis. For the first algorithm, under a uniform sampling with replacement model for the observed distance entries, we demonstrate that, with high probability, a Riemannian gradient-like algorithm on the manifold of rank-$r$ matrices converges linearly to the true solution, given initialization via a one-step hard thresholding. This holds provided the number of samples, $m$, satisfies $m \geq \mathcal{O}(n^{7/4}r^2 \log(n))$. With a more refined initialization, achieved through resampled Riemannian gradient-like descent, we further improve this bound to $m \geq \mathcal{O}(nr^2 \log(n))$. Our analysis for the first algorithm leverages a non-self-adjoint operator and depends on deriving eigenvalue bounds for an inner product matrix of restricted basis matrices, leveraging sparsity properties for tighter guarantees than previously established. The second algorithm introduces a self-adjoint surrogate for the sampling operator. This algorithm demonstrates strong numerical performance on both synthetic and real data. Furthermore, we show that optimizing over manifolds of higher-than-rank-$r$ matrices yields superior numerical results, consistent with recent literature on overparameterization in the EDG problem.<|reference_end|>
arxiv
@article{smith2024riemannian, title={Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees}, author={Chandler Smith, HanQin Cai, Abiy Tasissa}, journal={arXiv preprint arXiv:2410.06376}, year={2024}, archivePrefix={arXiv}, eprint={2410.06376}, primaryClass={math.OC cs.LG} }
smith2024riemannian
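As background for the Gram-matrix formulation above: with fully observed distances, the configuration follows from classical multidimensional scaling, where squared distances determine the Gram matrix of the centred points and its top-$r$ eigenpairs give the points up to rigid motion. The sketch below covers only this complete-data baseline; the paper's contribution is the partially observed, Riemannian-optimisation setting.

```python
import numpy as np

def points_from_squared_distances(D, r):
    """Classical MDS: recover an r-dimensional configuration (up to rotation,
    translation and reflection) from a complete matrix of squared distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    G = -0.5 * J @ D @ J                  # Gram matrix of the centred points
    w, V = np.linalg.eigh(G)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:r]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Sanity check: the recovered points reproduce the original distances.
X = np.random.default_rng(0).standard_normal((8, 2))
D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
Y = points_from_squared_distances(D, r=2)
print(np.allclose(D, np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)))  # True
```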
arxiv-667256
2410.06378
Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression
<|reference_start|>Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression: Covering numbers of families of (deep) ReLU networks have been used to characterize their approximation-theoretic performance, upper-bound the prediction error they incur in nonparametric regression, and quantify their classification capacity. These results are based on covering number upper bounds obtained through the explicit construction of coverings. Lower bounds on covering numbers do not seem to be available in the literature. The present paper fills this gap by deriving tight (up to a multiplicative constant) lower and upper bounds on the covering numbers of fully-connected networks with bounded weights, sparse networks with bounded weights, and fully-connected networks with quantized weights. Thanks to the tightness of the bounds, a fundamental understanding of the impact of sparsity, quantization, bounded vs. unbounded weights, and network output truncation can be developed. Furthermore, the bounds allow to characterize the fundamental limits of neural network transformation, including network compression, and lead to sharp upper bounds on the prediction error in nonparametric regression through deep networks. Specifically, we can remove a $\log^6(n)$-factor in the best-known sample complexity rate in the estimation of Lipschitz functions through deep networks thereby establishing optimality. Finally, we identify a systematic relation between optimal nonparametric regression and optimal approximation through deep networks, unifying numerous results in the literature and uncovering general underlying principles.<|reference_end|>
arxiv
@article{ou2024covering, title={Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression}, author={Weigutian Ou, Helmut Bölcskei}, journal={arXiv preprint arXiv:2410.06378}, year={2024}, archivePrefix={arXiv}, eprint={2410.06378}, primaryClass={stat.ML cs.AI cs.IT cs.LG math.IT} }
ou2024covering
arxiv-667257
2410.06380
Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception Under Adverse Weather Conditions
<|reference_start|>Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception Under Adverse Weather Conditions: Adverse weather conditions pose a significant challenge to the widespread adoption of Autonomous Vehicles (AVs) by impacting sensors like LiDARs and cameras. Even though Collaborative Perception (CP) improves AV perception in difficult conditions, existing CP datasets lack adverse weather conditions. To address this, we introduce Adver-City, the first open-source synthetic CP dataset focused on adverse weather conditions. Simulated in CARLA with OpenCDA, it contains over 24 thousand frames, over 890 thousand annotations, and 110 unique scenarios across six different weather conditions: clear weather, soft rain, heavy rain, fog, foggy heavy rain and, for the first time in a synthetic CP dataset, glare. It has six object categories including pedestrians and cyclists, and uses data from vehicles and roadside units featuring LiDARs, RGB and semantic segmentation cameras, GNSS, and IMUs. Its scenarios, based on real crash reports, depict the most relevant road configurations for adverse weather and poor visibility conditions, varying in object density, with both dense and sparse scenes, allowing for novel testing conditions of CP models. Benchmarks run on the dataset show that weather conditions created challenging conditions for perception models, reducing multi-modal object detection performance by up to 19%, while object density affected LiDAR-based detection by up to 29%. The dataset, code and documentation are available at https://labs.cs.queensu.ca/quarrg/datasets/adver-city/.<|reference_end|>
arxiv
@article{karvat2024adver-city:, title={Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception Under Adverse Weather Conditions}, author={Mateus Karvat, Sidney Givigi}, journal={arXiv preprint arXiv:2410.06380}, year={2024}, archivePrefix={arXiv}, eprint={2410.06380}, primaryClass={cs.CV cs.LG cs.RO} }
karvat2024adver-city:
arxiv-667258
2410.06384
Validation of the Scientific Literature via Chemputation Augmented by Large Language Models
<|reference_start|>Validation of the Scientific Literature via Chemputation Augmented by Large Language Models: Chemputation is the process of programming chemical robots to do experiments using a universal symbolic language, but the literature can be error prone and hard to read due to ambiguities. Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains, including natural language processing, robotic control, and more recently, chemistry. Despite significant advancements in standardizing the reporting and collection of synthetic chemistry data, the automatic reproduction of reported syntheses remains a labour-intensive task. In this work, we introduce an LLM-based chemical research agent workflow designed for the automatic validation of synthetic literature procedures. Our workflow can autonomously extract synthetic procedures and analytical data from extensive documents, translate these procedures into universal XDL code, simulate the execution of the procedure in a hardware-specific setup, and ultimately execute the procedure on an XDL-controlled robotic system for synthetic chemistry. This demonstrates the potential of LLM-based workflows for autonomous chemical synthesis with Chemputers. Due to the abstraction of XDL this approach is safe, secure, and scalable since hallucinations will not be chemputable and the XDL can be both verified and encrypted. Unlike previous efforts, which either addressed only a limited portion of the workflow, relied on inflexible hard-coded rules, or lacked validation in physical systems, our approach provides four realistic examples of syntheses directly executed from synthetic literature. We anticipate that our workflow will significantly enhance automation in robotically driven synthetic chemistry research, streamline data extraction, improve the reproducibility, scalability, and safety of synthetic and experimental chemistry.<|reference_end|>
arxiv
@article{pagel2024validation, title={Validation of the Scientific Literature via Chemputation Augmented by Large Language Models}, author={Sebastian Pagel, Michael Jirasek, Leroy Cronin}, journal={arXiv preprint arXiv:2410.06384}, year={2024}, archivePrefix={arXiv}, eprint={2410.06384}, primaryClass={cs.AI cs.CL cs.IR} }
pagel2024validation
arxiv-667259
2410.06385
Skin Cancer Machine Learning Model Tone Bias
<|reference_start|>Skin Cancer Machine Learning Model Tone Bias: Background: Many open-source skin cancer image datasets are the result of clinical trials conducted in countries with lighter skin tones. Due to this tone imbalance, machine learning models derived from these datasets can perform well at detecting skin cancer for lighter skin tones. Any tone bias in these models could introduce fairness concerns and reduce public trust in the artificial intelligence health field. Methods: We examine a subset of images from the International Skin Imaging Collaboration (ISIC) archive that provide tone information. The subset has a significant tone imbalance. These imbalances could explain a model's tone bias. To address this, we train models using the imbalanced dataset and a balanced dataset to compare against. The datasets are used to train a deep convolutional neural network model to classify the images as malignant or benign. We then evaluate the models' disparate impact, based on selection rate, relative to dark or light skin tone. Results: Using the imbalanced dataset, we found that the model is significantly better at detecting malignant images in lighter tones, resulting in a disparate impact of 0.577. Using the balanced dataset, we found that the model is also significantly better at detecting malignant images in lighter versus darker tones with a disparate impact of 0.684. Using the imbalanced or balanced dataset to train the model still results in a disparate impact well below the standard threshold of 0.80, which suggests the model is biased with respect to skin tone. Conclusion: The results show that typical skin cancer machine learning models can be tone biased. These results provide evidence that diagnosis or tone imbalance is not the cause of the bias. Other techniques will be necessary to identify and address the bias in these models, an area of future investigation.<|reference_end|>
arxiv
@article{pope2024skin, title={Skin Cancer Machine Learning Model Tone Bias}, author={James Pope, Md Hassanuzzaman, Mingmar Sherpa, Omar Emara, Ayush Joshi, Nirmala Adhikari}, journal={arXiv preprint arXiv:2410.06385}, year={2024}, archivePrefix={arXiv}, eprint={2410.06385}, primaryClass={eess.IV cs.AI cs.CV} }
pope2024skin
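For readers unfamiliar with the fairness metric used above, a minimal sketch of a selection-rate disparate-impact ratio and the conventional 0.80 (four-fifths) threshold follows; the counts are hypothetical and the paper's exact selection definition may differ.

```python
def disparate_impact(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates between group A (e.g. darker tones) and group B
    (e.g. lighter tones); values below ~0.80 are commonly read as disparate impact."""
    return (selected_a / total_a) / (selected_b / total_b)

di = disparate_impact(selected_a=23, total_a=100, selected_b=40, total_b=100)
print(di, di < 0.80)  # 0.575 True -- hypothetical counts, roughly mirroring 0.577
```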
arxiv-667260
2410.06386
A novel, finite-element-based framework for sparse data solution reconstruction and multiple choices
<|reference_start|>A novel, finite-element-based framework for sparse data solution reconstruction and multiple choices: Digital twinning offers a capability of effective real-time monitoring and control, which are vital for cost-intensive experimental facilities, particularly the ones where extreme conditions exist. Sparse experimental measurements collected by various diagnostic sensors are usually the only source of information available during the course of a physical experiment. Consequently, in order to enable monitoring and control of the experiment (digital twinning), the ability to perform inverse analysis, facilitating the full field solution reconstruction from the sparse experimental data in real time, is crucial. This paper shows for the first time that it is possible to directly solve inverse problems, such as solution reconstruction, where some or all boundary conditions (BCs) are unknown, by purely using a finite-element (FE) approach, without needing to employ any traditional inverse analysis techniques or any machine learning models, as is normally done in the field. This novel and efficient FE-based inverse analysis framework employs a conventional FE discretisation, splits the loading vector into two parts corresponding to the known and unknown BCs, and then defines a loss function based on that split. In spite of the loading vector split, the loss function preserves the element connectivity. This function is minimised using a gradient-based optimisation. Furthermore, this paper presents a novel modification of this approach, which allows it to generate a range of different solutions satisfying given requirements in a controlled manner. Controlled multiple solution generation in the context of inverse problems and their intrinsic ill-posedness is a novel notion, which has not been explored before. This is done in order to potentially introduce the capability of semi-autonomous system control with intermittent human intervention to the workflow.<|reference_end|>
arxiv
@article{bielajewa2024a, title={A novel, finite-element-based framework for sparse data solution reconstruction and multiple choices}, author={Wiera Bielajewa, Michelle Baxter (née Tindall), Perumal Nithiarasu}, journal={arXiv preprint arXiv:2410.06386}, year={2024}, archivePrefix={arXiv}, eprint={2410.06386}, primaryClass={cs.CE} }
bielajewa2024a
arxiv-667261
2410.06388
Evaluating the Impact of Warning Modalities and False Alarms in Pedestrian Crossing Alert System
<|reference_start|>Evaluating the Impact of Warning Modalities and False Alarms in Pedestrian Crossing Alert System: With the steadily increasing pedestrian fatalities, pedestrian safety is a growing concern, especially in urban environments. Advanced Driver Assistance Systems (ADAS) have been developed to mitigate road user risks by predicting potential pedestrian crossings and issuing timely driver alerts. However, there is limited understanding of how drivers respond to different modalities of alerts, particularly in the presence of false alarms. In this study, we utilized a full-scale driving simulator to compare the effectiveness of different alert modalities, audio-visual (AV), visual-tactile (VT), and audio-visual-tactile (AVT), in alerting drivers to various pedestrian jaywalking events. Our findings reveal that, compared to no alerts, multimodal alerts significantly increased the number of vehicles stopped for pedestrians and the distance to pedestrians when stopped. However, the false alarms negatively impacted driver trust, with some drivers exhibiting excessive caution, alert fatigue and anxiety, even including one instance where a driver fully stopped when no pedestrian was present.<|reference_end|>
arxiv
@article{alyamani2024evaluating, title={Evaluating the Impact of Warning Modalities and False Alarms in Pedestrian Crossing Alert System}, author={Hesham Alyamani, Yucheng Yang, David Noyce, Madhav Chitturi, and Kassem Fawaz}, journal={arXiv preprint arXiv:2410.06388}, year={2024}, archivePrefix={arXiv}, eprint={2410.06388}, primaryClass={cs.HC} }
alyamani2024evaluating
arxiv-667262
2410.06392
Counterfactual Causal Inference in Natural Language with Large Language Models
<|reference_start|>Counterfactual Causal Inference in Natural Language with Large Language Models: Causal structure discovery methods are commonly applied to structured data where the causal variables are known and where statistical testing can be used to assess the causal relationships. By contrast, recovering a causal structure from unstructured natural language data such as news articles contains numerous challenges due to the absence of known variables or counterfactual data to estimate the causal links. Large Language Models (LLMs) have shown promising results in this direction but also exhibit limitations. This work investigates LLM's abilities to build causal graphs from text documents and perform counterfactual causal inference. We propose an end-to-end causal structure discovery and causal inference method from natural language: we first use an LLM to extract the instantiated causal variables from text data and build a causal graph. We merge causal graphs from multiple data sources to represent the most exhaustive set of causes possible. We then conduct counterfactual inference on the estimated graph. The causal graph conditioning allows reduction of LLM biases and better represents the causal estimands. We use our method to show that the limitations of LLMs in counterfactual causal reasoning come from prediction errors and propose directions to mitigate them. We demonstrate the applicability of our method on real-world news articles.<|reference_end|>
arxiv
@article{gendron2024counterfactual, title={Counterfactual Causal Inference in Natural Language with Large Language Models}, author={Ga"el Gendron, Jov{z}e M. Rov{z}anec, Michael Witbrock, Gillian Dobbie}, journal={arXiv preprint arXiv:2410.06392}, year={2024}, archivePrefix={arXiv}, eprint={2410.06392}, primaryClass={cs.CL cs.LG} }
gendron2024counterfactual
arxiv-667263
2410.06395
Multimodal Representation Learning using Adaptive Graph Construction
<|reference_start|>Multimodal Representation Learning using Adaptive Graph Construction: Multimodal contrastive learning trains neural networks by leveraging data from heterogeneous sources such as images and text. Yet, many current multimodal learning architectures cannot generalize to an arbitrary number of modalities and need to be hand-constructed. We propose AutoBIND, a novel contrastive learning framework that can learn representations from an arbitrary number of modalities through graph optimization. We evaluate AutoBIND on Alzheimer's disease detection because it has real-world medical applicability and it contains a broad range of data modalities. We show that AutoBIND outperforms previous methods on this task, highlighting the generalizability of the approach.<|reference_end|>
arxiv
@article{huang2024multimodal, title={Multimodal Representation Learning using Adaptive Graph Construction}, author={Weichen Huang}, journal={arXiv preprint arXiv:2410.06395}, year={2024}, archivePrefix={arXiv}, eprint={2410.06395}, primaryClass={cs.LG cs.AI} }
huang2024multimodal
arxiv-667264
2410.06396
MLissard: Multilingual Long and Simple Sequential Reasoning Benchmarks
<|reference_start|>MLissard: Multilingual Long and Simple Sequential Reasoning Benchmarks: Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when lists have 80 items. In this paper, we introduce MLissard, a multilingual benchmark designed to evaluate models' abilities to process and generate texts of varied lengths, offering a mechanism for controlling sequence complexity. Our evaluation of open-source and proprietary models shows a consistent decline in performance across all models and languages as the complexity of the sequence increases. Surprisingly, the use of in-context examples in languages other than English helps increase extrapolation performance significantly. The datasets and code are available at https://github.com/unicamp-dl/Lissard<|reference_end|>
arxiv
@article{bueno2024mlissard:, title={MLissard: Multilingual Long and Simple Sequential Reasoning Benchmarks}, author={Mirelle Bueno, Roberto Lotufo and Rodrigo Nogueira}, journal={arXiv preprint arXiv:2410.06396}, year={2024}, archivePrefix={arXiv}, eprint={2410.06396}, primaryClass={cs.CL} }
bueno2024mlissard:
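The MLissard entry above centres on length-controlled tasks such as finding the common items in two lists. As a rough illustration of how such an instance can be generated at a chosen length (a hypothetical generator, not the benchmark's own code; the vocabulary size and overlap are arbitrary choices):

```python
import random

random.seed(0)

def common_items_task(n, k, vocab_size=10_000):
    """Build two lists of length n that share exactly k items."""
    pool = random.sample(range(vocab_size), 2 * n - k)
    shared, rest = pool[:k], pool[k:]
    list_a = shared + rest[: n - k]
    list_b = shared + rest[n - k :]
    random.shuffle(list_a)
    random.shuffle(list_b)
    return list_a, list_b, set(shared)

# Short lists (where models tend to succeed) versus longer ones.
for n in (20, 80):
    a, b, gold = common_items_task(n, k=5)
    prompt = f"List A: {a}\nList B: {b}\nWhich items appear in both lists?"
    print(n, len(prompt.split()), sorted(gold))  # a model's answer would be scored against `gold`
```

Scaling n while keeping the underlying rule fixed isolates the effect of sequence length, which is the kind of controlled extrapolation the benchmark measures.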
arxiv-667265
2410.06397
Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling
<|reference_start|>Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling: Analog dynamical accelerators (DXs) are a growing sub-field in computer architecture research, offering order-of-magnitude gains in power efficiency and latency over traditional digital methods in several machine learning, optimization, and sampling tasks. However, limited-capacity accelerators require hybrid analog/digital algorithms to solve real-world problems, commonly using large-neighborhood local search (LNLS) frameworks. Unlike fully digital algorithms, hybrid LNLS has no non-asymptotic convergence guarantees and no principled hyperparameter selection schemes, particularly limiting cross-device training and inference. In this work, we provide non-asymptotic convergence guarantees for hybrid LNLS by reducing to block Langevin Diffusion (BLD) algorithms. Adapting tools from classical sampling theory, we prove exponential KL-divergence convergence for randomized and cyclic block selection strategies using ideal DXs. With finite device variation, we provide explicit bounds on the 2-Wasserstein bias in terms of step duration, noise strength, and function parameters. Our BLD model provides a key link between established theory and novel computing platforms, and our theoretical results provide a closed-form expression linking device variation, algorithm hyperparameters, and performance.<|reference_end|>
arxiv
@article{burns2024provable, title={Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling}, author={Matthew X. Burns, Qingyuan Hou, Michael C. Huang}, journal={arXiv preprint arXiv:2410.06397}, year={2024}, archivePrefix={arXiv}, eprint={2410.06397}, primaryClass={cs.LG cs.DS math.ST stat.TH} }
burns2024provable
arxiv-667266
2410.06399
Adaptive Random Fourier Features Training Stabilized By Resampling With Applications in Image Regression
<|reference_start|>Adaptive Random Fourier Features Training Stabilized By Resampling With Applications in Image Regression: This paper presents an enhanced adaptive random Fourier features (ARFF) training algorithm for shallow neural networks, building upon the work introduced in "Adaptive Random Fourier Features with Metropolis Sampling", Kammonen et al., Foundations of Data Science, 2(3):309--332, 2020. This improved method uses a particle filter type resampling technique to stabilize the training process and reduce sensitivity to parameter choices. With resampling, the Metropolis test may also be omitted, reducing the number of hyperparameters and reducing the computational cost per iteration, compared to ARFF. We present comprehensive numerical experiments demonstrating the efficacy of our proposed algorithm in function regression tasks, both as a standalone method and as a pre-training step before gradient-based optimization, here Adam. Furthermore, we apply our algorithm to a simple image regression problem, showcasing its utility in sampling frequencies for the random Fourier features (RFF) layer of coordinate-based multilayer perceptrons (MLPs). In this context, we use the proposed algorithm to sample the parameters of the RFF layer in an automated manner.<|reference_end|>
arxiv
@article{kammonen2024adaptive, title={Adaptive Random Fourier Features Training Stabilized By Resampling With Applications in Image Regression}, author={Aku Kammonen, Anamika Pandey, Erik von Schwerin, Ra'ul Tempone}, journal={arXiv preprint arXiv:2410.06399}, year={2024}, archivePrefix={arXiv}, eprint={2410.06399}, primaryClass={cs.LG} }
kammonen2024adaptive
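As a minimal sketch of the resampling idea summarized above, on a 1-D toy regression (this is not the authors' algorithm; the random-walk proposal, weighting by amplitude magnitude, and all hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(3 * x) + 0.5 * np.cos(7 * x)          # toy target function

K, n_rounds = 64, 20
omega = rng.normal(0.0, 1.0, K)                  # random Fourier frequencies
b = rng.uniform(0, 2 * np.pi, K)                 # random phases

for _ in range(n_rounds):
    Phi = np.cos(np.outer(x, omega) + b)         # feature matrix
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    # Particle-filter style step: resample frequencies in proportion to
    # their fitted amplitudes, then jitter them with a small random walk.
    p = np.abs(beta) / np.abs(beta).sum()
    keep = rng.choice(K, size=K, p=p)
    omega = omega[keep] + 0.1 * rng.normal(size=K)
    b = rng.uniform(0, 2 * np.pi, K)

Phi = np.cos(np.outer(x, omega) + b)
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("RMSE:", np.sqrt(np.mean((Phi @ beta - y) ** 2)))
```

Resampling concentrates the frequency "particles" where they carry amplitude, which is the stabilizing effect the abstract attributes to the method; the Metropolis accept/reject step is omitted here, as the abstract notes it can be.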
arxiv-667267
2410.06400
Reliable Heading Tracking for Pedestrian Road Crossing Prediction Using Commodity Devices
<|reference_start|>Reliable Heading Tracking for Pedestrian Road Crossing Prediction Using Commodity Devices: Pedestrian heading tracking enables applications in pedestrian navigation, traffic safety, and accessibility. Previous works, using inertial sensor fusion or machine learning, are limited in that they assume the phone is fixed in specific orientations, hindering their generalizability. We propose a new heading tracking algorithm, the Orientation-Heading Alignment (OHA), which leverages a key insight: people tend to carry smartphones in certain ways due to habits, such as swinging them while walking. For each smartphone attitude during this motion, OHA maps the smartphone orientation to the pedestrian heading and learns such mappings efficiently from coarse headings and smartphone orientations. To anchor our algorithm in a practical scenario, we apply OHA to a challenging task: predicting when pedestrians are about to cross the road to improve road user safety. In particular, using 755 hours of walking data collected since 2020 from 60 individuals, we develop a lightweight model that operates in real-time on commodity devices to predict road crossings. Our evaluation shows that OHA achieves 3.4 times smaller heading errors across nine scenarios than existing methods. Furthermore, OHA enables the early and accurate detection of pedestrian crossing behavior, issuing crossing alerts 0.35 seconds, on average, before pedestrians enter the road range.<|reference_end|>
arxiv
@article{yang2024reliable, title={Reliable Heading Tracking for Pedestrian Road Crossing Prediction Using Commodity Devices}, author={Yucheng Yang, Jingjie Li, and Kassem Fawaz}, journal={arXiv preprint arXiv:2410.06400}, year={2024}, archivePrefix={arXiv}, eprint={2410.06400}, primaryClass={eess.SP cs.LG} }
yang2024reliable
arxiv-667268
2410.06401
Trajectory Improvement and Reward Learning from Comparative Language Feedback
<|reference_start|>Trajectory Improvement and Reward Learning from Comparative Language Feedback: Learning from human feedback has gained traction in fields like robotics and natural language processing in recent years. While prior works mostly rely on human feedback in the form of comparisons, language is a preferable modality that provides more informative insights into user preferences. In this work, we aim to incorporate comparative language feedback to iteratively improve robot trajectories and to learn reward functions that encode human preferences. To achieve this goal, we learn a shared latent space that integrates trajectory data and language feedback, and subsequently leverage the learned latent space to improve trajectories and learn human preferences. To the best of our knowledge, we are the first to incorporate comparative language feedback into reward learning. Our simulation experiments demonstrate the effectiveness of the learned latent space and the success of our learning algorithms. We also conduct human subject studies that show our reward learning algorithm achieves a 23.9% higher subjective score on average and is 11.3% more time-efficient compared to preference-based reward learning, underscoring the superior performance of our method. Our website is at https://liralab.usc.edu/comparative-language-feedback/<|reference_end|>
arxiv
@article{yang2024trajectory, title={Trajectory Improvement and Reward Learning from Comparative Language Feedback}, author={Zhaojing Yang, Miru Jun, Jeremy Tien, Stuart J. Russell, Anca Dragan, Erdem B{i}y{i}k}, journal={arXiv preprint arXiv:2410.06401}, year={2024}, archivePrefix={arXiv}, eprint={2410.06401}, primaryClass={cs.RO} }
yang2024trajectory
arxiv-667269
2410.06405
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
<|reference_start|>Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects: The Abstraction and Reasoning Corpus (ARC) is a popular benchmark focused on visual reasoning in the evaluation of Artificial Intelligence systems. In its original framing, an ARC task requires solving a program synthesis problem over small 2D images using a few input-output training pairs. In this work, we adopt the recently popular data-driven approach to the ARC and ask whether a Vision Transformer (ViT) can learn the implicit mapping, from input image to output image, that underlies the task. We show that a ViT -- otherwise a state-of-the-art model for images -- fails dramatically on most ARC tasks even when trained on one million examples per task. This points to an inherent representational deficiency of the ViT architecture that makes it incapable of uncovering the simple structured mappings underlying the ARC tasks. Building on these insights, we propose ViTARC, a ViT-style architecture that unlocks some of the visual reasoning capabilities required by the ARC. Specifically, we use a pixel-level input representation, design a spatially-aware tokenization scheme, and introduce a novel object-based positional encoding that leverages automatic segmentation, among other enhancements. Our task-specific ViTARC models achieve a test solve rate close to 100% on more than half of the 400 public ARC tasks strictly through supervised learning from input-output grids. This calls attention to the importance of imbuing the powerful (Vision) Transformer with the correct inductive biases for abstract visual reasoning that are critical even when the training data is plentiful and the mapping is noise-free. Hence, ViTARC provides a strong foundation for future research in visual reasoning using transformer-based architectures.<|reference_end|>
arxiv
@article{li2024tackling, title={Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects}, author={Wenhao Li, Yudong Xu, Scott Sanner, Elias Boutros Khalil}, journal={arXiv preprint arXiv:2410.06405}, year={2024}, archivePrefix={arXiv}, eprint={2410.06405}, primaryClass={cs.CV cs.AI} }
li2024tackling
arxiv-667270
2410.06406
Topology-Agnostic Graph U-Nets for Scalar Field Prediction on Unstructured Meshes
<|reference_start|>Topology-Agnostic Graph U-Nets for Scalar Field Prediction on Unstructured Meshes: Machine-learned surrogate models to accelerate lengthy computer simulations are becoming increasingly important as engineers look to streamline the product design cycle. In many cases, these approaches offer the ability to predict relevant quantities throughout a geometry, but place constraints on the form of the input data. In a world of diverse data types, a preferred approach would not restrict the input to a particular structure. In this paper, we propose Topology-Agnostic Graph U-Net (TAG U-Net), a graph convolutional network that can be trained to input any mesh or graph structure and output a prediction of a target scalar field at each node. The model constructs coarsened versions of each input graph and performs a set of convolution and pooling operations to predict the node-wise outputs on the original graph. By training on a diverse set of shapes, the model can make strong predictions, even for shapes unlike those seen during training. A 3-D additive manufacturing dataset is presented, containing Laser Powder Bed Fusion simulation results for thousands of parts. The model is demonstrated on this dataset, and it performs well, predicting both 2-D and 3-D scalar fields with a median R-squared > 0.85 on test geometries. Code and datasets are available online.<|reference_end|>
arxiv
@article{ferguson2024topology-agnostic, title={Topology-Agnostic Graph U-Nets for Scalar Field Prediction on Unstructured Meshes}, author={Kevin Ferguson, Yu-hsuan Chen, Yiming Chen, Andrew Gillman, James Hardin, Levent Burak Kara}, journal={arXiv preprint arXiv:2410.06406}, year={2024}, archivePrefix={arXiv}, eprint={2410.06406}, primaryClass={cs.LG} }
ferguson2024topology-agnostic
arxiv-667271
2410.06407
A Skewness-Based Criterion for Addressing Heteroscedastic Noise in Causal Discovery
<|reference_start|>A Skewness-Based Criterion for Addressing Heteroscedastic Noise in Causal Discovery: Real-world data often violates the equal-variance assumption (homoscedasticity), making it essential to account for heteroscedastic noise in causal discovery. In this work, we explore heteroscedastic symmetric noise models (HSNMs), where the effect $Y$ is modeled as $Y = f(X) + \sigma(X)N$, with $X$ as the cause and $N$ as independent noise following a symmetric distribution. We introduce a novel criterion for identifying HSNMs based on the skewness of the score (i.e., the gradient of the log density) of the data distribution. This criterion establishes a computationally tractable measurement that is zero in the causal direction but nonzero in the anticausal direction, enabling the causal direction discovery. We extend this skewness-based criterion to the multivariate setting and propose SkewScore, an algorithm that handles heteroscedastic noise without requiring the extraction of exogenous noise. We also conduct a case study on the robustness of SkewScore in a bivariate model with a latent confounder, providing theoretical insights into its performance. Empirical studies further validate the effectiveness of the proposed method.<|reference_end|>
arxiv
@article{lin2024a, title={A Skewness-Based Criterion for Addressing Heteroscedastic Noise in Causal Discovery}, author={Yingyu Lin, Yuxing Huang, Wenqin Liu, Haoran Deng, Ignavier Ng, Kun Zhang, Mingming Gong, Yi-An Ma, Biwei Huang}, journal={arXiv preprint arXiv:2410.06407}, year={2024}, archivePrefix={arXiv}, eprint={2410.06407}, primaryClass={cs.LG stat.ME stat.ML} }
lin2024a
arxiv-667272
2410.06408
Automating Data Science Pipelines with Tensor Completion
<|reference_start|>Automating Data Science Pipelines with Tensor Completion: Hyperparameter optimization is an essential component in many data science pipelines and typically entails exhaustive time and resource-consuming computations in order to explore the combinatorial search space. Similar to this problem, other key operations in data science pipelines exhibit the exact same properties. Important examples are: neural architecture search, where the goal is to identify the best design choices for a neural network, and query cardinality estimation, where given different predicate values for a SQL query the goal is to estimate the size of the output. In this paper, we abstract away those essential components of data science pipelines and we model them as instances of tensor completion, where each variable of the search space corresponds to one mode of the tensor, and the goal is to identify all missing entries of the tensor, corresponding to all combinations of variable values, starting from a very small sample of observed entries. In order to do so, we first conduct a thorough experimental evaluation of existing state-of-the-art tensor completion techniques and introduce domain-inspired adaptations (such as smoothness across the discretized variable space) and an ensemble technique which is able to achieve state-of-the-art performance. We extensively evaluate existing and proposed methods in a number of datasets generated corresponding to (a) hyperparameter optimization for non-neural network models, (b) neural architecture search, and (c) variants of query cardinality estimation, demonstrating the effectiveness of tensor completion as a tool for automating data science pipelines. Furthermore, we release our generated datasets and code in order to provide benchmarks for future work on this topic.<|reference_end|>
arxiv
@article{pakala2024automating, title={Automating Data Science Pipelines with Tensor Completion}, author={Shaan Pakala, Bryce Graw, Dawon Ahn, Tam Dinh, Mehnaz Tabassum Mahin, Vassilis Tsotras, Jia Chen, Evangelos E. Papalexakis}, journal={arXiv preprint arXiv:2410.06408}, year={2024}, archivePrefix={arXiv}, eprint={2410.06408}, primaryClass={cs.LG} }
pakala2024automating
arxiv-667273
2410.06409
Fast Phase Factor Finding for Quantum Signal Processing
<|reference_start|>Fast Phase Factor Finding for Quantum Signal Processing: This paper presents two efficient and stable algorithms for recovering phase factors in quantum signal processing (QSP), a crucial component of many quantum algorithms. The first algorithm, the ``Half Cholesky" method, which is based on nonlinear Fourier analysis and fast solvers for structured matrices, demonstrates robust performance across all regimes. The second algorithm, ``Fast Fixed Point Iteration," provides even greater efficiency in the non-fully-coherent regime. Both theoretical analysis and numerical experiments demonstrate the significant advantages of these new methods over all existing approaches.<|reference_end|>
arxiv
@article{ni2024fast, title={Fast Phase Factor Finding for Quantum Signal Processing}, author={Hongkang Ni, Lexing Ying}, journal={arXiv preprint arXiv:2410.06409}, year={2024}, archivePrefix={arXiv}, eprint={2410.06409}, primaryClass={quant-ph cs.NA math.NA} }
ni2024fast
arxiv-667274
2410.06410
BEVLoc: Cross-View Localization and Matching via Birds-Eye-View Synthesis
<|reference_start|>BEVLoc: Cross-View Localization and Matching via Birds-Eye-View Synthesis: Ground to aerial matching is a crucial and challenging task in outdoor robotics, particularly when GPS is absent or unreliable. Structures like buildings or large dense forests create interference, requiring GNSS replacements for global positioning estimates. The true difficulty lies in reconciling the perspective difference between the ground and air images for acceptable localization. Taking inspiration from the autonomous driving community, we propose a novel framework for synthesizing a birds-eye-view (BEV) scene representation to match and localize against an aerial map in off-road environments. We leverage contrastive learning with domain specific hard negative mining to train a network to learn similar representations between the synthesized BEV and the aerial map. During inference, BEVLoc guides the identification of the most probable locations within the aerial map through a coarse-to-fine matching strategy. Our results demonstrate promising initial outcomes in extremely difficult forest environments with limited semantic diversity. We analyze our model's performance for coarse and fine matching, assessing both the raw matching capability of our model and its performance as a GNSS replacement. Our work delves into off-road map localization while establishing a foundational baseline for future developments in localization. Our code is available at: https://github.com/rpl-cmu/bevloc<|reference_end|>
arxiv
@article{klammer2024bevloc:, title={BEVLoc: Cross-View Localization and Matching via Birds-Eye-View Synthesis}, author={Christopher Klammer and Michael Kaess}, journal={arXiv preprint arXiv:2410.06410}, year={2024}, archivePrefix={arXiv}, eprint={2410.06410}, primaryClass={cs.RO cs.CV} }
klammer2024bevloc:
arxiv-667275
2410.06412
Stochastic Sparse Sampling: A Framework for Variable-Length Medical Time Series Classification
<|reference_start|>Stochastic Sparse Sampling: A Framework for Variable-Length Medical Time Series Classification: While the majority of time series classification research has focused on modeling fixed-length sequences, variable-length time series classification (VTSC) remains critical in healthcare, where sequence length may vary among patients and events. To address this challenge, we propose $\textbf{S}$tochastic $\textbf{S}$parse $\textbf{S}$ampling (SSS), a novel VTSC framework developed for medical time series. SSS manages variable-length sequences by sparsely sampling fixed windows to compute local predictions, which are then aggregated and calibrated to form a global prediction. We apply SSS to the task of seizure onset zone (SOZ) localization, a critical VTSC problem requiring identification of seizure-inducing brain regions from variable-length electrophysiological time series. We evaluate our method on the Epilepsy iEEG Multicenter Dataset, a heterogeneous collection of intracranial electroencephalography (iEEG) recordings obtained from four independent medical centers. SSS demonstrates superior performance compared to state-of-the-art (SOTA) baselines across most medical centers, and superior performance on all out-of-distribution (OOD) unseen medical centers. Additionally, SSS naturally provides post-hoc insights into local signal characteristics related to the SOZ, by visualizing temporally averaged local predictions throughout the signal.<|reference_end|>
arxiv
@article{mootoo2024stochastic, title={Stochastic Sparse Sampling: A Framework for Variable-Length Medical Time Series Classification}, author={Xavier Mootoo, Alan A. D'iaz-Montiel, Milad Lankarany, Hina Tabassum}, journal={arXiv preprint arXiv:2410.06412}, year={2024}, archivePrefix={arXiv}, eprint={2410.06412}, primaryClass={cs.LG} }
mootoo2024stochastic
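A bare-bones sketch of the sparse-window idea described above: draw a few fixed-length windows from a variable-length recording, score each locally, and aggregate. The window model, window length, and plain-mean aggregation below are placeholders, and the paper's calibration step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sss_predict(series, window_model, window_len=128, n_windows=8):
    """Average a few local window predictions into one global probability."""
    starts = rng.integers(0, len(series) - window_len + 1, size=n_windows)
    local = [window_model(series[s:s + window_len]) for s in starts]
    return float(np.mean(local))

def toy_window_model(window):
    # Hypothetical stand-in for a trained per-window classifier.
    return 1.0 / (1.0 + np.exp(-window.mean()))

recording = rng.normal(0.3, 1.0, size=5000)      # one variable-length series
print(sss_predict(recording, toy_window_model))
```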
arxiv-667276
2410.06415
Biased AI can Influence Political Decision-Making
<|reference_start|>Biased AI can Influence Political Decision-Making: As modern AI models become integral to everyday tasks, concerns about their inherent biases and their potential impact on human decision-making have emerged. While bias in models is well-documented, less is known about how these biases influence human decisions. This paper presents two interactive experiments investigating the effects of partisan bias in AI language models on political decision-making. Participants interacted freely with a liberal-biased, conservative-biased, or unbiased control model while completing political decision-making tasks. We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias, regardless of their personal political partisanship. However, we also discovered that prior knowledge about AI could lessen the impact of the bias, highlighting the possible importance of AI education for robust bias mitigation. Our findings not only highlight the critical effects of interacting with biased AI and its ability to impact public discourse and political conduct, but also highlight potential techniques for mitigating these risks in the future.<|reference_end|>
arxiv
@article{fisher2024biased, title={Biased AI can Influence Political Decision-Making}, author={Jillian Fisher, Shangbin Feng, Robert Aron, Thomas Richardson, Yejin Choi, Daniel W. Fisher, Jennifer Pan, Yulia Tsvetkov, Katharina Reinecke}, journal={arXiv preprint arXiv:2410.06415}, year={2024}, archivePrefix={arXiv}, eprint={2410.06415}, primaryClass={cs.HC cs.AI} }
fisher2024biased
arxiv-667277
2410.06416
Evaluating the Dependency Between Cyclomatic Complexity and Response For Class
<|reference_start|>Evaluating the Dependency Between Cyclomatic Complexity and Response For Class: In object-oriented programming, it is reasonable to hypothesize that smaller classes with fewer methods are less complex. Should this hypothesis hold true, it would be advisable for programmers to design classes with fewer methods, as complexity significantly contributes to poor maintainability. To test this assumption, we analyzed 862,517 Java classes from 1,000 open GitHub repositories. Our findings indicate a strong Pearson correlation of 0.79 between the cumulative McCabe's Cyclomatic Complexity (CC) of all class methods and the number of methods, a metric known as Response for Class (RFC).<|reference_end|>
arxiv
@article{stavtsev2024evaluating, title={Evaluating the Dependency Between Cyclomatic Complexity and Response For Class}, author={Maxim Stavtsev, Yegor Bugayenko}, journal={arXiv preprint arXiv:2410.06416}, year={2024}, archivePrefix={arXiv}, eprint={2410.06416}, primaryClass={cs.SE} }
stavtsev2024evaluating
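The reported relationship above is an ordinary Pearson correlation between two per-class metrics. With made-up numbers standing in for the values a static-analysis pass would extract from the Java corpus, the computation is simply:

```python
import numpy as np

# Hypothetical per-class values: summed cyclomatic complexity of all
# methods (CC) and number of methods (RFC).
cc  = np.array([12, 45, 7, 88, 23, 150, 31, 9, 64, 40])
rfc = np.array([ 5, 18, 3, 30, 10,  52, 12, 4, 22, 15])

r = np.corrcoef(cc, rfc)[0, 1]    # Pearson correlation coefficient
print(f"Pearson r = {r:.2f}")
```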
arxiv-667278
2410.06418
MIRACLE 3D: Memory-efficient Integrated Robust Approach for Continual Learning on Point Clouds via Shape Model construction
<|reference_start|>MIRACLE 3D: Memory-efficient Integrated Robust Approach for Continual Learning on Point Clouds via Shape Model construction: In this paper, we introduce a novel framework for memory-efficient and privacy-preserving continual learning in 3D object classification. Unlike conventional memory-based approaches in continual learning that require storing numerous exemplars, our method constructs a compact shape model for each class, retaining only the mean shape along with a few key modes of variation. This strategy not only enables the generation of diverse training samples while drastically reducing memory usage but also enhances privacy by eliminating the need to store original data. To further improve model robustness against input variations, an issue common in 3D domains due to the absence of strong backbones and limited training data, we incorporate Gradient Mode Regularization. This technique enhances model stability and broadens classification margins, resulting in accuracy improvements. We validate our approach through extensive experiments on the ModelNet40, ShapeNet, and ScanNet datasets, where we achieve state-of-the-art performance. Notably, our method consumes only 15% of the memory required by competing methods on the ModelNet40 and ShapeNet, while achieving comparable performance on the challenging ScanNet dataset with just 8.5% of the memory. These results underscore the scalability, effectiveness, and privacy-preserving strengths of our framework for 3D object classification.<|reference_end|>
arxiv
@article{resani2024miracle, title={MIRACLE 3D: Memory-efficient Integrated Robust Approach for Continual Learning on Point Clouds via Shape Model construction}, author={Hossein Resani and Behrooz Nasihatkon}, journal={arXiv preprint arXiv:2410.06418}, year={2024}, archivePrefix={arXiv}, eprint={2410.06418}, primaryClass={cs.CV} }
resani2024miracle
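The "mean shape plus a few key modes of variation" idea above is essentially a compact statistical shape model. The sketch below uses a generic PCA construction on pre-aligned, flattened point clouds to show the memory trade-off; it is not the paper's exact procedure, and the array shapes and mode count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_shape_model(clouds, n_modes=3):
    """clouds: (n_samples, n_points * 3) aligned, flattened point clouds.
    Keep only the class mean and a few principal modes of variation."""
    mean = clouds.mean(axis=0)
    _, s, vt = np.linalg.svd(clouds - mean, full_matrices=False)
    return mean, vt[:n_modes], s[:n_modes] / np.sqrt(len(clouds))

def sample_shapes(mean, modes, stds, n=8):
    """Regenerate synthetic exemplars by perturbing the mean along the modes."""
    coeffs = rng.normal(size=(n, len(stds))) * stds
    return mean + coeffs @ modes

clouds = rng.normal(size=(50, 1024 * 3))   # 50 aligned clouds of 1024 points each
mean, modes, stds = build_shape_model(clouds)
replay = sample_shapes(mean, modes, stds)
print(replay.shape)   # (8, 3072): diverse samples from 1 mean + 3 modes,
                      # instead of storing all 50 original clouds
```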
arxiv-667279
2410.06420
ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments
<|reference_start|>ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments: The global shortage of healthcare workers has demanded the development of smart healthcare assistants, which can help monitor and alert healthcare workers when necessary. We examine the healthcare knowledge of existing Large Vision Language Models (LVLMs) via the Visual Question Answering (VQA) task in hospital settings through expert annotated open-ended questions. We introduce the Emergency Room Visual Question Answering (ERVQA) dataset, consisting of <image, question, answer> triplets covering diverse emergency room scenarios, a seminal benchmark for LVLMs. By developing a detailed error taxonomy and analyzing answer trends, we reveal the nuanced nature of the task. We benchmark state-of-the-art open-source and closed LVLMs using traditional and adapted VQA metrics: Entailment Score and CLIPScore Confidence. Analyzing errors across models, we infer trends based on properties like decoder type, model size, and in-context examples. Our findings suggest the ERVQA dataset presents a highly complex task, highlighting the need for specialized, domain-specific solutions.<|reference_end|>
arxiv
@article{ray2024ervqa:, title={ERVQA: A Dataset to Benchmark the Readiness of Large Vision Language Models in Hospital Environments}, author={Sourjyadip Ray, Kushal Gupta, Soumi Kundu, Payal Arvind Kasat, Somak Aditya, Pawan Goyal}, journal={arXiv preprint arXiv:2410.06420}, year={2024}, archivePrefix={arXiv}, eprint={2410.06420}, primaryClass={cs.CL cs.CV} }
ray2024ervqa:
arxiv-667280
2410.06422
Predicting Battery Capacity Fade Using Probabilistic Machine Learning Models With and Without Pre-Trained Priors
<|reference_start|>Predicting Battery Capacity Fade Using Probabilistic Machine Learning Models With and Without Pre-Trained Priors: Lithium-ion batteries are a key energy storage technology driving revolutions in mobile electronics, electric vehicles and renewable energy storage. Capacity retention is a vital performance measure that is frequently utilized to assess whether these batteries have approached their end-of-life. Machine learning (ML) offers a powerful tool for predicting capacity degradation based on past data, and, potentially, prior physical knowledge, but the degree to which an ML prediction can be trusted is of significant practical importance in situations where consequential decisions must be made based on battery state of health. This study explores the efficacy of fully Bayesian machine learning in forecasting battery health with the quantification of uncertainty in its predictions. Specifically, we implemented three probabilistic ML approaches and evaluated the accuracy of their predictions and uncertainty estimates: a standard Gaussian process (GP), a structured Gaussian process (sGP), and a fully Bayesian neural network (BNN). In typical applications of GP and sGP, their hyperparameters are learned from a single sample while, in contrast, BNNs are typically pre-trained on an existing dataset to learn the weight distributions before being used for inference. This difference in methodology gives the BNN an advantage in learning global trends in a dataset and makes BNNs a good choice when training data is available. However, we show that pre-training can also be leveraged for GP and sGP approaches to learn the prior distributions of the hyperparameters and that in the case of the pre-trained sGP, similar accuracy and improved uncertainty estimation compared to the BNN can be achieved. This approach offers a framework for a broad range of probabilistic machine learning scenarios where past data is available and can be used to learn priors for (hyper)parameters of probabilistic ML models.<|reference_end|>
arxiv
@article{kenney2024predicting, title={Predicting Battery Capacity Fade Using Probabilistic Machine Learning Models With and Without Pre-Trained Priors}, author={Michael J. Kenney, Katerina G. Malollari, Sergei V. Kalinin, Maxim Ziatdinov}, journal={arXiv preprint arXiv:2410.06422}, year={2024}, archivePrefix={arXiv}, eprint={2410.06422}, primaryClass={cs.LG cond-mat.mtrl-sci} }
kenney2024predicting
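As a small, self-contained illustration of the plain-GP baseline mentioned above, scikit-learn's Gaussian process regressor already returns both a forecast and its uncertainty. The synthetic fade curve and kernel choices are assumptions, and the study's structured GP, Bayesian neural network, and pre-trained priors are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical capacity-fade curve: cycle number vs. relative capacity.
cycles = np.arange(0, 500, 25, dtype=float)
capacity = (1.0 - 2e-4 * cycles - 0.02 * np.sqrt(cycles / 500)
            + rng.normal(0, 0.003, len(cycles)))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(1e-5),
                              normalize_y=True)
gp.fit(cycles[:, None], capacity)

future = np.arange(0, 800, 10, dtype=float)[:, None]
mean, std = gp.predict(future, return_std=True)
print(f"cycle 790 forecast: {mean[-1]:.3f} +/- {std[-1]:.3f}")
```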
arxiv-667281
2410.06423
FAIREDU: A Multiple Regression-Based Method for Enhancing Fairness in Machine Learning Models for Educational Applications
<|reference_start|>FAIREDU: A Multiple Regression-Based Method for Enhancing Fairness in Machine Learning Models for Educational Applications: Fairness in artificial intelligence and machine learning (AI/ML) models is becoming critically important, especially as decisions made by these systems impact diverse groups. In education, a vital sector for all countries, the widespread application of AI/ML systems raises specific concerns regarding fairness. Current research predominantly focuses on fairness for individual sensitive features, which limits the comprehensiveness of fairness assessments. This paper introduces FAIREDU, a novel and effective method designed to improve fairness across multiple sensitive features. Through extensive experiments, we evaluate FAIREDU's effectiveness in enhancing fairness without compromising model performance. The results demonstrate that FAIREDU addresses intersectionality across features such as gender, race, age, and other sensitive features, outperforming state-of-the-art methods with minimal effect on model accuracy. The paper also explores potential future research directions to further enhance the method's robustness and applicability to various machine-learning models and datasets.<|reference_end|>
arxiv
@article{pham2024fairedu:, title={FAIREDU: A Multiple Regression-Based Method for Enhancing Fairness in Machine Learning Models for Educational Applications}, author={Nga Pham, Minh Kha Do, Tran Vu Dai, Pham Ngoc Hung, Anh Nguyen-Duc}, journal={arXiv preprint arXiv:2410.06423}, year={2024}, archivePrefix={arXiv}, eprint={2410.06423}, primaryClass={cs.LG cs.AI} }
pham2024fairedu:
arxiv-667282
2410.06424
Restructuring Vector Quantization with the Rotation Trick
<|reference_start|>Restructuring Vector Quantization with the Rotation Trick: Vector Quantized Variational AutoEncoders (VQ-VAEs) are designed to compress a continuous input to a discrete latent space and reconstruct it with minimal distortion. They operate by maintaining a set of vectors -- often referred to as the codebook -- and quantizing each encoder output to the nearest vector in the codebook. However, as vector quantization is non-differentiable, the gradient to the encoder flows around the vector quantization layer rather than through it in a straight-through approximation. This approximation may be undesirable as all information from the vector quantization operation is lost. In this work, we propose a way to propagate gradients through the vector quantization layer of VQ-VAEs. We smoothly transform each encoder output into its corresponding codebook vector via a rotation and rescaling linear transformation that is treated as a constant during backpropagation. As a result, the relative magnitude and angle between encoder output and codebook vector becomes encoded into the gradient as it propagates through the vector quantization layer and back to the encoder. Across 11 different VQ-VAE training paradigms, we find this restructuring improves reconstruction metrics, codebook utilization, and quantization error. Our code is available at https://github.com/cfifty/rotation_trick.<|reference_end|>
arxiv
@article{fifty2024restructuring, title={Restructuring Vector Quantization with the Rotation Trick}, author={Christopher Fifty, Ronald G. Junkins, Dennis Duan, Aniketh Iger, Jerry W. Liu, Ehsan Amid, Sebastian Thrun, Christopher R'e}, journal={arXiv preprint arXiv:2410.06424}, year={2024}, archivePrefix={arXiv}, eprint={2410.06424}, primaryClass={cs.LG cs.CV} }
fifty2024restructuring
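A heavily simplified sketch of the "transform treated as a constant" idea in the entry above. For brevity it uses a Householder reflection (rather than the paper's rotation) to map the encoder output onto the nearest codebook vector, and detaches both the reflection and the rescaling factor so that gradients flow through the transform back to the encoder; the authors' reference implementation is in the linked repository and differs from this toy version.

```python
import torch

def reflect_to_codebook(e, q, eps=1e-8):
    """Forward pass returns the codebook vector q; backward pass sends
    gradients through a detached, rescaled reflection of e instead of a
    plain straight-through copy. e, q: 1-D tensors of the same size."""
    e_hat = e / (e.norm() + eps)
    q_hat = q / (q.norm() + eps)
    scale = (q.norm() / (e.norm() + eps)).detach()
    v = e_hat - q_hat
    if v.norm() < eps:                 # already aligned with the codeword
        return scale * e
    v = (v / v.norm()).detach()        # Householder direction, held constant
    reflected = e - 2.0 * v * torch.dot(v, e)
    return scale * reflected

e = torch.randn(8, requires_grad=True)     # encoder output
q = torch.randn(8)                         # nearest codebook vector
out = reflect_to_codebook(e, q)
print(torch.allclose(out, q, atol=1e-5))   # forward pass reproduces q
out.sum().backward()
print(e.grad)                              # gradient encodes relative geometry
```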
arxiv-667283
2410.06425
Embedded State Estimation for Optimization of Cislunar Space Domain Awareness Constellation Design
<|reference_start|>Embedded State Estimation for Optimization of Cislunar Space Domain Awareness Constellation Design: The traffic in cislunar space is expected to increase over the coming years, leading to a higher likelihood of conjunction events among active satellites, orbital debris, and non-cooperative satellites. This increase necessitates enhanced space domain awareness (SDA) capabilities that include state estimation for targets of interest. Both Earth surface-based and space-based observation platforms in geosynchronous orbit or below face challenges such as range, exclusion, and occlusion that hinder observation. Motivated by the need to place space-based observers in the cislunar space regime to overcome these challenges, this paper proposes a cislunar SDA constellation design and analysis framework that integrates state estimation into an optimization problem for determining the placement of observers for optimal state estimation performance on a set of targets. The proposed multi-observer placement optimization problem samples from a range of possible target orbits. Upon convergence, the optimized constellation is validated against a broader set of targets to assess its effectiveness. Two comparative analyses are presented to evaluate the effects of changes in the sensor tasking procedure and sensor fidelity on the optimized constellation, comparing these to a single observer baseline case. The results demonstrate that the optimized constellations can provide accurate state estimation for various orbit families.<|reference_end|>
arxiv
@article{clareson2024embedded, title={Embedded State Estimation for Optimization of Cislunar Space Domain Awareness Constellation Design}, author={Thomas H. Clareson, Matthew C. Fox, Dominic K. Amato, Hang Woon Lee}, journal={arXiv preprint arXiv:2410.06425}, year={2024}, archivePrefix={arXiv}, eprint={2410.06425}, primaryClass={math.OC cs.SY eess.SY physics.space-ph} }
clareson2024embedded
arxiv-667284
2410.06427
NLP Case Study on Predicting the Before and After of the Ukraine-Russia and Hamas-Israel Conflicts
<|reference_start|>NLP Case Study on Predicting the Before and After of the Ukraine-Russia and Hamas-Israel Conflicts: We propose a method to predict toxicity and other textual attributes through the use of natural language processing (NLP) techniques for two recent events: the Ukraine-Russia and Hamas-Israel conflicts. This article provides a basis for exploration in future conflicts, in the hope of mitigating risk through the analysis of social media before and after a conflict begins. Our work compiles several datasets from Twitter and Reddit for both conflicts in a before-and-after separation, with the aim of predicting a future state of social media for avoidance. More specifically, we show that: (1) there is a noticeable difference in social media discussion leading up to and following a conflict and (2) social media discourse on platforms like Twitter and Reddit is useful in identifying future conflicts before they arise. Our results show that through the use of advanced NLP techniques (both supervised and unsupervised) toxicity and other attributes of language before and after a conflict are predictable with a low error of nearly 1.2 percent for both conflicts.<|reference_end|>
arxiv
@article{miner2024nlp, title={NLP Case Study on Predicting the Before and After of the Ukraine-Russia and Hamas-Israel Conflicts}, author={Jordan Miner and John E. Ortega}, journal={arXiv preprint arXiv:2410.06427}, year={2024}, archivePrefix={arXiv}, eprint={2410.06427}, primaryClass={cs.CL cs.AI cs.LG} }
miner2024nlp
arxiv-667285
2410.06428
Stress Detection on Code-Mixed Texts in Dravidian Languages using Machine Learning
<|reference_start|>Stress Detection on Code-Mixed Texts in Dravidian Languages using Machine Learning: Stress is a common feeling in daily life, but because it can affect mental well-being in some situations, the development of robust detection models is imperative. This study introduces a methodical approach to stress identification in code-mixed texts for Dravidian languages. The challenge encompassed two datasets, targeting the Tamil and Telugu languages respectively. This proposal underscores the importance of using uncleaned text as a benchmark to refine future classification methodologies, incorporating diverse preprocessing techniques. A Random Forest algorithm was used, featuring three textual representations: TF-IDF, uni-grams of words, and a composite of (1+2+3)-grams of characters. The approach performed well for both languages, achieving a Macro F1-score of 0.734 in Tamil and 0.727 in Telugu, surpassing results obtained with more complex techniques such as FastText and Transformer models. The results underscore the value of uncleaned data for mental state detection and the challenges of classifying code-mixed texts for stress, indicating the potential for improved performance through cleaning data, other preprocessing techniques, or more complex models.<|reference_end|>
arxiv
@article{ramos2024stress, title={Stress Detection on Code-Mixed Texts in Dravidian Languages using Machine Learning}, author={L. Ramos, M. Shahiki-Tash, Z. Ahani, A. Eponon, O. Kolesnikova, H. Calvo}, journal={arXiv preprint arXiv:2410.06428}, year={2024}, archivePrefix={arXiv}, eprint={2410.06428}, primaryClass={cs.CL cs.AI cs.HC cs.LG} }
ramos2024stress
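A minimal scikit-learn sketch of the pipeline described above, restricted to the word uni-gram TF-IDF representation with a Random Forest and macro-F1 scoring. The handful of code-mixed sentences and labels below are invented placeholders; the shared-task data, the character (1+2+3)-gram features, and any tuning are not reproduced.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Invented code-mixed examples; the real work uses labeled Tamil/Telugu data.
texts  = ["naan romba stressed today work la", "super happy da semma day",
          "tension ah irukku exam varuthu", "chala relax ga undi ivala",
          "deadline pressure thaanga mudiyala", "enjoying weekend with family"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = stressed, 0 = not stressed

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.33,
                                          random_state=0, stratify=labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)),       # word uni-grams
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X_tr, y_tr)
print("Macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```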
arxiv-667286
2410.06429
A QUBO Formulation for the Generalized LinkedIn Queens Game
<|reference_start|>A QUBO Formulation for the Generalized LinkedIn Queens Game: In this paper, we present a QUBO formulation designed to solve a series of generalisations of the LinkedIn queens game, a version of the N-queens problem. We adapt this formulation for several particular cases of the problem by trying to optimise the number of variables and interactions, improving the possibility of applying it on quantum hardware by means of Quantum Annealing or the Quantum Approximate Optimization Algorithm (QAOA). We also present two new types of problems, the Coloured Chess Piece Problem and the Max Chess Pieces Problem, with their corresponding QUBO formulations.<|reference_end|>
arxiv
@article{ali2024a, title={A QUBO Formulation for the Generalized LinkedIn Queens Game}, author={Alejandro Mata Ali and Edgar Mencia}, journal={arXiv preprint arXiv:2410.06429}, year={2024}, archivePrefix={arXiv}, eprint={2410.06429}, primaryClass={quant-ph cs.ET physics.pop-ph} }
ali2024a
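The abstract above does not reproduce its formulation, but the standard QUBO penalty construction for the classic N-queens problem conveys the flavour: "exactly one queen per row/column" becomes a quadratic penalty, and each diagonal clash adds a pairwise penalty. The weights A and B below are illustrative, and this is not the paper's generalized LinkedIn-queens encoding.

```python
import itertools
import numpy as np

def nqueens_qubo(n, A=2.0, B=1.0):
    """QUBO matrix Q for n-queens. x[i, j] = 1 puts a queen on row i,
    column j; the flat variable index is i * n + j."""
    N = n * n
    Q = np.zeros((N, N))
    idx = lambda i, j: i * n + j

    def add_exactly_one(cells):
        # Penalty A * (sum x - 1)^2, expanded into linear + pairwise terms.
        for u in cells:
            Q[u, u] -= A
        for u, v in itertools.combinations(cells, 2):
            Q[u, v] += 2 * A

    for i in range(n):                        # one queen per row
        add_exactly_one([idx(i, j) for j in range(n)])
    for j in range(n):                        # one queen per column
        add_exactly_one([idx(i, j) for i in range(n)])
    # At most one queen per diagonal: penalize co-occupied diagonal pairs.
    for (i1, j1), (i2, j2) in itertools.combinations(
            itertools.product(range(n), repeat=2), 2):
        if abs(i1 - i2) == abs(j1 - j2):
            Q[idx(i1, j1), idx(i2, j2)] += B
    return Q

Q = nqueens_qubo(4)

def energy(bits):
    v = np.array(bits)
    return v @ Q @ v

# Brute-force check on the tiny 4x4 board: a minimum-energy bitstring
# corresponds to a valid 4-queens placement.
best = min(itertools.product([0, 1], repeat=16), key=energy)
print(np.array(best).reshape(4, 4))
```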
arxiv-667287
2410.06431
Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
<|reference_start|>Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs: From common-sense reasoning to domain-specific tasks, parameter-efficient fine tuning (PEFT) methods for large language models (LLMs) have showcased significant performance improvements on downstream tasks. However, fine-tuned LLMs often struggle with overconfidence in uncertain predictions, particularly due to sparse training data. This overconfidence reflects poor epistemic uncertainty calibration, which arises from limitations in the model's ability to generalize with limited data. Existing PEFT uncertainty quantification methods for LLMs focus on the post fine-tuning stage and thus have limited capability in calibrating epistemic uncertainty. To address these limitations, we propose Functional-Level Uncertainty Quantification for Calibrated Fine-Tuning (UQ4CT), which captures and calibrates functional-level epistemic uncertainty during the fine-tuning stage via a mixture-of-expert framework. We show that UQ4CT reduces Expected Calibration Error (ECE) by more than $25\%$ while maintaining high accuracy across $5$ benchmarks. Furthermore, UQ4CT maintains superior ECE performance with high accuracy under distribution shift, showcasing improved generalizability.<|reference_end|>
arxiv
@article{niu2024functional-level, title={Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs}, author={Ruijia Niu, Dongxia Wu, Rose Yu, Yi-An Ma}, journal={arXiv preprint arXiv:2410.06431}, year={2024}, archivePrefix={arXiv}, eprint={2410.06431}, primaryClass={cs.LG} }
niu2024functional-level
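Expected Calibration Error, the metric quoted above, is straightforward to compute. Below is a standard equal-width-bin implementation with toy numbers for an overconfident model; it is generic, not code from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the |accuracy - confidence|
    gap per bin, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.array([0.95, 0.90, 0.92, 0.88, 0.97, 0.91])   # high confidence...
hit  = np.array([1,    0,    1,    0,    1,    0   ])   # ...mediocre accuracy
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```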
arxiv-667288
2410.06434
$\Gamma$-convergence of an Enhanced Finite Element Method for Mani\`a's Problem Exhibiting the Lavrentiev Gap Phenomenon
<|reference_start|>$\Gamma$-convergence of an Enhanced Finite Element Method for Mani\`a's Problem Exhibiting the Lavrentiev Gap Phenomenon: It is well-known that numerically approximating calculus of variations problems possessing a Lavrentiev Gap Phenomenon (LGP) is challenging, and the standard numerical methodologies such as finite element, finite difference, and discontinuous Galerkin methods fail to give convergent methods because they cannot overcome the gap. This paper is a continuation of a 2018 paper by Feng-Schnake, where a promising enhanced finite element method was proposed to overcome the LGP in the classical Mani\`a's problem. The goal of this paper is to provide a complete $\Gamma$-convergence proof for this enhanced finite element method, hence establishing a theoretical foundation for the method. The crux of the convergence analysis is the construction of a new finite element interpolant that helps to build a recovery sequence for proving a $\Gamma$-convergence result due to its strong approximation properties in Sobolev spaces. Numerical tests are also provided to verify the theoretical results.<|reference_end|>
arxiv
@article{feng2024$\gamma$-convergence, title={$\Gamma$-convergence of an Enhanced Finite Element Method for Mani\`a's Problem Exhibiting the Lavrentiev Gap Phenomenon}, author={Xiaobing H. Feng, Joshua M. Siktar}, journal={arXiv preprint arXiv:2410.06434}, year={2024}, archivePrefix={arXiv}, eprint={2410.06434}, primaryClass={math.NA cs.NA} }
feng2024$\gamma$-convergence
arxiv-667289
2410.06437
LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality
<|reference_start|>LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality: Understanding human locomotion is crucial for AI agents such as robots, particularly in complex indoor home environments. Modeling human trajectories in these spaces requires insight into how individuals maneuver around physical obstacles and manage social navigation dynamics. These dynamics include subtle behaviors influenced by proxemics - the social use of space, such as stepping aside to allow others to pass or choosing longer routes to avoid collisions. Previous research has developed datasets of human motion in indoor scenes, but these are often limited in scale and lack the nuanced social navigation dynamics common in home environments. To address this, we present LocoVR, a dataset of 7000+ two-person trajectories captured in virtual reality from over 130 different indoor home environments. LocoVR provides full body pose data and precise spatial information, along with rich examples of socially-motivated movement behaviors. For example, the dataset captures instances of individuals navigating around each other in narrow spaces, adjusting paths to respect personal boundaries in living areas, and coordinating movements in high-traffic zones like entryways and kitchens. Our evaluation shows that LocoVR significantly enhances model performance in three practical indoor tasks utilizing human trajectories, and demonstrates predicting socially-aware navigation patterns in home environments.<|reference_end|>
arxiv
@article{takeyama2024locovr:, title={LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality}, author={Kojiro Takeyama, Yimeng Liu, Misha Sra}, journal={arXiv preprint arXiv:2410.06437}, year={2024}, archivePrefix={arXiv}, eprint={2410.06437}, primaryClass={cs.RO cs.CV cs.HC} }
takeyama2024locovr:
arxiv-667290
2410.06438
Leroy: Library Learning for Imperative Programming Languages
<|reference_start|>Leroy: Library Learning for Imperative Programming Languages: Library learning is the process of building a library of common functionalities from a given set of programs. Typically, this process is applied in the context of aiding program synthesis: concise functions can help the synthesizer produce modularized code that is smaller in size. Previous work has focused on functional Lisp-like languages, as their regularity makes them more amenable to extracting repetitive structures. Our work introduces Leroy, which extends existing library learning techniques to imperative higher-level programming languages, with the goal of facilitating reusability and ease of maintenance. Leroy wraps the existing Stitch framework for library learning and converts imperative programs into a Lisp-like format using the AST. Our solution uses Stitch to do a top-down, corpus-guided extraction of repetitive expressions. Further, we prune abstractions that cannot be implemented in the programming language and convert the best abstractions back to the original language. We implement our technique in a tool for a subset of the Python programming language and evaluate it on a large corpus of programs. Leroy achieves a compression ratio of 1.04x of the original code base, with a slight expansion when the library is included. Additionally, we show that our technique prunes invalid abstractions.<|reference_end|>
arxiv
@article{bellur2024leroy:, title={Leroy: Library Learning for Imperative Programming Languages}, author={Abhiram Bellur, Razan Alghamdi, Kidus Workneh, Joseph Izraelevitz}, journal={arXiv preprint arXiv:2410.06438}, year={2024}, archivePrefix={arXiv}, eprint={2410.06438}, primaryClass={cs.PL} }
bellur2024leroy:
arxiv-667291
2410.06440
Checker Bug Detection and Repair in Deep Learning Libraries
<|reference_start|>Checker Bug Detection and Repair in Deep Learning Libraries: Checker bugs in Deep Learning (DL) libraries are critical yet not well-explored. These bugs are often concealed in the input validation and error-checking code of DL libraries and can lead to silent failures, incorrect results, or unexpected program behavior in DL applications. Despite their potential to significantly impact the reliability and performance of DL-enabled systems built with these libraries, checker bugs have received limited attention. We present the first comprehensive study of DL checker bugs in two widely-used DL libraries, i.e., TensorFlow and PyTorch. Initially, we automatically collected a dataset of 2,418 commits from TensorFlow and PyTorch repositories on GitHub from Sept. 2016 to Dec. 2023 using specific keywords related to checker bugs. Through manual inspection, we identified 527 DL checker bugs. Subsequently, we analyzed these bugs from three perspectives, i.e., root causes, symptoms, and fixing patterns. Using the knowledge gained via root cause analysis of checker bugs, we further propose TensorGuard, a proof-of-concept RAG-based LLM-based tool to detect and fix checker bugs in DL libraries via prompt engineering a series of ChatGPT prompts. We evaluated TensorGuard's performance on a test dataset that includes 92 buggy and 135 clean checker-related changes in TensorFlow and PyTorch from January 2024 to July 2024. Our results demonstrate that TensorGuard has high average recall (94.51\%) using Chain of Thought prompting, a balanced performance between precision and recall using Zero-Shot prompting and Few-Shot prompting strategies. In terms of patch generation, TensorGuard achieves an accuracy of 11.1\%, which outperforms the state-of-the-art bug repair baseline by 2\%. We have also applied TensorGuard on the latest six months' checker-related changes (493 changes) of the JAX library from Google, which resulted in the detection of 64 new checker bugs.<|reference_end|>
arxiv
@article{harzevili2024checker, title={Checker Bug Detection and Repair in Deep Learning Libraries}, author={Nima Shiri Harzevili, Mohammad Mahdi Mohajer, Jiho Shin, Moshi Wei, Gias Uddin, Jinqiu Yang, Junjie Wang, Song Wang, Zhen Ming (Jack) Jiang, Nachiappan Nagappan}, journal={arXiv preprint arXiv:2410.06440}, year={2024}, archivePrefix={arXiv}, eprint={2410.06440}, primaryClass={cs.SE} }
harzevili2024checker
arxiv-667292
2410.06441
Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
<|reference_start|>Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models: Fine-tuning language models (LMs) with the Adam optimizer often demands excessive memory, limiting accessibility. The "in-place" version of Stochastic Gradient Descent (IP-SGD) and Memory-Efficient Zeroth-order Optimizer (MeZO) have been proposed to address this. However, IP-SGD still requires substantial memory, and MeZO suffers from slow convergence and degraded final performance due to its zeroth-order nature. This paper introduces Addax, a novel method that improves both memory efficiency and performance of IP-SGD by integrating it with MeZO. Specifically, Addax computes zeroth- or first-order gradients of data points in the minibatch based on their memory consumption, combining these gradient estimates to update directions. By computing zeroth-order gradients for data points that require more memory and first-order gradients for others, Addax overcomes the slow convergence of MeZO and the excessive memory requirement of IP-SGD. Additionally, the zeroth-order gradient acts as a regularizer for the first-order gradient, further enhancing the model's final performance. Theoretically, we establish the convergence of Addax under mild assumptions, demonstrating faster convergence and less restrictive hyper-parameter choices than MeZO. Our experiments with diverse LMs and tasks show that Addax consistently outperforms MeZO regarding accuracy and convergence speed while having a comparable memory footprint. When fine-tuning OPT-13B with one A100 GPU, on average, Addax outperforms MeZO in accuracy/F1 score by 14% and runs 15x faster while using memory similar to MeZO. In our experiments on the larger OPT-30B model, on average, Addax outperforms MeZO in terms of accuracy/F1 score by >16 and runs 30x faster on a single H100 GPU. Moreover, Addax surpasses the performance of standard fine-tuning approaches, such as IP-SGD and Adam, in most tasks with significantly less memory requirement.<|reference_end|>
arxiv
@article{li2024addax:, title={Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models}, author={Zeman Li, Xinwei Zhang, Peilin Zhong, Yuan Deng, Meisam Razaviyayn, Vahab Mirrokni}, journal={arXiv preprint arXiv:2410.06441}, year={2024}, archivePrefix={arXiv}, eprint={2410.06441}, primaryClass={cs.LG cs.CL} }
li2024addax:
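A toy numpy sketch of the core idea above: use exact (first-order) gradients for part of the minibatch and a two-point zeroth-order probe (MeZO-style) for the rest, then mix the two estimates. The quadratic problem, the 50/50 split, and the step sizes are assumptions for illustration; this is not the Addax implementation and says nothing about its memory accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    return 0.5 * np.mean((x @ w - y) ** 2)

def first_order_grad(w, x, y):
    return x.T @ (x @ w - y) / len(y)

def zeroth_order_grad(w, x, y, mu=1e-3):
    # Two-point estimate along a single random direction.
    z = rng.normal(size=w.shape)
    g = (loss(w + mu * z, x, y) - loss(w - mu * z, x, y)) / (2 * mu)
    return g * z

X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
Y = X @ w_true
w = np.zeros(10)

for step in range(500):
    batch = rng.choice(len(Y), size=32, replace=False)
    fo, zo = batch[:16], batch[16:]   # pretend the zo half is "too big" for backprop
    g = (0.5 * first_order_grad(w, X[fo], Y[fo])
         + 0.5 * zeroth_order_grad(w, X[zo], Y[zo]))
    w -= 0.05 * g

print("final loss:", loss(w, X, Y))
```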
arxiv-667293
2410.06442
MaD-Scientist: AI-based Scientist solving Convection-Diffusion-Reaction Equations Using Massive PINN-Based Prior Data
<|reference_start|>MaD-Scientist: AI-based Scientist solving Convection-Diffusion-Reaction Equations Using Massive PINN-Based Prior Data: Large language models (LLMs), like ChatGPT, have shown that, even when trained with noisy prior data, they can generalize effectively to new tasks through in-context learning (ICL) and pre-training techniques. Motivated by this, we explore whether a similar approach can be applied to scientific foundation models (SFMs). Our methodology is structured as follows: (i) we collect low-cost physics-informed neural network (PINN)-based approximated prior data in the form of solutions to partial differential equations (PDEs) constructed through an arbitrary linear combination of mathematical dictionaries; (ii) we utilize Transformer architectures with self and cross-attention mechanisms to predict PDE solutions without knowledge of the governing equations in a zero-shot setting; (iii) we provide experimental evidence on the one-dimensional convection-diffusion-reaction equation, which demonstrates that pre-training remains robust even with approximated prior data, with only marginal impacts on test accuracy. Notably, this finding opens the path to pre-training SFMs with realistic, low-cost data instead of (or in conjunction with) numerical high-cost data. These results support the conjecture that SFMs can improve in a manner similar to LLMs, where fully cleaning the vast set of sentences crawled from the Internet is nearly impossible.<|reference_end|>
arxiv
@article{kang2024mad-scientist:, title={MaD-Scientist: AI-based Scientist solving Convection-Diffusion-Reaction Equations Using Massive PINN-Based Prior Data}, author={Mingu Kang, Dongseok Lee, Woojin Cho, Jaehyeon Park, Kookjin Lee, Anthony Gruber, Youngjoon Hong, Noseong Park}, journal={arXiv preprint arXiv:2410.06442}, year={2024}, archivePrefix={arXiv}, eprint={2410.06442}, primaryClass={cs.LG cs.AI} }
kang2024mad-scientist:
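As a rough illustration of the prior-data setting described in the abstract above, the sketch below evaluates a finite-difference residual of a generic 1D convection-diffusion-reaction equation u_t + c u_x = nu u_xx + R(u), which could be used to check approximate PINN-generated solutions on a grid. The coefficients and the reaction term are placeholders, not the paper's dictionary-based construction.

```python
import numpy as np

def cdr_residual(u, dx, dt, c=1.0, nu=0.1, reaction=lambda v: np.zeros_like(v)):
    # u has shape (n_t, n_x) on a uniform grid; forward difference in time,
    # central differences in space. All coefficients are illustrative defaults.
    u_t  = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_x  = (u[:-1, 2:] - u[:-1, :-2]) / (2.0 * dx)
    u_xx = (u[:-1, 2:] - 2.0 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    return u_t + c * u_x - nu * u_xx - reaction(u[:-1, 1:-1])
```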
arxiv-667294
2410.06443
Categorizing Social Media Screenshots for Identifying Author Misattribution
<|reference_start|>Categorizing Social Media Screenshots for Identifying Author Misattribution: Mis/disinformation is a common and dangerous occurrence on social media. Misattribution is a form of mis/disinformation that deals with a false claim of authorship, which means a user is claiming someone said (posted) something they never did. We discuss the difference between misinformation and disinformation and how screenshots are used to spread author misattribution on social media platforms. It is important to be able to find the original post of a screenshot to determine if the screenshot is being correctly attributed. To do this we have built several tools to aid in automating this search process. The first is a Python script that aims to categorize Twitter posts based on their structure, extract the metadata from a screenshot, and use this data to group all the posts within a screenshot together. We tested this process on 75 hand-collected Twitter posts containing screenshots to determine how well the script extracted metadata and grouped the individual posts, achieving F1 = 0.80. The second is a series of scrapers used to collect a dataset for training and testing a model to differentiate between various social media platforms. We have collected 16,620 screenshots from Facebook, Instagram, Truth Social, and Twitter, taken by the scrapers from the web and mobile versions of each platform in both light and dark mode.<|reference_end|>
arxiv
@article{farris2024categorizing, title={Categorizing Social Media Screenshots for Identifying Author Misattribution}, author={Ashlyn M. Farris, Michael L. Nelson}, journal={arXiv preprint arXiv:2410.06443}, year={2024}, archivePrefix={arXiv}, eprint={2410.06443}, primaryClass={cs.IR} }
farris2024categorizing
arxiv-667295
2410.06444
Multi-label Classification for Android Malware Based on Active Learning
<|reference_start|>Multi-label Classification for Android Malware Based on Active Learning: The existing malware classification approaches (i.e., binary and family classification) can barely benefit subsequent analysis with their outputs. Even the family classification approaches suffer from the lack of a formal naming standard and an incomplete definition of malicious behaviors. More importantly, the existing approaches are powerless when a single malware sample exhibits multiple malicious behaviors, even though this is a very common phenomenon for Android malware in the wild. As a result, neither of them can provide researchers with a direct and sufficiently comprehensive understanding of malware. In this paper, we propose MLCDroid, an ML-based multi-label classification approach that can directly indicate the existence of pre-defined malicious behaviors. With an in-depth analysis, we summarize six basic malicious behaviors from real-world malware with security reports and construct a labeled dataset. We compare the results of 70 algorithm combinations to evaluate the effectiveness (best at 73.3%). Faced with the challenge of the expensive cost of data annotation, we further propose an active learning approach based on data augmentation, which can improve the overall accuracy to 86.7% with a data augmentation of 5,000+ high-quality samples from an unlabeled malware dataset. This is the first multi-label Android malware classification approach intended to provide more information on fine-grained malicious behaviors.<|reference_end|>
arxiv
@article{qiao2024multi-label, title={Multi-label Classification for Android Malware Based on Active Learning}, author={Qijing Qiao, Ruitao Feng, Sen Chen, Fei Zhang, Xiaohong Li}, journal={arXiv preprint arXiv:2410.06444}, year={2024}, doi={10.1109/TDSC.2022.3213689}, archivePrefix={arXiv}, eprint={2410.06444}, primaryClass={cs.CR} }
qiao2024multi-label
arxiv-667296
2410.06446
Machine Unlearning in Forgettability Sequence
<|reference_start|>Machine Unlearning in Forgettability Sequence: Machine unlearning (MU) is becoming a promising paradigm to achieve the "right to be forgotten", where the training trace of any chosen data points could be eliminated, while maintaining the model utility on general testing samples after unlearning. With the advancement of forgetting research, many fundamental open questions remain unanswered: do different samples exhibit varying levels of difficulty in being forgotten? Further, does the sequence in which samples are forgotten, determined by their respective difficulty levels, influence the performance of forgetting algorithms? In this paper, we identify key factors affecting unlearning difficulty and the performance of unlearning algorithms. We find that samples with higher privacy risks are more likely to be unlearned, indicating that unlearning difficulty varies among samples, which motivates a more precise unlearning mode. Built upon this insight, we propose a general unlearning framework, dubbed RSU, which consists of a Ranking module and a SeqUnlearn module.<|reference_end|>
arxiv
@article{chen2024machine, title={Machine Unlearning in Forgettability Sequence}, author={Junjie Chen, Qian Chen, Jian Lou, Xiaoyu Zhang, Kai Wu, Zilong Wang}, journal={arXiv preprint arXiv:2410.06446}, year={2024}, archivePrefix={arXiv}, eprint={2410.06446}, primaryClass={cs.LG cs.CV} }
chen2024machine
arxiv-667297
2410.06452
Modeling chaotic Lorenz ODE System using Scientific Machine Learning
<|reference_start|>Modeling chaotic Lorenz ODE System using Scientific Machine Learning: In climate science, models for global warming and weather prediction face significant challenges due to the limited availability of high-quality data and the difficulty in obtaining it, making data efficiency crucial. In the past few years, Scientific Machine Learning (SciML) models have gained tremendous traction as they can be trained in a data-efficient manner, making them highly suitable for real-world climate applications. Despite this, very little attention has been paid to chaotic climate system modeling utilizing SciML methods. In this paper, we have integrated SciML methods into foundational weather models, where we have enhanced large-scale climate predictions with a physics-informed approach that achieves high accuracy with reduced data. We successfully demonstrate that by combining the interpretability of physical climate models with the computational power of neural networks, SciML models can prove to be a reliable tool for modeling climate. This indicates a shift from the traditional black box-based machine learning modeling of climate systems to physics-informed decision-making, leading to effective climate policy implementation.<|reference_end|>
arxiv
@article{kashyap2024modeling, title={Modeling chaotic Lorenz ODE System using Scientific Machine Learning}, author={Sameera S Kashyap, Raj Abhijit Dandekar, Rajat Dandekar, Sreedath Panat}, journal={arXiv preprint arXiv:2410.06452}, year={2024}, archivePrefix={arXiv}, eprint={2410.06452}, primaryClass={cs.LG cs.AI} }
kashyap2024modeling
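For reference, the chaotic system named in the title above is the classical Lorenz ODE. The snippet below integrates it with SciPy to generate the kind of trajectory data a SciML surrogate would be fit to; the time span, initial condition, and sampling density are arbitrary choices, not the paper's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classical Lorenz system with the standard chaotic parameter values.
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span, y0 = (0.0, 25.0), [1.0, 1.0, 1.0]
sol = solve_ivp(lorenz, t_span, y0, t_eval=np.linspace(*t_span, 2500))
trajectory = sol.y.T  # shape (2500, 3): reference data for a learned surrogate
```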
arxiv-667298
2410.06453
Challenges of the QWERTY Keyboard for Quechua Speakers in the Puno Region in Per\'u
<|reference_start|>Challenges of the QWERTY Keyboard for Quechua Speakers in the Puno Region in Per\'u: The widespread adoption of the QWERTY keyboard layout, designed primarily for English, presents significant challenges for speakers of indigenous languages such as Quechua, particularly in the Puno region of Peru. This research examines the extent to which the QWERTY layout affects the writing and digital communication of Quechua speakers. Through an analysis of the Quechua language's unique alphabet and character frequency, combined with insights from local speakers, we identify the limitations imposed by the QWERTY system on the efficient digital transcription of Quechua. The study further proposes alternative keyboard layouts, including optimizations of QWERTY and DVORAK, designed to enhance typing efficiency and reduce the digital divide for Quechua speakers. Our findings underscore the need for localized technological solutions to preserve linguistic diversity while improving digital literacy for indigenous communities. The proposed modifications offer a pathway toward more inclusive digital tools that respect and accommodate linguistic diversity.<|reference_end|>
arxiv
@article{juarez-vargas2024challenges, title={Challenges of the QWERTY Keyboard for Quechua Speakers in the Puno Region in Per\'u}, author={Henry Juarez-Vargas, Roger Mijael Mansilla-Huanacuni, Fred Torres-Cruz}, journal={arXiv preprint arXiv:2410.06453}, year={2024}, archivePrefix={arXiv}, eprint={2410.06453}, primaryClass={cs.HC} }
juarez-vargas2024challenges
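The character-frequency analysis mentioned in the abstract above can be approximated in a few lines of Python; the sketch below counts relative letter frequencies in a text sample, which is the kind of statistic a layout optimizer would consume. Treating Quechua digraphs such as "ch" and "ll" letter-by-letter is a simplification, and the sample string is only a placeholder.

```python
from collections import Counter

def letter_frequencies(text):
    # Relative letter frequencies of a text sample; layout design typically
    # places the most frequent symbols under the strongest fingers.
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    total = sum(counts.values()) or 1
    return {ch: n / total for ch, n in counts.most_common()}

print(letter_frequencies("Allin p'unchay kachun"))  # tiny illustrative sample
```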
arxiv-667299
2410.06454
Efficient Coordination for Distributed Discrete-Event Systems
<|reference_start|>Efficient Coordination for Distributed Discrete-Event Systems: Timing control while preserving determinism is often a key requirement for ensuring the safety and correctness of distributed cyber-physical systems (CPS). Discrete-event (DE) systems provide a suitable model of computation (MoC) for time-sensitive distributed CPS. The high-level architecture (HLA) is a useful tool for the distributed simulation of DE systems, but its techniques can be adapted for implementing distributed CPS. However, HLA incurs considerable overhead in network messages conveying timing information between the distributed nodes and the centralized run-time infrastructure (RTI). This paper gives a novel approach and implementation that reduces such network messages while preserving DE semantics. An evaluation of our runtime demonstrates that our approach significantly reduces the volume of messages for timing information in HLA.<|reference_end|>
arxiv
@article{jun2024efficient, title={Efficient Coordination for Distributed Discrete-Event Systems}, author={Byeonggil Jun, Edward A. Lee, Marten Lohstroh, Hokeun Kim}, journal={arXiv preprint arXiv:2410.06454}, year={2024}, archivePrefix={arXiv}, eprint={2410.06454}, primaryClass={cs.DC cs.SY eess.SY} }
jun2024efficient
arxiv-667300
2410.06455
An efficient proximal-based approach for solving nonlocal Allen-Cahn equations
<|reference_start|>An efficient proximal-based approach for solving nonlocal Allen-Cahn equations: In this work, we present an efficient approach for the spatial and temporal discretization of the nonlocal Allen-Cahn equation, which incorporates various double-well potentials and an integrable kernel, with a particular focus on a non-smooth obstacle potential. While nonlocal models offer enhanced flexibility for complex phenomena, they often lead to increased computational costs, and there is a need to design efficient spatial and temporal discretization schemes, especially in the non-smooth setting. To address this, we propose first- and second-order energy-stable time-stepping schemes combined with the Fourier collocation approach for spatial discretization. We provide energy stability estimates for the developed time-stepping schemes. A key aspect of our approach is a representation of the solution via proximal operators. This, together with the spatial and temporal discretizations, enables a direct evaluation of the solution that bypasses solving a nonlinear, non-smooth, and nonlocal system. This method significantly improves computational efficiency, especially in the case of non-smooth obstacle potentials, and facilitates rapid solution evaluations in both two and three dimensions. We provide several numerical experiments to illustrate the effectiveness of our approach.<|reference_end|>
arxiv
@article{burkovska2024an, title={An efficient proximal-based approach for solving nonlocal Allen-Cahn equations}, author={Olena Burkovska and Ilyas Mustapha}, journal={arXiv preprint arXiv:2410.06455}, year={2024}, archivePrefix={arXiv}, eprint={2410.06455}, primaryClass={math.NA cs.NA} }
burkovska2024an
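To illustrate the proximal-operator viewpoint in the abstract above, the sketch below gives pointwise proximal maps for the non-smooth obstacle case: the projection onto [-1, 1] and the prox of the obstacle double-well F(u) = (1/2)(1 - u^2) + indicator_{[-1,1]}(u). The closed form and the step restriction lam < 1 come from a standard one-dimensional calculation and need not coincide with the paper's actual scheme.

```python
import numpy as np

def prox_box(v, lo=-1.0, hi=1.0):
    # Prox of the indicator of [lo, hi]: a pointwise projection (clipping).
    return np.clip(v, lo, hi)

def prox_obstacle_double_well(v, lam):
    # Prox of 0.5 * (1 - u**2) + indicator_{[-1, 1]}(u) with step lam < 1:
    # setting the derivative of 0.5 * (u - v)**2 / lam - 0.5 * u**2 to zero
    # gives u = v / (1 - lam), then project onto [-1, 1].
    assert 0.0 < lam < 1.0
    return prox_box(np.asarray(v) / (1.0 - lam))

values = prox_obstacle_double_well(np.array([-2.0, -0.3, 0.1, 1.7]), lam=0.5)
# -> [-1.0, -0.6, 0.2, 1.0]
```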