corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses, 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100)
---|---|---|---|---|---|---|
arxiv-664701 | 2410.01848 | Spatial Action Unit Cues for Interpretable Deep Facial Expression Recognition | <|reference_start|>Spatial Action Unit Cues for Interpretable Deep Facial Expression Recognition: Although state-of-the-art classifiers for facial expression recognition (FER) can achieve a high level of accuracy, they lack interpretability, an important feature for end-users. Experts typically associate spatial action units (AUs) from a codebook to facial regions for the visual interpretation of expressions. In this paper, the same expert steps are followed. A new learning strategy is proposed to explicitly incorporate AU cues into classifier training, allowing to train deep interpretable models. During training, this AU codebook is used, along with the input image expression label, and facial landmarks, to construct a AU heatmap that indicates the most discriminative image regions of interest w.r.t the facial expression. This valuable spatial cue is leveraged to train a deep interpretable classifier for FER. This is achieved by constraining the spatial layer features of a classifier to be correlated with AU heatmaps. Using a composite loss, the classifier is trained to correctly classify an image while yielding interpretable visual layer-wise attention correlated with AU maps, simulating the expert decision process. Our strategy only relies on image class expression for supervision, without additional manual annotations. Our new strategy is generic, and can be applied to any deep CNN- or transformer-based classifier without requiring any architectural change or significant additional training time. Our extensive evaluation on two public benchmarks RAF-DB, and AffectNet datasets shows that our proposed strategy can improve layer-wise interpretability without degrading classification performance. In addition, we explore a common type of interpretable classifiers that rely on class activation mapping (CAM) methods, and show that our approach can also improve CAM interpretability.<|reference_end|> | arxiv | @article{belharbi2024spatial,
title={Spatial Action Unit Cues for Interpretable Deep Facial Expression
Recognition},
author={Soufiane Belharbi, Marco Pedersoli, Alessandro Lameiras Koerich, Simon
Bacon, Eric Granger},
journal={arXiv preprint arXiv:2410.01848},
year={2024},
archivePrefix={arXiv},
eprint={2410.01848},
primaryClass={cs.CV cs.LG}
} | belharbi2024spatial |
arxiv-664702 | 2410.01850 | An Early-Stage Workflow Proposal for the Generation of Safe and Dependable AI Classifiers | <|reference_start|>An Early-Stage Workflow Proposal for the Generation of Safe and Dependable AI Classifiers: The generation and execution of qualifiable safe and dependable AI models, necessitates definition of a transparent, complete yet adaptable and preferably lightweight workflow. Given the rapidly progressing domain of AI research and the relative immaturity of the safe-AI domain the process stability upon which functionally safety developments rest must be married with some degree of adaptability. This early-stage work proposes such a workflow basing it on a an extended ONNX model description. A use case provides one foundations of this body of work which we expect to be extended by other, third party use-cases.<|reference_end|> | arxiv | @article{doran2024an,
title={An Early-Stage Workflow Proposal for the Generation of Safe and
Dependable AI Classifiers},
author={Hans Dermot Doran, Suzana Veljanovska},
journal={arXiv preprint arXiv:2410.01850},
year={2024},
archivePrefix={arXiv},
eprint={2410.01850},
primaryClass={cs.LG}
} | doran2024an |
arxiv-664703 | 2410.01853 | Recovering Time-Varying Networks From Single-Cell Data | <|reference_start|>Recovering Time-Varying Networks From Single-Cell Data: Gene regulation is a dynamic process that underlies all aspects of human development, disease response, and other key biological processes. The reconstruction of temporal gene regulatory networks has conventionally relied on regression analysis, graphical models, or other types of relevance networks. With the large increase in time series single-cell data, new approaches are needed to address the unique scale and nature of this data for reconstructing such networks. Here, we develop a deep neural network, Marlene, to infer dynamic graphs from time series single-cell gene expression data. Marlene constructs directed gene networks using a self-attention mechanism where the weights evolve over time using recurrent units. By employing meta learning, the model is able to recover accurate temporal networks even for rare cell types. In addition, Marlene can identify gene interactions relevant to specific biological responses, including COVID-19 immune response, fibrosis, and aging.<|reference_end|> | arxiv | @article{hasanaj2024recovering,
title={Recovering Time-Varying Networks From Single-Cell Data},
author={Euxhen Hasanaj, Barnab\'as P\'oczos, Ziv Bar-Joseph},
journal={arXiv preprint arXiv:2410.01853},
year={2024},
archivePrefix={arXiv},
eprint={2410.01853},
primaryClass={q-bio.QM cs.LG}
} | hasanaj2024recovering |
arxiv-664704 | 2410.01854 | A Novel Feature Extraction Model for the Detection of Plant Disease from Leaf Images in Low Computational Devices | <|reference_start|>A Novel Feature Extraction Model for the Detection of Plant Disease from Leaf Images in Low Computational Devices: Diseases in plants cause significant danger to productive and secure agriculture. Plant diseases can be detected early and accurately, reducing crop losses and pesticide use. Traditional methods of plant disease identification, on the other hand, are generally time-consuming and require professional expertise. It would be beneficial to the farmers if they could detect the disease quickly by taking images of the leaf directly. This will be a time-saving process and they can take remedial actions immediately. To achieve this a novel feature extraction approach for detecting tomato plant illnesses from leaf photos using low-cost computing systems such as mobile phones is proposed in this study. The proposed approach integrates various types of Deep Learning techniques to extract robust and discriminative features from leaf images. After the proposed feature extraction comparisons have been made on five cutting-edge deep learning models: AlexNet, ResNet50, VGG16, VGG19, and MobileNet. The dataset contains 10,000 leaf photos from ten classes of tomato illnesses and one class of healthy leaves. Experimental findings demonstrate that AlexNet has an accuracy score of 87%, with the benefit of being quick and lightweight, making it appropriate for use on embedded systems and other low-processing devices like smartphones.<|reference_end|> | arxiv | @article{pal2024a,
title={A Novel Feature Extraction Model for the Detection of Plant Disease from
Leaf Images in Low Computational Devices},
author={Rikathi Pal and Anik Basu Bhaumik and Arpan Murmu and Sanoar Hossain
and Biswajit Maity and Soumya Sen},
journal={arXiv preprint arXiv:2410.01854},
year={2024},
archivePrefix={arXiv},
eprint={2410.01854},
primaryClass={eess.IV cs.CV}
} | pal2024a |
arxiv-664705 | 2410.01855 | Explainable Diagnosis Prediction through Neuro-Symbolic Integration | <|reference_start|>Explainable Diagnosis Prediction through Neuro-Symbolic Integration: Diagnosis prediction is a critical task in healthcare, where timely and accurate identification of medical conditions can significantly impact patient outcomes. Traditional machine learning and deep learning models have achieved notable success in this domain but often lack interpretability which is a crucial requirement in clinical settings. In this study, we explore the use of neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction. Essentially, we design and implement LNN-based models that integrate domain-specific knowledge through logical rules with learnable thresholds. Our models, particularly $M_{\text{multi-pathway}}$ and $M_{\text{comprehensive}}$, demonstrate superior performance over traditional models such as Logistic Regression, SVM, and Random Forest, achieving higher accuracy (up to 80.52\%) and AUROC scores (up to 0.8457) in the case study of diabetes prediction. The learned weights and thresholds within the LNN models provide direct insights into feature contributions, enhancing interpretability without compromising predictive power. These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications. By offering transparent and adaptable diagnostic models, our work contributes to the advancement of precision medicine and supports the development of equitable healthcare solutions. Future research will focus on extending these methods to larger and more diverse datasets to further validate their applicability across different medical conditions and populations.<|reference_end|> | arxiv | @article{lu2024explainable,
title={Explainable Diagnosis Prediction through Neuro-Symbolic Integration},
author={Qiuhao Lu, Rui Li, Elham Sagheb, Andrew Wen, Jinlian Wang, Liwei Wang,
Jungwei W. Fan, Hongfang Liu},
journal={arXiv preprint arXiv:2410.01855},
year={2024},
archivePrefix={arXiv},
eprint={2410.01855},
primaryClass={cs.LG cs.AI}
} | lu2024explainable |
arxiv-664706 | 2410.01857 | Learning the Optimal Path and DNN Partition for Collaborative Edge Inference | <|reference_start|>Learning the Optimal Path and DNN Partition for Collaborative Edge Inference: Recent advancements in Deep Neural Networks (DNNs) have catalyzed the development of numerous intelligent mobile applications and services. However, they also introduce significant computational challenges for resource-constrained mobile devices. To address this, collaborative edge inference has been proposed. This method involves partitioning a DNN inference task into several subtasks and distributing these across multiple network nodes. Despite its potential, most current approaches presume known network parameters -- like node processing speeds and link transmission rates -- or rely on a fixed sequence of nodes for processing the DNN subtasks. In this paper, we tackle a more complex scenario where network parameters are unknown and must be learned, and multiple network paths are available for distributing inference tasks. Specifically, we explore the learning problem of selecting the optimal network path and assigning DNN layers to nodes along this path, considering potential security threats and the costs of switching paths. We begin by deriving structural insights from the DNN layer assignment with complete network information, which narrows down the decision space and provides crucial understanding of optimal assignments. We then cast the learning problem with incomplete network information as a novel adversarial group linear bandits problem with switching costs, featuring rewards generation through a combined stochastic and adversarial process. We introduce a new bandit algorithm, B-EXPUCB, which combines elements of the classical blocked EXP3 and LinUCB algorithms, and demonstrate its sublinear regret. Extensive simulations confirm B-EXPUCB's superior performance in learning for collaborative edge inference over existing algorithms.<|reference_end|> | arxiv | @article{huang2024learning,
title={Learning the Optimal Path and DNN Partition for Collaborative Edge
Inference},
author={Yin Huang, Letian Zhang, Jie Xu},
journal={arXiv preprint arXiv:2410.01857},
year={2024},
archivePrefix={arXiv},
eprint={2410.01857},
primaryClass={cs.LG cs.DC}
} | huang2024learning |
arxiv-664707 | 2410.01858 | Long-range gene expression prediction with token alignment of large language model | <|reference_start|>Long-range gene expression prediction with token alignment of large language model: Gene expression is a cellular process that plays a fundamental role in human phenotypical variations and diseases. Despite advances of deep learning models for gene expression prediction, recent benchmarks have revealed their inability to learn distal regulatory grammar. Here, we address this challenge by leveraging a pretrained large language model to enhance gene expression prediction. We introduce Genetic sequence Token Alignment (GTA), which aligns genetic sequence features with natural language tokens, allowing for symbolic reasoning of genomic sequence features via the frozen language model. This cross-modal adaptation learns the regulatory grammar and allows us to further incorporate gene-specific human annotations as prompts, enabling in-context learning that is not possible with existing models. Trained on lymphoblastoid cells, GTA was evaluated on cells from the Geuvadis consortium and outperforms state-of-the-art models such as Enformer, achieving a Spearman correlation of 0.65, a 10\% improvement. Additionally, GTA offers improved interpretation of long-range interactions through the identification of the most meaningful sections of the input genetic context. GTA represents a powerful and novel cross-modal approach to gene expression prediction by utilizing a pretrained language model, in a paradigm shift from conventional gene expression models trained only on sequence data.<|reference_end|> | arxiv | @article{honig2024long-range,
title={Long-range gene expression prediction with token alignment of large
language model},
author={Edouardo Honig, Huixin Zhan, Ying Nian Wu, Zijun Frank Zhang},
journal={arXiv preprint arXiv:2410.01858},
year={2024},
archivePrefix={arXiv},
eprint={2410.01858},
primaryClass={q-bio.CB cs.LG q-bio.GN}
} | honig2024long-range |
arxiv-664708 | 2410.01859 | Enhancing End Stage Renal Disease Outcome Prediction: A Multi-Sourced Data-Driven Approach | <|reference_start|>Enhancing End Stage Renal Disease Outcome Prediction: A Multi-Sourced Data-Driven Approach: Objective: To improve prediction of Chronic Kidney Disease (CKD) progression to End Stage Renal Disease (ESRD) using machine learning (ML) and deep learning (DL) models applied to an integrated clinical and claims dataset of varying observation windows, supported by explainable AI (XAI) to enhance interpretability and reduce bias. Materials and Methods: We utilized data about 10,326 CKD patients, combining their clinical and claims information from 2009 to 2018. Following data preprocessing, cohort identification, and feature engineering, we evaluated multiple statistical, ML and DL models using data extracted from five distinct observation windows. Feature importance and Shapley value analysis were employed to understand key predictors. Models were tested for robustness, clinical relevance, misclassification errors and bias issues. Results: Integrated data models outperformed those using single data sources, with the Long Short-Term Memory (LSTM) model achieving the highest AUC (0.93) and F1 score (0.65). A 24-month observation window was identified as optimal for balancing early detection and prediction accuracy. The 2021 eGFR equation improved prediction accuracy and reduced racial bias, notably for African American patients. Discussion: Improved ESRD prediction accuracy, results interpretability and bias mitigation strategies presented in this study have the potential to significantly enhance CKD and ESRD management, support targeted early interventions and reduce healthcare disparities. Conclusion: This study presents a robust framework for predicting ESRD outcomes in CKD patients, improving clinical decision-making and patient care through multi-sourced, integrated data and AI/ML methods. Future research will expand data integration and explore the application of this framework to other chronic diseases.<|reference_end|> | arxiv | @article{li2024enhancing,
title={Enhancing End Stage Renal Disease Outcome Prediction: A Multi-Sourced
Data-Driven Approach},
author={Yubo Li, Rema Padman},
journal={arXiv preprint arXiv:2410.01859},
year={2024},
archivePrefix={arXiv},
eprint={2410.01859},
primaryClass={q-bio.QM cs.LG}
} | li2024enhancing |
arxiv-664709 | 2410.01860 | FredNormer: Frequency Domain Normalization for Non-stationary Time Series Forecasting | <|reference_start|>FredNormer: Frequency Domain Normalization for Non-stationary Time Series Forecasting: Recent normalization-based methods have shown great success in tackling the distribution shift issue, facilitating non-stationary time series forecasting. Since these methods operate in the time domain, they may fail to fully capture the dynamic patterns that are more apparent in the frequency domain, leading to suboptimal results. This paper first theoretically analyzes how normalization methods affect frequency components. We prove that the current normalization methods that operate in the time domain uniformly scale non-zero frequencies, and thus, they struggle to determine components that contribute to more robust forecasting. Therefore, we propose FredNormer, which observes datasets from a frequency perspective and adaptively up-weights the key frequency components. To this end, FredNormer consists of two components: a statistical metric that normalizes the input samples based on their frequency stability and a learnable weighting layer that adjusts stability and introduces sample-specific variations. Notably, FredNormer is a plug-and-play module, which does not compromise the efficiency compared to existing normalization methods. Extensive experiments show that FredNormer improves the averaged MSE of backbone forecasting models by 33.3% and 55.3% on the ETTm2 dataset. Compared to the baseline normalization methods, FredNormer achieves 18 top-1 results and 6 top-2 results out of 28 settings.<|reference_end|> | arxiv | @article{piao2024frednormer:,
title={FredNormer: Frequency Domain Normalization for Non-stationary Time
Series Forecasting},
author={Xihao Piao, Zheng Chen, Yushun Dong, Yasuko Matsubara, Yasushi Sakurai},
journal={arXiv preprint arXiv:2410.01860},
year={2024},
archivePrefix={arXiv},
eprint={2410.01860},
primaryClass={stat.ML cs.LG}
} | piao2024frednormer: |
arxiv-664710 | 2410.01861 | OCC-MLLM-Alpha:Empowering Multi-modal Large Language Model for the Understanding of Occluded Objects with Self-Supervised Test-Time Learning | <|reference_start|>OCC-MLLM-Alpha:Empowering Multi-modal Large Language Model for the Understanding of Occluded Objects with Self-Supervised Test-Time Learning: There is a gap in the understanding of occluded objects in existing large-scale visual language multi-modal models. Current state-of-the-art multi-modal models fail to provide satisfactory results in describing occluded objects through universal visual encoders and supervised learning strategies. Therefore, we introduce a multi-modal large language framework and corresponding self-supervised learning strategy with support of 3D generation. We start our experiments comparing with the state-of-the-art models in the evaluation of a large-scale dataset SOMVideo [18]. The initial results demonstrate the improvement of 16.92% in comparison with the state-of-the-art VLM models.<|reference_end|> | arxiv | @article{yang2024occ-mllm-alpha:empowering,
title={OCC-MLLM-Alpha:Empowering Multi-modal Large Language Model for the
Understanding of Occluded Objects with Self-Supervised Test-Time Learning},
author={Shuxin Yang, Xinhan Di},
journal={arXiv preprint arXiv:2410.01861},
year={2024},
archivePrefix={arXiv},
eprint={2410.01861},
primaryClass={cs.CV}
} | yang2024occ-mllm-alpha:empowering |
arxiv-664711 | 2410.01864 | Dynamic Portfolio Rebalancing: A Hybrid new Model Using GNNs and Pathfinding for Cost Efficiency | <|reference_start|>Dynamic Portfolio Rebalancing: A Hybrid new Model Using GNNs and Pathfinding for Cost Efficiency: This paper introduces a novel approach to optimizing portfolio rebalancing by integrating Graph Neural Networks (GNNs) for predicting transaction costs and Dijkstra's algorithm for identifying cost-efficient rebalancing paths. Using historical stock data from prominent technology firms, the GNN is trained to forecast future transaction costs, which are then applied as edge weights in a financial asset graph. Dijkstra's algorithm is used to find the least costly path for reallocating capital between assets. Empirical results show that this hybrid approach significantly reduces transaction costs, offering a powerful tool for portfolio managers, especially in high-frequency trading environments. This methodology demonstrates the potential of combining advanced machine learning techniques with classical optimization algorithms to improve financial decision-making processes. Future research will explore expanding the asset universe and incorporating reinforcement learning for continuous portfolio optimization.<|reference_end|> | arxiv | @article{vallarino2024dynamic,
title={Dynamic Portfolio Rebalancing: A Hybrid new Model Using GNNs and
Pathfinding for Cost Efficiency},
author={Diego Vallarino},
journal={arXiv preprint arXiv:2410.01864},
year={2024},
archivePrefix={arXiv},
eprint={2410.01864},
primaryClass={q-fin.PM cs.LG}
} | vallarino2024dynamic |
arxiv-664712 | 2410.01865 | Simplifying complex machine learning by linearly separable network embedding spaces | <|reference_start|>Simplifying complex machine learning by linearly separable network embedding spaces: Low-dimensional embeddings are a cornerstone in the modelling and analysis of complex networks. However, most existing approaches for mining network embedding spaces rely on computationally intensive machine learning systems to facilitate downstream tasks. In the field of NLP, word embedding spaces capture semantic relationships \textit{linearly}, allowing for information retrieval using \textit{simple linear operations} on word embedding vectors. Here, we demonstrate that there are structural properties of network data that yields this linearity. We show that the more homophilic the network representation, the more linearly separable the corresponding network embedding space, yielding better downstream analysis results. Hence, we introduce novel graphlet-based methods enabling embedding of networks into more linearly separable spaces, allowing for their better mining. Our fundamental insights into the structure of network data that enable their \textit{\textbf{linear}} mining and exploitation enable the ML community to build upon, towards efficiently and explainably mining of the complex network data.<|reference_end|> | arxiv | @article{xenos2024simplifying,
title={Simplifying complex machine learning by linearly separable network
embedding spaces},
author={Alexandros Xenos, Noel Malod-Dognin and Natasa Przulj},
journal={arXiv preprint arXiv:2410.01865},
year={2024},
archivePrefix={arXiv},
eprint={2410.01865},
primaryClass={cs.SI cs.AI cs.LG}
} | xenos2024simplifying |
arxiv-664713 | 2410.01866 | House of Cards: Massive Weights in LLMs | <|reference_start|>House of Cards: Massive Weights in LLMs: Massive activations, which manifest in specific feature dimensions of hidden states, introduce a significant bias in large language models (LLMs), leading to an overemphasis on the corresponding token. In this paper, we identify that massive activations originate not from the hidden state but from the intermediate state of a feed-forward network module in an early layer. Expanding on the previous observation that massive activations occur only in specific feature dimensions, we dive deep into the weights that cause massive activations. Specifically, we define top-$k$ massive weights as the weights that contribute to the dimensions with the top-$k$ magnitudes in the intermediate state. When these massive weights are set to zero, the functionality of LLMs is entirely disrupted. However, when all weights except for massive weights are set to zero, it results in a relatively minor performance drop, even though a much larger number of weights are set to zero. This implies that during the pre-training process, learning is dominantly focused on massive weights. Building on this observation, we propose a simple plug-and-play method called MacDrop (massive weights curriculum dropout), to rely less on massive weights during parameter-efficient fine-tuning. This method applies dropout to the pre-trained massive weights, starting with a high dropout probability and gradually decreasing it as fine-tuning progresses. Through experiments, we demonstrate that MacDrop generally improves performance across zero-shot downstream tasks and generation tasks.<|reference_end|> | arxiv | @article{oh2024house,
title={House of Cards: Massive Weights in LLMs},
author={Jaehoon Oh, Seungjun Shin, Dokwan Oh},
journal={arXiv preprint arXiv:2410.01866},
year={2024},
archivePrefix={arXiv},
eprint={2410.01866},
primaryClass={cs.LG cs.AI cs.CL}
} | oh2024house |
arxiv-664714 | 2410.01869 | Enhancing LLM Fine-tuning for Text-to-SQLs by SQL Quality Measurement | <|reference_start|>Enhancing LLM Fine-tuning for Text-to-SQLs by SQL Quality Measurement: Text-to-SQLs enables non-expert users to effortlessly retrieve desired information from relational databases using natural language queries. While recent advancements, particularly with Large Language Models (LLMs) like GPT and T5, have shown impressive performance on large-scale benchmarks such as BIRD, current state-of-the-art (SOTA) LLM-based Text-to-SQLs models often require significant efforts to develop auxiliary tools like SQL classifiers to achieve high performance. This paper proposed a novel approach that only needs SQL Quality Measurement to enhance LLMs-based Text-to-SQLs performance. It establishes a SQL quality evaluation mechanism to assess the generated SQL queries against predefined criteria and actual database responses. This feedback loop enables continuous learning and refinement of model outputs based on both syntactic correctness and semantic accuracy. The proposed method undergoes comprehensive validation on the BIRD benchmark, assessing Execution Accuracy (EX) and Valid Efficiency Score (VES) across various Text-to-SQLs difficulty levels. Experimental results reveal competitive performance in both EX and VES compared to SOTA models like GPT4 and T5.<|reference_end|> | arxiv | @article{sarker2024enhancing,
title={Enhancing LLM Fine-tuning for Text-to-SQLs by SQL Quality Measurement},
author={Shouvon Sarker, Xishuang Dong, Xiangfang Li, Lijun Qian},
journal={arXiv preprint arXiv:2410.01869},
year={2024},
archivePrefix={arXiv},
eprint={2410.01869},
primaryClass={cs.DB cs.AI cs.SE}
} | sarker2024enhancing |
arxiv-664715 | 2410.01870 | NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models | <|reference_start|>NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models: Fine-tuning pre-trained models is crucial for adapting large models to downstream tasks, often delivering state-of-the-art performance. However, fine-tuning all model parameters is resource-intensive and laborious, leading to the emergence of parameter-efficient fine-tuning (PEFT) methods. One widely adopted PEFT technique, Low-Rank Adaptation (LoRA), freezes the pre-trained model weights and introduces two low-rank matrices whose ranks are significantly smaller than the dimensions of the original weight matrices. This enables efficient fine-tuning by adjusting only a small number of parameters. Despite its efficiency, LoRA approximates weight updates using low-rank decomposition, which struggles to capture complex, non-linear components and efficient optimization trajectories. As a result, LoRA-based methods often exhibit a significant performance gap compared to full fine-tuning. Closing this gap requires higher ranks, which increases the number of parameters. To address these limitations, we propose a nonlinear parameter-efficient adaptation method (NEAT). NEAT introduces a lightweight neural network that takes pre-trained weights as input and learns a nonlinear transformation to approximate cumulative weight updates. These updates can be interpreted as functions of the corresponding pre-trained weights. The nonlinear approximation directly models the cumulative updates, effectively capturing complex and non-linear structures in the weight updates. Our theoretical analysis demonstrates taht NEAT can be more efficient than LoRA while having equal or greater expressivity. Extensive evaluations across four benchmarks and over twenty datasets demonstrate that NEAT significantly outperforms baselines in both vision and text tasks.<|reference_end|> | arxiv | @article{zhong2024neat:,
title={NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models},
author={Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada, Tianci Liu,
Linjun Zhang, Huaxiu Yao, Haoyu Wang},
journal={arXiv preprint arXiv:2410.01870},
year={2024},
archivePrefix={arXiv},
eprint={2410.01870},
primaryClass={cs.LG cs.CL}
} | zhong2024neat: |
arxiv-664716 | 2410.01871 | Auction-Based Regulation for Artificial Intelligence | <|reference_start|>Auction-Based Regulation for Artificial Intelligence: In an era of "moving fast and breaking things", regulators have moved slowly to pick up the safety, bias, and legal pieces left in the wake of broken Artificial Intelligence (AI) deployment. Since AI models, such as large language models, are able to push misinformation and stoke division within our society, it is imperative for regulators to employ a framework that mitigates these dangers and ensures user safety. While there is much-warranted discussion about how to address the safety, bias, and legal woes of state-of-the-art AI models, the number of rigorous and realistic mathematical frameworks to regulate AI safety is lacking. We take on this challenge, proposing an auction-based regulatory mechanism that provably incentivizes model-building agents (i) to deploy safer models and (ii) to participate in the regulation process. We provably guarantee, via derived Nash Equilibria, that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold. Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15% respectively, outperforming simple regulatory frameworks that merely enforce minimum safety standards.<|reference_end|> | arxiv | @article{bornstein2024auction-based,
title={Auction-Based Regulation for Artificial Intelligence},
author={Marco Bornstein, Zora Che, Suhas Julapalli, Abdirisak Mohamed, Amrit
Singh Bedi, Furong Huang},
journal={arXiv preprint arXiv:2410.01871},
year={2024},
archivePrefix={arXiv},
eprint={2410.01871},
primaryClass={cs.GT cs.AI cs.CY econ.GN q-fin.EC}
} | bornstein2024auction-based |
arxiv-664717 | 2410.01888 | Conformal Prediction Sets Can Cause Disparate Impact | <|reference_start|>Conformal Prediction Sets Can Cause Disparate Impact: Although conformal prediction is a promising method for quantifying the uncertainty of machine learning models, the prediction sets it outputs are not inherently actionable. Many applications require a single output to act on, not several. To overcome this, prediction sets can be provided to a human who then makes an informed decision. In any such system it is crucial to ensure the fairness of outcomes across protected groups, and researchers have proposed that Equalized Coverage be used as the standard for fairness. By conducting experiments with human participants, we demonstrate that providing prediction sets can increase the unfairness of their decisions. Disquietingly, we find that providing sets that satisfy Equalized Coverage actually increases unfairness compared to marginal coverage. Instead of equalizing coverage, we propose to equalize set sizes across groups which empirically leads to more fair outcomes.<|reference_end|> | arxiv | @article{cresswell2024conformal,
title={Conformal Prediction Sets Can Cause Disparate Impact},
author={Jesse C. Cresswell, Bhargava Kumar, Yi Sui, Mouloud Belbahri},
journal={arXiv preprint arXiv:2410.01888},
year={2024},
archivePrefix={arXiv},
eprint={2410.01888},
primaryClass={cs.LG stat.ML}
} | cresswell2024conformal |
arxiv-664718 | 2410.01898 | Latency Reduction in CloudVR: Cloud Prediction, Edge Correction | <|reference_start|>Latency Reduction in CloudVR: Cloud Prediction, Edge Correction: Current virtual reality (VR) headsets encounter a trade-off between high processing power and affordability. Consequently, offloading 3D rendering to remote servers helps reduce costs, battery usage, and headset weight. Maintaining network latency below 20ms is crucial to achieving this goal. Predicting future movement and prerendering are beneficial in meeting this tight latency bound. This paper proposes a method that utilizes the low-latency property of edge servers and the high resources available in cloud servers simultaneously to achieve cost-efficient, high-quality VR. In this method, head movement is predicted on the cloud server, and frames are rendered there and transmitted to the edge server. If the prediction error surpasses a threshold, the frame is re-rendered on the edge server. Results demonstrate that using this method, each edge server can efficiently serve up to 23 users concurrently, compared to a maximum of 5 users when rendering the frame entirely on the edge server. Furthermore, this paper shows that employing the Mean Absolute Error loss function and predicting acceleration rather than velocity significantly enhances prediction accuracy. Additionally, it is shown that normalizing individual data using its mean and standard deviation does not yield improvements in prediction accuracy. These findings provide insights into optimizing VR headset performance through cloud-edge collaboration.<|reference_end|> | arxiv | @article{kopaee2024latency,
title={Latency Reduction in CloudVR: Cloud Prediction, Edge Correction},
author={Ali Majlesi Kopaee, Seyed Amir Hajseyedtaghia, Hossein Chitsaz},
journal={arXiv preprint arXiv:2410.01898},
year={2024},
archivePrefix={arXiv},
eprint={2410.01898},
primaryClass={eess.SY cs.SY}
} | kopaee2024latency |
arxiv-664719 | 2410.01899 | The potential of LLM-generated reports in DevSecOps | <|reference_start|>The potential of LLM-generated reports in DevSecOps: Alert fatigue is a common issue faced by software teams using the DevSecOps paradigm. The overwhelming number of warnings and alerts generated by security and code scanning tools, particularly in smaller teams where resources are limited, leads to desensitization and diminished responsiveness to security warnings, potentially exposing systems to vulnerabilities. This paper explores the potential of LLMs in generating actionable security reports that emphasize the financial impact and consequences of detected security issues, such as credential leaks, if they remain unaddressed. A survey conducted among developers indicates that LLM-generated reports significantly enhance the likelihood of immediate action on security issues by providing clear, comprehensive, and motivating insights. Integrating these reports into DevSecOps workflows can mitigate attention saturation and alert fatigue, ensuring that critical security warnings are addressed effectively.<|reference_end|> | arxiv | @article{lykousas2024the,
title={The potential of LLM-generated reports in DevSecOps},
author={Nikolaos Lykousas, Vasileios Argyropoulos, Fran Casino},
journal={arXiv preprint arXiv:2410.01899},
year={2024},
archivePrefix={arXiv},
eprint={2410.01899},
primaryClass={cs.CR cs.AI cs.SE}
} | lykousas2024the |
arxiv-664720 | 2410.01901 | Generic Multicast (Extended Version) | <|reference_start|>Generic Multicast (Extended Version): Communication primitives play a central role in modern computing. They offer a panel of reliability and ordering guarantees for messages, enabling the implementation of complex distributed interactions. In particular, atomic broadcast is a pivotal abstraction for implementing fault-tolerant distributed services. This primitive allows disseminating messages across the system in a total order. There are two group communication primitives closely related to atomic broadcast. Atomic multicast permits targeting a subset of participants, possibly stricter than the whole system. Generic broadcast leverages the semantics of messages to order them only where necessary (that is when they conflict). In this paper, we propose to combine all these primitives into a single, more general one, called generic multicast. We formally specify the guarantees offered by generic multicast and present efficient algorithms. Compared to prior works, our solutions offer appealing properties in terms of time and space complexity. In particular, when a run is conflict-free, that is no two messages conflict, a message is delivered after at most three message delays.<|reference_end|> | arxiv | @article{bolina2024generic,
title={Generic Multicast (Extended Version)},
author={Jos\'e Augusto Bolina, Pierre Sutra, Douglas Antunes Rocha, Lasaro
Camargos},
journal={arXiv preprint arXiv:2410.01901},
year={2024},
doi={10.1145/3697090.3697095},
archivePrefix={arXiv},
eprint={2410.01901},
primaryClass={cs.DC}
} | bolina2024generic |
arxiv-664721 | 2410.01906 | Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking | <|reference_start|>Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking: With the significant advances in deep generative models for image and video synthesis, Deepfakes and manipulated media have raised severe societal concerns. Conventional machine learning classifiers for deepfake detection often fail to cope with evolving deepfake generation technology and are susceptible to adversarial attacks. Alternatively, invisible image watermarking is being researched as a proactive defense technique that allows media authentication by verifying an invisible secret message embedded in the image pixels. A handful of invisible image watermarking techniques introduced for media authentication have proven vulnerable to basic image processing operations and watermark removal attacks. In response, we have proposed a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication. Our proposed watermarking framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks. This is facilitated through a unique architecture of our proposed technique consisting of critic and adversarial networks that enforce high image quality and resiliency to watermark removal efforts, respectively, along with the backbone encoder-decoder and the discriminator networks. Thorough experimental investigations on SOTA facial Deepfake datasets demonstrate that our proposed model can embed a $64$-bit secret as an imperceptible image watermark that can be recovered with a high-bit recovery accuracy when benign image processing operations are applied while being non-recoverable when unseen Deepfake manipulations are applied. In addition, our proposed watermarking technique demonstrates high resilience to several white-box and black-box watermark removal attacks. Thus, obtaining state-of-the-art performance.<|reference_end|> | arxiv | @article{nadimpalli2024social,
title={Social Media Authentication and Combating Deepfakes using Semi-fragile
Invisible Image Watermarking},
author={Aakash Varma Nadimpalli, Ajita Rattani},
journal={arXiv preprint arXiv:2410.01906},
year={2024},
doi={10.1145/3700146},
archivePrefix={arXiv},
eprint={2410.01906},
primaryClass={cs.CV cs.AI cs.CR cs.LG cs.MM}
} | nadimpalli2024social |
arxiv-664722 | 2410.01910 | Is uniform expressivity too restrictive? Towards efficient expressivity of graph neural networks | <|reference_start|>Is uniform expressivity too restrictive? Towards efficient expressivity of graph neural networks: Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query without the parameters depending on the size of the input graphs. This property is desirable in applications in order to have number of trainable parameters that is independent of the size of the input graphs. Uniform expressivity of the two variable guarded fragment (GC2) of first order logic is a well-celebrated result for Rectified Linear Unit (ReLU) GNNs [Barcelo & al., 2020]. In this article, we prove that uniform expressivity of GC2 queries is not possible for GNNs with a wide class of Pfaffian activation functions (including the sigmoid and tanh), answering a question formulated by [Grohe, 2021]. We also show that despite these limitations, many of those GNNs can still efficiently express GC2 queries in a way that the number of parameters remains logarithmic on the maximal degree of the input graphs. Furthermore, we demonstrate that a log-log dependency on the degree is achievable for a certain choice of activation function. This shows that uniform expressivity can be successfully relaxed by covering large graphs appearing in practical applications. Our experiments illustrates that our theoretical estimates hold in practice.<|reference_end|> | arxiv | @article{khalife2024is,
title={Is uniform expressivity too restrictive? Towards efficient expressivity
of graph neural networks},
author={Sammy Khalife, Josu\'e Tonelli-Cueto},
journal={arXiv preprint arXiv:2410.01910},
year={2024},
archivePrefix={arXiv},
eprint={2410.01910},
primaryClass={cs.LG cs.CC cs.LO}
} | khalife2024is |
arxiv-664723 | 2410.01911 | A C++ implementation of the discrete adjoint sensitivity analysis method for explicit adaptive Runge-Kutta methods enabled by automatic adjoint differentiation and SIMD vectorization | <|reference_start|>A C++ implementation of the discrete adjoint sensitivity analysis method for explicit adaptive Runge-Kutta methods enabled by automatic adjoint differentiation and SIMD vectorization: A C++ library for sensitivity analysis of optimisation problems involving ordinary differential equations (ODEs) enabled by automatic differentiation (AD) and SIMD (Single Instruction, Multiple data) vectorization is presented. The discrete adjoint sensitivity analysis method is implemented for adaptive explicit Runge-Kutta (ERK) methods. Automatic adjoint differentiation (AAD) is employed for efficient evaluations of products of vectors and the Jacobian matrix of the right hand side of the ODE system. This approach avoids the low-level drawbacks of the black box approach of employing AAD on the entire ODE solver and opens the possibility to leverage parallelization. SIMD vectorization is employed to compute the vector-Jacobian products concurrently. We study the performance of other methods and implementations of sensitivity analysis and we find that our algorithm presents a small advantage compared to equivalent existing software.<|reference_end|> | arxiv | @article{martins2024a,
title={A C++ implementation of the discrete adjoint sensitivity analysis method
for explicit adaptive Runge-Kutta methods enabled by automatic adjoint
differentiation and SIMD vectorization},
author={Rui Martins, Evgeny Lakshtanov},
journal={arXiv preprint arXiv:2410.01911},
year={2024},
archivePrefix={arXiv},
eprint={2410.01911},
primaryClass={math.NA cs.MS cs.NA}
} | martins2024a |
arxiv-664724 | 2410.01912 | A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation | <|reference_start|>A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation: This work tackles the information loss bottleneck of vector-quantization (VQ) autoregressive image generation by introducing a novel model architecture called the 2-Dimensional Autoregression (DnD) Transformer. The DnD-Transformer predicts more codes for an image by introducing a new autoregression direction, \textit{model depth}, along with the sequence length direction. Compared to traditional 1D autoregression and previous work utilizing similar 2D image decomposition such as RQ-Transformer, the DnD-Transformer is an end-to-end model that can generate higher quality images with the same backbone model size and sequence length, opening a new optimization perspective for autoregressive image generation. Furthermore, our experiments reveal that the DnD-Transformer's potential extends beyond generating natural images. It can even generate images with rich text and graphical elements in a self-supervised manner, demonstrating an understanding of these combined modalities. This has not been previously demonstrated for popular vision generative models such as diffusion models, showing a spark of vision-language intelligence when trained solely on images. Code, datasets and models are open at https://github.com/chenllliang/DnD-Transformer.<|reference_end|> | arxiv | @article{chen2024a,
title={A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive
Transformer for Efficient Finegrained Image Generation},
author={Liang Chen, Sinan Tan, Zefan Cai, Weichu Xie, Haozhe Zhao, Yichi
Zhang, Junyang Lin, Jinze Bai, Tianyu Liu, Baobao Chang},
journal={arXiv preprint arXiv:2410.01912},
year={2024},
archivePrefix={arXiv},
eprint={2410.01912},
primaryClass={cs.CV cs.AI cs.CL}
} | chen2024a |
arxiv-664725 | 2410.01917 | Provably Accurate Shapley Value Estimation via Leverage Score Sampling | <|reference_start|>Provably Accurate Shapley Value Estimation via Leverage Score Sampling: Originally introduced in game theory, Shapley values have emerged as a central tool in explainable machine learning, where they are used to attribute model predictions to specific input features. However, computing Shapley values exactly is expensive: for a general model with $n$ features, $O(2^n)$ model evaluations are necessary. To address this issue, approximation algorithms are widely used. One of the most popular is the Kernel SHAP algorithm, which is model agnostic and remarkably effective in practice. However, to the best of our knowledge, Kernel SHAP has no strong non-asymptotic complexity guarantees. We address this issue by introducing Leverage SHAP, a light-weight modification of Kernel SHAP that provides provably accurate Shapley value estimates with just $O(n\log n)$ model evaluations. Our approach takes advantage of a connection between Shapley value estimation and agnostic active learning by employing leverage score sampling, a powerful regression tool. Beyond theoretical guarantees, we show that Leverage SHAP consistently outperforms even the highly optimized implementation of Kernel SHAP available in the ubiquitous SHAP library [Lundberg & Lee, 2017].<|reference_end|> | arxiv | @article{musco2024provably,
title={Provably Accurate Shapley Value Estimation via Leverage Score Sampling},
author={Christopher Musco and R. Teal Witter},
journal={arXiv preprint arXiv:2410.01917},
year={2024},
archivePrefix={arXiv},
eprint={2410.01917},
primaryClass={cs.LG cs.AI}
} | musco2024provably |
arxiv-664726 | 2410.01918 | Influence of control polygon on the generalization of the conversion between ANCF and B-spline surfaces | <|reference_start|>Influence of control polygon on the generalization of the conversion between ANCF and B-spline surfaces: The aim of this study is to establish a general transformation matrix between B-spline surfaces and ANCF surface elements. This study is a further study of the conversion between the ANCF and B-spline surfaces. In this paper, a general transformation matrix between the Bezier surfaces and ANCF surface element is established. This general transformation matrix essentially describes the linear relationship between ANCF and Bezier surfaces. Moreover, the general transformation matrix can help to improve the efficiency of the process to transfer the distorted configuration in the CAA back to the CAD, an urgent requirement in engineering practice. In addition, a special Bezier surface control polygon is given in this study. The Bezier surface described with this control polygon can be converted to an ANCF surface element with fewer d.o.f.. And the converted ANCF surface element with 36 d.o.f. was once addressed by Dufva and Shabana. So the special control polygon can be regarded as the geometric condition in conversion to an ANCF surface element with 36 d.o.f. Based on the fact that a B-spline surface can be seen as a set of Bezier surfaces connected together, the method to establish a general transformation matrix between the ANCF and lower-order B-spline surfaces is given. Specially, the general transformation is not in a recursive form, but in a simplified form.<|reference_end|> | arxiv | @article{lan2024influence,
title={Influence of control polygon on the generalization of the conversion
between ANCF and B-spline surfaces},
author={Peng Lan, Randi Wang, Zuqing Yu},
journal={arXiv preprint arXiv:2410.01918},
year={2024},
archivePrefix={arXiv},
eprint={2410.01918},
primaryClass={cs.CG}
} | lan2024influence |
arxiv-664727 | 2410.01919 | High-order regularization dealing with ill-conditioned robot localization problems | <|reference_start|>High-order regularization dealing with ill-conditioned robot localization problems: In this work, we propose a high-order regularization method to solve the ill-conditioned problems in robot localization. Numerical solutions to robot localization problems are often unstable when the problems are ill-conditioned. A typical way to solve ill-conditioned problems is regularization, and a classical regularization method is the Tikhonov regularization. It is shown that the Tikhonov regularization can be seen as a low-order case of our method. We find that the proposed method is superior to the Tikhonov regularization in approximating some ill-conditioned inverse problems, such as robot localization problems. The proposed method overcomes the over-smoothing problem in the Tikhonov regularization as it can use more than one term in the approximation of the matrix inverse, and an explanation for the over-smoothing of the Tikhonov regularization is given. Moreover, one a priori criterion which improves the numerical stability of the ill-conditioned problem is proposed to obtain an optimal regularization matrix. As most of the regularization solutions are biased, we also provide two bias-correction techniques for the proposed high-order regularization. The simulation and experiment results using a sensor network in a 3D environment are discussed, demonstrating the performance of the proposed method.<|reference_end|> | arxiv | @article{liu2024high-order,
title={High-order regularization dealing with ill-conditioned robot
localization problems},
author={Xinghua Liu and Ming Cao},
journal={arXiv preprint arXiv:2410.01919},
year={2024},
archivePrefix={arXiv},
eprint={2410.01919},
primaryClass={cs.RO}
} | liu2024high-order |
arxiv-664728 | 2410.01920 | Step-by-Step Reasoning for Math Problems via Twisted Sequential Monte Carlo | <|reference_start|>Step-by-Step Reasoning for Math Problems via Twisted Sequential Monte Carlo: Augmenting the multi-step reasoning abilities of Large Language Models (LLMs) has been a persistent challenge. Recently, verification has shown promise in improving solution consistency by evaluating generated outputs. However, current verification approaches suffer from sampling inefficiencies, requiring a large number of samples to achieve satisfactory performance. Additionally, training an effective verifier often depends on extensive process supervision, which is costly to acquire. In this paper, we address these limitations by introducing a novel verification method based on Twisted Sequential Monte Carlo (TSMC). TSMC sequentially refines its sampling effort to focus exploration on promising candidates, resulting in more efficient generation of high-quality solutions. We apply TSMC to LLMs by estimating the expected future rewards at partial solutions. This approach results in a more straightforward training target that eliminates the need for step-wise human annotations. We empirically demonstrate the advantages of our method across multiple math benchmarks, and also validate our theoretical analysis of both our approach and existing verification methods.<|reference_end|> | arxiv | @article{feng2024step-by-step,
title={Step-by-Step Reasoning for Math Problems via Twisted Sequential Monte
Carlo},
author={Shengyu Feng, Xiang Kong, Shuang Ma, Aonan Zhang, Dong Yin, Chong
Wang, Ruoming Pang, Yiming Yang},
journal={arXiv preprint arXiv:2410.01920},
year={2024},
archivePrefix={arXiv},
eprint={2410.01920},
primaryClass={cs.LG}
} | feng2024step-by-step |
arxiv-664729 | 2410.01922 | NTK-DFL: Enhancing Decentralized Federated Learning in Heterogeneous Settings via Neural Tangent Kernel | <|reference_start|>NTK-DFL: Enhancing Decentralized Federated Learning in Heterogeneous Settings via Neural Tangent Kernel: Decentralized federated learning (DFL) is a collaborative machine learning framework for training a model across participants without a central server or raw data exchange. DFL faces challenges due to statistical heterogeneity, as participants often possess different data distributions reflecting local environments and user behaviors. Recent work has shown that the neural tangent kernel (NTK) approach, when applied to federated learning in a centralized framework, can lead to improved performance. The NTK-based update mechanism is more expressive than typical gradient descent methods, enabling more efficient convergence and better handling of data heterogeneity. We propose an approach leveraging the NTK to train client models in the decentralized setting, while introducing a synergy between NTK-based evolution and model averaging. This synergy exploits inter-model variance and improves both accuracy and convergence in heterogeneous settings. Our model averaging technique significantly enhances performance, boosting accuracy by at least 10% compared to the mean local model accuracy. Empirical results demonstrate that our approach consistently achieves higher accuracy than baselines in highly heterogeneous settings, where other approaches often underperform. Additionally, it reaches target performance in 4.6 times fewer communication rounds. We validate our approach across multiple datasets, network topologies, and heterogeneity settings to ensure robustness and generalizability.<|reference_end|> | arxiv | @article{thompson2024ntk-dfl:,
title={NTK-DFL: Enhancing Decentralized Federated Learning in Heterogeneous
Settings via Neural Tangent Kernel},
author={Gabriel Thompson, Kai Yue, Chau-Wai Wong, Huaiyu Dai},
journal={arXiv preprint arXiv:2410.01922},
year={2024},
archivePrefix={arXiv},
eprint={2410.01922},
primaryClass={cs.LG}
} | thompson2024ntk-dfl: |
arxiv-664730 | 2410.01925 | Topological mapping for traversability-aware long-range navigation in off-road terrain | <|reference_start|>Topological mapping for traversability-aware long-range navigation in off-road terrain: Autonomous robots navigating in off-road terrain like forests open new opportunities for automation. While off-road navigation has been studied, existing work often relies on clearly delineated pathways. We present a method allowing for long-range planning, exploration and low-level control in unknown off-trail forest terrain, using vision and GPS only. We represent outdoor terrain with a topological map, which is a set of panoramic snapshots connected with edges containing traversability information. A novel traversability analysis method is demonstrated, predicting the existence of a safe path towards a target in an image. Navigating between nodes is done using goal-conditioned behavior cloning, leveraging the power of a pretrained vision transformer. An exploration planner is presented, efficiently covering an unknown off-road area with unknown traversability using a frontiers-based approach. The approach is successfully deployed to autonomously explore two 400 meters squared forest sites unseen during training, in difficult conditions for navigation.<|reference_end|> | arxiv | @article{tremblay2024topological,
title={Topological mapping for traversability-aware long-range navigation in
off-road terrain},
author={Jean-Fran\c{c}ois Tremblay, Julie Alhosh, Louis Petit, Faraz Lotfi,
Lara Landauro and David Meger},
journal={arXiv preprint arXiv:2410.01925},
year={2024},
archivePrefix={arXiv},
eprint={2410.01925},
primaryClass={cs.RO}
} | tremblay2024topological |
arxiv-664731 | 2410.01926 | MARPLE: A Benchmark for Long-Horizon Inference | <|reference_start|>MARPLE: A Benchmark for Long-Horizon Inference: Reconstructing past events requires reasoning across long time horizons. To figure out what happened, we need to use our prior knowledge about the world and human behavior and draw inferences from various sources of evidence including visual, language, and auditory cues. We introduce MARPLE, a benchmark for evaluating long-horizon inference capabilities using multi-modal evidence. Our benchmark features agents interacting with simulated households, supporting vision, language, and auditory stimuli, as well as procedurally generated environments and agent behaviors. Inspired by classic ``whodunit'' stories, we ask AI models and human participants to infer which agent caused a change in the environment based on a step-by-step replay of what actually happened. The goal is to correctly identify the culprit as early as possible. Our findings show that human participants outperform both traditional Monte Carlo simulation methods and an LLM baseline (GPT-4) on this task. Compared to humans, traditional inference models are less robust and performant, while GPT-4 has difficulty comprehending environmental changes. We analyze what factors influence inference performance and ablate different modes of evidence, finding that all modes are valuable for performance. Overall, our experiments demonstrate that the long-horizon, multimodal inference tasks in our benchmark present a challenge to current models.<|reference_end|> | arxiv | @article{jin2024marple:,
title={MARPLE: A Benchmark for Long-Horizon Inference},
author={Emily Jin, Zhuoyi Huang, Jan-Philipp Fr\"anken, Weiyu Liu, Hannah Cha,
Erik Brockbank, Sarah Wu, Ruohan Zhang, Jiajun Wu, Tobias Gerstenberg},
journal={arXiv preprint arXiv:2410.01926},
year={2024},
archivePrefix={arXiv},
eprint={2410.01926},
primaryClass={cs.LG}
} | jin2024marple: |
arxiv-664732 | 2410.01927 | Risk Alignment in Agentic AI Systems | <|reference_start|>Risk Alignment in Agentic AI Systems: Agentic AIs $-$ AIs that are capable and permitted to undertake complex actions with little supervision $-$ mark a new frontier in AI capabilities and raise new questions about how to safely create and align such systems with users, developers, and society. Because agents' actions are influenced by their attitudes toward risk, one key aspect of alignment concerns the risk profiles of agentic AIs. Risk alignment will matter for user satisfaction and trust, but it will also have important ramifications for society more broadly, especially as agentic AIs become more autonomous and are allowed to control key aspects of our lives. AIs with reckless attitudes toward risk (either because they are calibrated to reckless human users or are poorly designed) may pose significant threats. They might also open 'responsibility gaps' in which there is no agent who can be held accountable for harmful actions. What risk attitudes should guide an agentic AI's decision-making? How might we design AI systems that are calibrated to the risk attitudes of their users? What guardrails, if any, should be placed on the range of permissible risk attitudes? What are the ethical considerations involved when designing systems that make risky decisions on behalf of others? We present three papers that bear on key normative and technical aspects of these questions.<|reference_end|> | arxiv | @article{clatterbuck2024risk,
title={Risk Alignment in Agentic AI Systems},
author={Hayley Clatterbuck, Clinton Castro, Arvo Mu\~noz Mor\'an},
journal={arXiv preprint arXiv:2410.01927},
year={2024},
archivePrefix={arXiv},
eprint={2410.01927},
primaryClass={cs.CY cs.AI econ.GN q-fin.EC}
} | clatterbuck2024risk |
arxiv-664733 | 2410.01928 | Deep learning assisted high resolution microscopy image processing for phase segmentation in functional composite materials | <|reference_start|>Deep learning assisted high resolution microscopy image processing for phase segmentation in functional composite materials: In the domain of battery research, the processing of high-resolution microscopy images is a challenging task, as it involves dealing with complex images and requires a prior understanding of the components involved. The utilization of deep learning methodologies for image analysis has attracted considerable interest in recent years, with multiple investigations employing such techniques for image segmentation and analysis within the realm of battery research. However, the automated analysis of high-resolution microscopy images for detecting phases and components in composite materials is still an underexplored area. This work proposes a novel workflow for detecting components and phase segmentation from raw high resolution transmission electron microscopy (TEM) images using a trained U-Net segmentation model. The developed model can expedite the detection of components and phase segmentation, diminishing the temporal and cognitive demands associated with scrutinizing an extensive array of TEM images, thereby mitigating the potential for human errors. This approach presents a novel and efficient image analysis approach with broad applicability beyond the battery field and holds potential for application in other related domains characterized by phase and composition distribution, such as alloy production.<|reference_end|> | arxiv | @article{raghavendran2024deep,
title={Deep learning assisted high resolution microscopy image processing for
phase segmentation in functional composite materials},
author={Ganesh Raghavendran (1), Bing Han (1), Fortune Adekogbe (4), Shuang
Bai (2), Bingyu Lu (1), William Wu (5), Minghao Zhang (1), Ying Shirley Meng
(1 and 3) ((1) Department of NanoEngineering-University of California San
Diego, (2) Department of NanoEngineering-University of California San Diego
(3) Pritzker School of Molecular Engineering-University of Chicago, (4)
Department of Chemical and Petroleum Engineering-University of Lagos, (5) Del
Norte High School)},
journal={arXiv preprint arXiv:2410.01928},
year={2024},
archivePrefix={arXiv},
eprint={2410.01928},
primaryClass={cs.CV}
} | raghavendran2024deep |
arxiv-664734 | 2410.01929 | LLM-Augmented Symbolic Reinforcement Learning with Landmark-Based Task Decomposition | <|reference_start|>LLM-Augmented Symbolic Reinforcement Learning with Landmark-Based Task Decomposition: One of the fundamental challenges in reinforcement learning (RL) is to take a complex task and be able to decompose it to subtasks that are simpler for the RL agent to learn. In this paper, we report on our work that would identify subtasks by using some given positive and negative trajectories for solving the complex task. We assume that the states are represented by first-order predicate logic using which we devise a novel algorithm to identify the subtasks. Then we employ a Large Language Model (LLM) to generate first-order logic rule templates for achieving each subtask. Such rules were then further fine-tuned to a rule-based policy via an Inductive Logic Programming (ILP)-based RL agent. Through experiments, we verify the accuracy of our algorithm in detecting subtasks which successfully detect all of the subtasks correctly. We also investigated the quality of the common-sense rules produced by the language model to achieve the subtasks. Our experiments show that our LLM-guided rule template generation can produce rules that are necessary for solving a subtask, which leads to solving complex tasks with fewer assumptions about predefined first-order logic predicates of the environment.<|reference_end|> | arxiv | @article{kheirandish2024llm-augmented,
title={LLM-Augmented Symbolic Reinforcement Learning with Landmark-Based Task
Decomposition},
author={Alireza Kheirandish, Duo Xu, Faramarz Fekri},
journal={arXiv preprint arXiv:2410.01929},
year={2024},
archivePrefix={arXiv},
eprint={2410.01929},
primaryClass={cs.AI cs.LG}
} | kheirandish2024llm-augmented |
arxiv-664735 | 2410.01930 | Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep RL | <|reference_start|>Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep RL: The use of deep neural networks in reinforcement learning (RL) often suffers from performance degradation as model size increases. While soft mixtures of experts (SoftMoEs) have recently shown promise in mitigating this issue for online RL, the reasons behind their effectiveness remain largely unknown. In this work we provide an in-depth analysis identifying the key factors driving this performance gain. We discover the surprising result that tokenizing the encoder output, rather than the use of multiple experts, is what is behind the efficacy of SoftMoEs. Indeed, we demonstrate that even with an appropriately scaled single expert, we are able to maintain the performance gains, largely thanks to tokenization.<|reference_end|> | arxiv | @article{sokar2024don't,
title={Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep
RL},
author={Ghada Sokar, Johan Obando-Ceron, Aaron Courville, Hugo Larochelle,
Pablo Samuel Castro},
journal={arXiv preprint arXiv:2410.01930},
year={2024},
archivePrefix={arXiv},
eprint={2410.01930},
primaryClass={cs.LG cs.AI}
} | sokar2024don't |
arxiv-664736 | 2410.01933 | TAEGAN: Generating Synthetic Tabular Data For Data Augmentation | <|reference_start|>TAEGAN: Generating Synthetic Tabular Data For Data Augmentation: Synthetic tabular data generation has gained significant attention for its potential in data augmentation, software testing and privacy-preserving data sharing. However, most research has primarily focused on larger datasets and evaluating their quality in terms of metrics like column-wise statistical distributions and inter-feature correlations, while often overlooking its utility for data augmentation, particularly for datasets whose data is scarce. In this paper, we propose Tabular Auto-Encoder Generative Adversarial Network (TAEGAN), an improved GAN-based framework for generating high-quality tabular data. Although large language models (LLMs)-based methods represent the state-of-the-art in synthetic tabular data generation, they are often overkill for small datasets due to their extensive size and complexity. TAEGAN employs a masked auto-encoder as the generator, which for the first time introduces the power of self-supervised pre-training in tabular data generation so that essentially exposes the networks to more information. We extensively evaluate TAEGAN against five state-of-the-art synthetic tabular data generation algorithms. Results from 10 datasets show that TAEGAN outperforms existing deep-learning-based tabular data generation models on 9 out of 10 datasets on the machine learning efficacy and achieves superior data augmentation performance on 7 out of 8 smaller datasets.<|reference_end|> | arxiv | @article{li2024taegan:,
title={TAEGAN: Generating Synthetic Tabular Data For Data Augmentation},
author={Jiayu Li, Zilong Zhao, Kevin Yee, Uzair Javaid and Biplab Sikdar},
journal={arXiv preprint arXiv:2410.01933},
year={2024},
archivePrefix={arXiv},
eprint={2410.01933},
primaryClass={cs.LG}
} | li2024taegan: |
arxiv-664737 | 2410.01939 | Equality Constrained Diffusion for Direct Trajectory Optimization | <|reference_start|>Equality Constrained Diffusion for Direct Trajectory Optimization: The recent success of diffusion-based generative models in image and natural language processing has ignited interest in diffusion-based trajectory optimization for nonlinear control systems. Existing methods cannot, however, handle the nonlinear equality constraints necessary for direct trajectory optimization. As a result, diffusion-based trajectory optimizers are currently limited to shooting methods, where the nonlinear dynamics are enforced by forward rollouts. This precludes many of the benefits enjoyed by direct methods, including flexible state constraints, reduced numerical sensitivity, and easy initial guess specification. In this paper, we present a method for diffusion-based optimization with equality constraints. This allows us to perform direct trajectory optimization, enforcing dynamic feasibility with constraints rather than rollouts. To the best of our knowledge, this is the first diffusion-based optimization algorithm that supports the general nonlinear equality constraints required for direct trajectory optimization.<|reference_end|> | arxiv | @article{kurtz2024equality,
title={Equality Constrained Diffusion for Direct Trajectory Optimization},
author={Vince Kurtz and Joel W. Burdick},
journal={arXiv preprint arXiv:2410.01939},
year={2024},
archivePrefix={arXiv},
eprint={2410.01939},
primaryClass={cs.RO cs.SY eess.SY}
} | kurtz2024equality |
arxiv-664738 | 2410.01943 | CHASE-SQL: Multi-Path Reasoning and Preference Optimized Candidate Selection in Text-to-SQL | <|reference_start|>CHASE-SQL: Multi-Path Reasoning and Preference Optimized Candidate Selection in Text-to-SQL: In tackling the challenges of large language model (LLM) performance for Text-to-SQL tasks, we introduce CHASE-SQL, a new framework that employs innovative strategies, using test-time compute in multi-agent modeling to improve candidate generation and selection. CHASE-SQL leverages LLMs' intrinsic knowledge to generate diverse and high-quality SQL candidates using different LLM generators with: (1) a divide-and-conquer method that decomposes complex queries into manageable sub-queries in a single LLM call; (2) chain-of-thought reasoning based on query execution plans, reflecting the steps a database engine takes during execution; and (3) a unique instance-aware synthetic example generation technique, which offers specific few-shot demonstrations tailored to test questions. To identify the best candidate, a selection agent is employed to rank the candidates through pairwise comparisons with a fine-tuned binary-candidates selection LLM. This selection approach has been demonstrated to be more robust over alternatives. The proposed generators-selector framework not only enhances the quality and diversity of SQL queries but also outperforms previous methods. Overall, our proposed CHASE-SQL achieves the state-of-the-art execution accuracy of 73.0% and 73.01% on the test set and development set of the notable BIRD Text-to-SQL dataset benchmark, rendering CHASE-SQL the top submission of the leaderboard (at the time of paper submission).<|reference_end|> | arxiv | @article{pourreza2024chase-sql:,
title={CHASE-SQL: Multi-Path Reasoning and Preference Optimized Candidate
Selection in Text-to-SQL},
author={Mohammadreza Pourreza, Hailong Li, Ruoxi Sun, Yeounoh Chung, Shayan
Talaei, Gaurav Tarlok Kakkar, Yu Gan, Amin Saberi, Fatma Ozcan, Sercan O.
Arik},
journal={arXiv preprint arXiv:2410.01943},
year={2024},
archivePrefix={arXiv},
eprint={2410.01943},
primaryClass={cs.LG cs.AI cs.CL cs.DB}
} | pourreza2024chase-sql: |
arxiv-664739 | 2410.01944 | One-step Noisy Label Mitigation | <|reference_start|>One-step Noisy Label Mitigation: Mitigating the detrimental effects of noisy labels on the training process has become increasingly critical, as obtaining entirely clean or human-annotated samples for large-scale pre-training tasks is often impractical. Nonetheless, existing noise mitigation methods often encounter limitations in practical applications due to their task-specific design, model dependency, and significant computational overhead. In this work, we exploit the properties of high-dimensional orthogonality to identify a robust and effective boundary in cone space for separating clean and noisy samples. Building on this, we propose One-step Anti-Noise (OSA), a model-agnostic noisy label mitigation paradigm that employs an estimator model and a scoring function to assess the noise level of input pairs through just one-step inference, a cost-efficient process. We empirically demonstrate the superiority of OSA, highlighting its enhanced training robustness, improved task transferability, ease of deployment, and reduced computational costs across various benchmarks, models, and tasks. Our code is released at https://github.com/leolee99/OSA.<|reference_end|> | arxiv | @article{li2024one-step,
title={One-step Noisy Label Mitigation},
author={Hao Li, Jiayang Gu, Jingkuan Song, An Zhang, Lianli Gao},
journal={arXiv preprint arXiv:2410.01944},
year={2024},
archivePrefix={arXiv},
eprint={2410.01944},
primaryClass={cs.CV cs.AI cs.LG}
} | li2024one-step |
arxiv-664740 | 2410.01945 | CALF: Benchmarking Evaluation of LFQA Using Chinese Examinations | <|reference_start|>CALF: Benchmarking Evaluation of LFQA Using Chinese Examinations: Long-Form Question Answering (LFQA) refers to generating in-depth, paragraph-level responses to open-ended questions. Although lots of LFQA methods are developed, evaluating LFQA effectively and efficiently remains challenging due to its high complexity and cost. Therefore, there is no standard benchmark for LFQA evaluation till now. To address this gap, we make the first attempt by proposing a well-constructed, reference-based benchmark named Chinese exAmination for LFQA Evaluation (CALF), aiming to rigorously assess the performance of automatic evaluation metrics for LFQA. The CALF benchmark is derived from Chinese examination questions that have been translated into English. It includes up to 1476 examples consisting of knowledge-intensive and nuanced responses. Our evaluation comprises three different settings to analyze the behavior of automatic metrics comprehensively. We conducted extensive experiments on 7 traditional evaluation metrics, 3 prompt-based metrics, and 3 trained evaluation metrics, and tested on agent systems for the LFQA evaluation. The results reveal that none of the current automatic evaluation metrics shows comparable performances with humans, indicating that they cannot capture dense information contained in long-form responses well. In addition, we provide a detailed analysis of the reasons why automatic evaluation metrics fail when evaluating LFQA, offering valuable insights to advance LFQA evaluation systems. Dataset and associated codes can be accessed at our GitHub repository.<|reference_end|> | arxiv | @article{fan2024calf:,
title={CALF: Benchmarking Evaluation of LFQA Using Chinese Examinations},
author={Yuchen Fan, Xin Zhong, Heng Zhou, Yuchen Zhang, Mingyu Liang,
Chengxing Xie, Ermo Hua, Ning Ding, Bowen Zhou},
journal={arXiv preprint arXiv:2410.01945},
year={2024},
archivePrefix={arXiv},
eprint={2410.01945},
primaryClass={cs.CL}
} | fan2024calf: |
arxiv-664741 | 2410.01946 | SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics | <|reference_start|>SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics: Prompt-based fine-tuning has become an essential method for eliciting information encoded in pre-trained language models for a variety of tasks, including text classification. For multi-class classification tasks, prompt-based fine-tuning under low-resource scenarios has resulted in performance levels comparable to those of fully fine-tuning methods. Previous studies have used crafted prompt templates and verbalizers, mapping from the label terms space to the class space, to solve the classification problem as a masked language modeling task. However, cross-domain and fine-grained prompt-based fine-tuning with an automatically enriched verbalizer remains unexplored, mainly due to the difficulty and costs of manually selecting domain label terms for the verbalizer, which requires humans with domain expertise. To address this challenge, we introduce SciPrompt, a framework designed to automatically retrieve scientific topic-related terms for low-resource text classification tasks. To this end, we select semantically correlated and domain-specific label terms within the context of scientific literature for verbalizer augmentation. Furthermore, we propose a new verbalization strategy that uses correlation scores as additional weights to enhance the prediction performance of the language model during model tuning. Our method outperforms state-of-the-art, prompt-based fine-tuning methods on scientific text classification tasks under few and zero-shot settings, especially in classifying fine-grained and emerging scientific topics.<|reference_end|> | arxiv | @article{you2024sciprompt:,
title={SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization
of Scientific Topics},
author={Zhiwen You, Kanyao Han, Haotian Zhu, Bertram Lud\"ascher, Jana Diesner},
journal={arXiv preprint arXiv:2410.01946},
year={2024},
archivePrefix={arXiv},
eprint={2410.01946},
primaryClass={cs.CL}
} | you2024sciprompt: |
arxiv-664742 | 2410.01948 | Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models | <|reference_start|>Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models: Large ASR models can inadvertently leak sensitive information, which can be mitigated by formal privacy measures like differential privacy (DP). However, traditional DP training is computationally expensive, and can hurt model performance. Our study explores DP parameter-efficient fine-tuning as a way to mitigate privacy risks with smaller computation and performance costs for ASR models. Through extensive experimentation and progressive optimization, we achieve 4.6%/8.1% word error rate on LibriSpeech clean/other test-sets, setting a new performance benchmark while maintaining (10, 3.52e-6)-DP in fine-tuning a large ASR model with over 600M parameters.<|reference_end|> | arxiv | @article{liu2024differentially,
title={Differentially Private Parameter-Efficient Fine-tuning for Large ASR
Models},
author={Hongbin Liu, Lun Wang, Om Thakkar, Abhradeep Thakurta, Arun Narayanan},
journal={arXiv preprint arXiv:2410.01948},
year={2024},
archivePrefix={arXiv},
eprint={2410.01948},
primaryClass={cs.CR}
} | liu2024differentially |
arxiv-664743 | 2410.01949 | Discrete Copula Diffusion | <|reference_start|>Discrete Copula Diffusion: Discrete diffusion models have recently shown significant progress in modeling complex data, such as natural languages and DNA sequences. However, unlike diffusion models for continuous data, which can generate high-quality samples in just a few denoising steps, modern discrete diffusion models still require hundreds or even thousands of denoising steps to perform well. In this paper, we identify a fundamental limitation that prevents discrete diffusion models from achieving strong performance with fewer steps -- they fail to capture dependencies between output variables at each denoising step. To address this issue, we provide a formal explanation and introduce a general approach to supplement the missing dependency information by incorporating another deep generative model, termed the copula model. Our method does not require fine-tuning either the diffusion model or the copula model, yet it enables high-quality sample generation with significantly fewer denoising steps. When we apply this approach to autoregressive copula models, the combined model outperforms both models individually in unconditional and conditional text generation. Specifically, the hybrid model achieves better (un)conditional text generation using 8 to 32 times fewer denoising steps than the diffusion model alone. In addition to presenting an effective discrete diffusion generation algorithm, this paper emphasizes the importance of modeling inter-variable dependencies in discrete diffusion.<|reference_end|> | arxiv | @article{liu2024discrete,
title={Discrete Copula Diffusion},
author={Anji Liu, Oliver Broadrick, Mathias Niepert, and Guy Van den Broeck},
journal={arXiv preprint arXiv:2410.01949},
year={2024},
archivePrefix={arXiv},
eprint={2410.01949},
primaryClass={cs.LG}
} | liu2024discrete |
arxiv-664744 | 2410.01950 | Score-based pullback Riemannian geometry | <|reference_start|>Score-based pullback Riemannian geometry: Data-driven Riemannian geometry has emerged as a powerful tool for interpretable representation learning, offering improved efficiency in downstream tasks. Moving forward, it is crucial to balance cheap manifold mappings with efficient training algorithms. In this work, we integrate concepts from pullback Riemannian geometry and generative models to propose a framework for data-driven Riemannian geometry that is scalable in both geometry and learning: score-based pullback Riemannian geometry. Focusing on unimodal distributions as a first step, we propose a score-based Riemannian structure with closed-form geodesics that pass through the data probability density. With this structure, we construct a Riemannian autoencoder (RAE) with error bounds for discovering the correct data manifold dimension. This framework can naturally be used with anisotropic normalizing flows by adopting isometry regularization during training. Through numerical experiments on various datasets, we demonstrate that our framework not only produces high-quality geodesics through the data support, but also reliably estimates the intrinsic dimension of the data manifold and provides a global chart of the manifold, even in high-dimensional ambient spaces.<|reference_end|> | arxiv | @article{diepeveen2024score-based,
title={Score-based pullback Riemannian geometry},
author={Willem Diepeveen, Georgios Batzolis, Zakhar Shumaylov, Carola-Bibiane
Sch\"onlieb},
journal={arXiv preprint arXiv:2410.01950},
year={2024},
archivePrefix={arXiv},
eprint={2410.01950},
primaryClass={cs.LG math.DG stat.ML}
} | diepeveen2024score-based |
arxiv-664745 | 2410.01951 | List Decoding Bounds for Binary Codes with Noiseless Feedback | <|reference_start|>List Decoding Bounds for Binary Codes with Noiseless Feedback: In an error-correcting code, a sender encodes a message $x \in \{ 0, 1 \}^k$ such that it is still decodable by a receiver on the other end of a noisy channel. In the setting of \emph{error-correcting codes with feedback}, after sending each bit, the sender learns what was received at the other end and can tailor future messages accordingly. While the unique decoding radius of feedback codes has long been known to be $\frac13$, the list decoding capabilities of feedback codes is not well understood. In this paper, we provide the first nontrivial bounds on the list decoding radius of feedback codes for lists of size $\ell$. For $\ell = 2$, we fully determine the $2$-list decoding radius to be $\frac37$. For larger values of $\ell$, we show an upper bound of $\frac12 - \frac{1}{2^{\ell + 2} - 2}$, and show that the same techniques for the $\ell = 2$ case cannot match this upper bound in general.<|reference_end|> | arxiv | @article{gupta2024list,
title={List Decoding Bounds for Binary Codes with Noiseless Feedback},
author={Meghal Gupta and Rachel Yun Zhang},
journal={arXiv preprint arXiv:2410.01951},
year={2024},
archivePrefix={arXiv},
eprint={2410.01951},
primaryClass={cs.IT math.IT}
} | gupta2024list |
arxiv-664746 | 2410.01952 | TypedThinker: Typed Thinking Improves Large Language Model Reasoning | <|reference_start|>TypedThinker: Typed Thinking Improves Large Language Model Reasoning: Despite significant advancements in the reasoning capabilities of Large Language Models (LLMs), the lack of diverse reasoning solutions often makes them trapped in a limited solution search area. In this paper, we propose TypedThinker, a novel framework that enhances LLMs' problem-solving abilities by incorporating multiple reasoning types (deductive, inductive, abductive, and analogical). Our analysis across four benchmarks reveals that different reasoning types uniquely solve distinct sets of problems, highlighting the importance of diverse thinking approaches. TypedThinker addresses two key challenges: selecting appropriate reasoning types for given problems and effectively implementing specific reasoning types. Through self-training on successful experiences, TypedThinker learns an implicit policy for reasoning type selection and application. Experimental results demonstrate significant improvements over baseline models, with accuracy increases of 3.4% for Mistral 7B and 16.7% for LLaMA3 8B across four reasoning benchmarks. Notably, TypedThinker shows effective generalization to new benchmarks and can further enhance the reasoning capability of powerful models like GPT-4o. The code is released at https://github.com/dqwang122/ThinkHub.<|reference_end|> | arxiv | @article{wang2024typedthinker:,
title={TypedThinker: Typed Thinking Improves Large Language Model Reasoning},
author={Danqing Wang, Jianxin Ma, Fei Fang, Lei Li},
journal={arXiv preprint arXiv:2410.01952},
year={2024},
archivePrefix={arXiv},
eprint={2410.01952},
primaryClass={cs.CL}
} | wang2024typedthinker: |
arxiv-664747 | 2410.01953 | Generate then Refine: Data Augmentation for Zero-shot Intent Detection | <|reference_start|>Generate then Refine: Data Augmentation for Zero-shot Intent Detection: In this short paper we propose a data augmentation method for intent detection in zero-resource domains. Existing data augmentation methods rely on few labelled examples for each intent category, which can be expensive in settings with many possible intents. We use a two-stage approach: First, we generate utterances for intent labels using an open-source large language model in a zero-shot setting. Second, we develop a smaller sequence-to-sequence model (the Refiner), to improve the generated utterances. The Refiner is fine-tuned on seen domains and then applied to unseen domains. We evaluate our method by training an intent classifier on the generated data, and evaluating it on real (human) data. We find that the Refiner significantly improves the data utility and diversity over the zero-shot LLM baseline for unseen domains and over common baseline approaches. Our results indicate that a two-step approach of a generative LLM in zero-shot setting and a smaller sequence-to-sequence model can provide high-quality data for intent detection.<|reference_end|> | arxiv | @article{lin2024generate,
title={Generate then Refine: Data Augmentation for Zero-shot Intent Detection},
author={I-Fan Lin, Faegheh Hasibi, Suzan Verberne},
journal={arXiv preprint arXiv:2410.01953},
year={2024},
archivePrefix={arXiv},
eprint={2410.01953},
primaryClass={cs.CL}
} | lin2024generate |
arxiv-664748 | 2410.01954 | ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with Stationary Distribution Shift Regularization | <|reference_start|>ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with Stationary Distribution Shift Regularization: Offline reinforcement learning (RL) has garnered significant attention for its ability to learn effective policies from pre-collected datasets without the need for further environmental interactions. While promising results have been demonstrated in single-agent settings, offline multi-agent reinforcement learning (MARL) presents additional challenges due to the large joint state-action space and the complexity of multi-agent behaviors. A key issue in offline RL is the distributional shift, which arises when the target policy being optimized deviates from the behavior policy that generated the data. This problem is exacerbated in MARL due to the interdependence between agents' local policies and the expansive joint state-action space. Prior approaches have primarily addressed this challenge by incorporating regularization in the space of either Q-functions or policies. In this work, we introduce a regularizer in the space of stationary distributions to better handle distributional shift. Our algorithm, ComaDICE, offers a principled framework for offline cooperative MARL by incorporating stationary distribution regularization for the global learning policy, complemented by a carefully structured multi-agent value decomposition strategy to facilitate multi-agent training. Through extensive experiments on the multi-agent MuJoCo and StarCraft II benchmarks, we demonstrate that ComaDICE achieves superior performance compared to state-of-the-art offline MARL methods across nearly all tasks.<|reference_end|> | arxiv | @article{bui2024comadice:,
title={ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with
Stationary Distribution Shift Regularization},
author={The Viet Bui and Thanh Hong Nguyen and Tien Mai},
journal={arXiv preprint arXiv:2410.01954},
year={2024},
archivePrefix={arXiv},
eprint={2410.01954},
primaryClass={cs.LG cs.MA}
} | bui2024comadice: |
arxiv-664749 | 2410.01955 | Quantum-data-driven dynamical transition in quantum learning | <|reference_start|>Quantum-data-driven dynamical transition in quantum learning: Quantum circuits are an essential ingredient of quantum information processing. Parameterized quantum circuits optimized under a specific cost function -- quantum neural networks (QNNs) -- provide a paradigm for achieving quantum advantage in the near term. Understanding QNN training dynamics is crucial for optimizing their performance. In terms of supervised learning tasks such as classification and regression for large datasets, the role of quantum data in QNN training dynamics remains unclear. We reveal a quantum-data-driven dynamical transition, where the target value and data determine the polynomial or exponential convergence of the training. We analytically derive the complete classification of fixed points from the dynamical equation and reveal a comprehensive `phase diagram' featuring seven distinct dynamics. These dynamics originate from a bifurcation transition with multiple codimensions induced by training data, extending the transcritical bifurcation in simple optimization tasks. Furthermore, perturbative analyses identify an exponential convergence class and a polynomial convergence class among the seven dynamics. We provide a non-perturbative theory to explain the transition via generalized restricted Haar ensemble. The analytical results are confirmed with numerical simulations of QNN training and experimental verification on IBM quantum devices. As the QNN training dynamics is determined by the choice of the target value, our findings provide guidance on constructing the cost function to optimize the speed of convergence.<|reference_end|> | arxiv | @article{zhang2024quantum-data-driven,
title={Quantum-data-driven dynamical transition in quantum learning},
author={Bingzhi Zhang, Junyu Liu, Liang Jiang and Quntao Zhuang},
journal={arXiv preprint arXiv:2410.01955},
year={2024},
archivePrefix={arXiv},
eprint={2410.01955},
primaryClass={quant-ph cond-mat.stat-mech cs.LG}
} | zhang2024quantum-data-driven |
arxiv-664750 | 2410.01956 | Learning-Based Autonomous Navigation, Benchmark Environments and Simulation Framework for Endovascular Interventions | <|reference_start|>Learning-Based Autonomous Navigation, Benchmark Environments and Simulation Framework for Endovascular Interventions: Endovascular interventions are a life-saving treatment for many diseases, yet suffer from drawbacks such as radiation exposure and potential scarcity of proficient physicians. Robotic assistance during these interventions could be a promising support towards these problems. Research focusing on autonomous endovascular interventions utilizing artificial intelligence-based methodologies is gaining popularity. However, variability in assessment environments hinders the ability to compare and contrast the efficacy of different approaches, primarily due to each study employing a unique evaluation framework. In this study, we present deep reinforcement learning-based autonomous endovascular device navigation on three distinct digital benchmark interventions: BasicWireNav, ArchVariety, and DualDeviceNav. The benchmark interventions were implemented with our modular simulation framework stEVE (simulated EndoVascular Environment). Autonomous controllers were trained solely in simulation and evaluated in simulation and on physical test benches with camera and fluoroscopy feedback. Autonomous control for BasicWireNav and ArchVariety reached high success rates and was successfully transferred from the simulated training environment to the physical test benches, while autonomous control for DualDeviceNav reached a moderate success rate. The experiments demonstrate the feasibility of stEVE and its potential for transferring controllers trained in simulation to real-world scenarios. Nevertheless, they also reveal areas that offer opportunities for future research. This study demonstrates the transferability of autonomous controllers from simulation to the real world in endovascular navigation and lowers the entry barriers and increases the comparability of research on endovascular assistance systems by providing open-source training scripts, benchmarks and the stEVE framework.<|reference_end|> | arxiv | @article{karstensen2024learning-based,
title={Learning-Based Autonomous Navigation, Benchmark Environments and
Simulation Framework for Endovascular Interventions},
author={Lennart Karstensen, Harry Robertshaw, Johannes Hatzl, Benjamin
Jackson, Jens Langej\"urgen, Katharina Breininger, Christian Uhl, S.M.Hadi
Sadati, Thomas Booth, Christos Bergeles and Franziska Mathis-Ullrich},
journal={arXiv preprint arXiv:2410.01956},
year={2024},
archivePrefix={arXiv},
eprint={2410.01956},
primaryClass={cs.RO}
} | karstensen2024learning-based |
arxiv-664751 | 2410.01957 | How Reliable Is Human Feedback For Aligning Large Language Models? | <|reference_start|>How Reliable Is Human Feedback For Aligning Large Language Models?: Most alignment research today focuses on designing new learning algorithms using datasets like Anthropic-HH, assuming human feedback data is inherently reliable. However, little attention has been given to the qualitative unreliability of human feedback and its impact on alignment. To address this gap, we conduct a comprehensive study and provide an in-depth analysis of human feedback data. We assess feedback reliability using a committee of gold reward models, revealing that over 25% of the dataset shows low or no agreement with these models, implying a high degree of unreliability. Through a qualitative analysis, we identify six key sources of unreliability, such as mis-labeling, subjective preferences, differing criteria and thresholds for helpfulness and harmlessness, etc. Lastly, to mitigate unreliability, we propose Source-Aware Cleaning, an automatic data-cleaning method guided by the insight of our qualitative analysis, to significantly improve data quality. Extensive experiments demonstrate that models trained on our cleaned dataset, HH-Clean, substantially outperform those trained on the original dataset. We release HH-Clean to support more reliable LLM alignment evaluation in the future.<|reference_end|> | arxiv | @article{yeh2024how,
title={How Reliable Is Human Feedback For Aligning Large Language Models?},
author={Min-Hsuan Yeh, Leitian Tao, Jeffrey Wang, Xuefeng Du, and Yixuan Li},
journal={arXiv preprint arXiv:2410.01957},
year={2024},
archivePrefix={arXiv},
eprint={2410.01957},
primaryClass={cs.CL}
} | yeh2024how |
arxiv-664752 | 2410.01958 | Adaptive Invariant Extended Kalman Filter with Noise Covariance Tuning for Attitude Estimation | <|reference_start|>Adaptive Invariant Extended Kalman Filter with Noise Covariance Tuning for Attitude Estimation: Attitude estimation is crucial in aerospace engineering, robotics, and virtual reality applications, but faces difficulties due to nonlinear system dynamics and sensor limitations. This paper addresses the challenge of attitude estimation using quaternion-based adaptive right invariant extended Kalman filtering (RI-EKF) that integrates data from inertial and magnetometer sensors. Our approach applies the expectation-maximization (EM) algorithm to estimate noise covariance, exploiting RI-EKF symmetry properties. We analyze the adaptive RI-EKF's stability, convergence, and accuracy, validating its performance through simulations and comparison with the left invariant EKF. Monte Carlo simulations validate the effectiveness of our noise covariance estimation technique across various window lengths.<|reference_end|> | arxiv | @article{pandey2024adaptive,
title={Adaptive Invariant Extended Kalman Filter with Noise Covariance Tuning
for Attitude Estimation},
author={Yash Pandey, Rahul Bhattacharyya, Yatindra Nath Singh},
journal={arXiv preprint arXiv:2410.01958},
year={2024},
archivePrefix={arXiv},
eprint={2410.01958},
primaryClass={eess.SP cs.SY eess.SY}
} | pandey2024adaptive |
arxiv-664753 | 2410.01959 | Scale-Invariant Learning-to-Rank | <|reference_start|>Scale-Invariant Learning-to-Rank: At Expedia, learning-to-rank (LTR) models plays a key role on our website in sorting and presenting information more relevant to users, such as search filters, property rooms, amenities, and images. A major challenge in deploying these models is ensuring consistent feature scaling between training and production data, as discrepancies can lead to unreliable rankings when deployed. Normalization techniques like feature standardization and batch normalization could address these issues but are impractical in production due to latency impacts and the difficulty of distributed real-time inference. To address consistent feature scaling issue, we introduce a scale-invariant LTR framework which combines a deep and a wide neural network to mathematically guarantee scale-invariance in the model at both training and prediction time. We evaluate our framework in simulated real-world scenarios with injected feature scale issues by perturbing the test set at prediction time, and show that even with inconsistent train-test scaling, using framework achieves better performance than without.<|reference_end|> | arxiv | @article{petrozziello2024scale-invariant,
title={Scale-Invariant Learning-to-Rank},
author={Alessio Petrozziello, Christian Sommeregger, and Ye-Sheen Lim},
journal={arXiv preprint arXiv:2410.01959},
year={2024},
archivePrefix={arXiv},
eprint={2410.01959},
primaryClass={cs.LG}
} | petrozziello2024scale-invariant |
arxiv-664754 | 2410.01961 | Characterizing and Testing Principal Minor Equivalence of Matrices | <|reference_start|>Characterizing and Testing Principal Minor Equivalence of Matrices: Two matrices are said to be principal minor equivalent if they have equal corresponding principal minors of all orders. We give a characterization of principal minor equivalence and a deterministic polynomial time algorithm to check if two given matrices are principal minor equivalent. Earlier such results were known for certain special cases like symmetric matrices, skew-symmetric matrices with {0, 1, -1}-entries, and matrices with no cuts (i.e., for any non-trivial partition of the indices, the top right block or the bottom left block must have rank more than 1). As an immediate application, we get an algorithm to check if the determinantal point processes corresponding to two given kernel matrices (not necessarily symmetric) are the same. As another application, we give a deterministic polynomial-time test to check equality of two multivariate polynomials, each computed by a symbolic determinant with a rank 1 constraint on coefficient matrices.<|reference_end|> | arxiv | @article{chatterjee2024characterizing,
title={Characterizing and Testing Principal Minor Equivalence of Matrices},
author={Abhranil Chatterjee, Sumanta Ghosh, Rohit Gurjar, Roshan Raj},
journal={arXiv preprint arXiv:2410.01961},
year={2024},
archivePrefix={arXiv},
eprint={2410.01961},
primaryClass={cs.CC cs.DS math.CO}
} | chatterjee2024characterizing |
arxiv-664755 | 2410.01962 | Language Supervised Human Action Recognition with Salient Fusion: Construction Worker Action Recognition as a Use Case | <|reference_start|>Language Supervised Human Action Recognition with Salient Fusion: Construction Worker Action Recognition as a Use Case: Detecting human actions is a crucial task for autonomous robots and vehicles, often requiring the integration of various data modalities for improved accuracy. In this study, we introduce a novel approach to Human Action Recognition (HAR) based on skeleton and visual cues. Our method leverages a language model to guide the feature extraction process in the skeleton encoder. Specifically, we employ learnable prompts for the language model conditioned on the skeleton modality to optimize feature representation. Furthermore, we propose a fusion mechanism that combines dual-modality features using a salient fusion module, incorporating attention and transformer mechanisms to address the modalities' high dimensionality. This fusion process prioritizes informative video frames and body joints, enhancing the recognition accuracy of human actions. Additionally, we introduce a new dataset tailored for real-world robotic applications in construction sites, featuring visual, skeleton, and depth data modalities, named VolvoConstAct. This dataset serves to facilitate the training and evaluation of machine learning models to instruct autonomous construction machines for performing necessary tasks in the real world construction zones. To evaluate our approach, we conduct experiments on our dataset as well as three widely used public datasets, NTU-RGB+D, NTU-RGB+D120 and NW-UCLA. Results reveal that our proposed method achieves promising performance across all datasets, demonstrating its robustness and potential for various applications. The codes and dataset are available at: https://mmahdavian.github.io/ls_har/<|reference_end|> | arxiv | @article{mahdavian2024language,
title={Language Supervised Human Action Recognition with Salient Fusion:
Construction Worker Action Recognition as a Use Case},
author={Mohammad Mahdavian, Mohammad Loni, Mo Chen},
journal={arXiv preprint arXiv:2410.01962},
year={2024},
archivePrefix={arXiv},
eprint={2410.01962},
primaryClass={cs.CV cs.RO}
} | mahdavian2024language |
arxiv-664756 | 2410.01966 | Enhancing Screen Time Identification in Children with a Multi-View Vision Language Model and Screen Time Tracker | <|reference_start|>Enhancing Screen Time Identification in Children with a Multi-View Vision Language Model and Screen Time Tracker: Being able to accurately monitor the screen exposure of young children is important for research on phenomena linked to screen use such as childhood obesity, physical activity, and social interaction. Most existing studies rely upon self-report or manual measures from bulky wearable sensors, thus lacking efficiency and accuracy in capturing quantitative screen exposure data. In this work, we developed a novel sensor informatics framework that utilizes egocentric images from a wearable sensor, termed the screen time tracker (STT), and a vision language model (VLM). In particular, we devised a multi-view VLM that takes multiple views from egocentric image sequences and interprets screen exposure dynamically. We validated our approach by using a dataset of children's free-living activities, demonstrating significant improvement over existing methods in plain vision language models and object detection models. Results supported the promise of this monitoring approach, which could optimize behavioral research on screen exposure in children's naturalistic settings.<|reference_end|> | arxiv | @article{hou2024enhancing,
title={Enhancing Screen Time Identification in Children with a Multi-View
Vision Language Model and Screen Time Tracker},
author={Xinlong Hou, Sen Shen, Xueshen Li, Xinran Gao, Ziyi Huang, Steven J.
Holiday, Matthew R. Cribbet, Susan W. White, Edward Sazonov, Yu Gan},
journal={arXiv preprint arXiv:2410.01966},
year={2024},
archivePrefix={arXiv},
eprint={2410.01966},
primaryClass={cs.CV cs.AI}
} | hou2024enhancing |
arxiv-664757 | 2410.01968 | Bi-Level Motion Imitation for Humanoid Robots | <|reference_start|>Bi-Level Motion Imitation for Humanoid Robots: Imitation learning from human motion capture (MoCap) data provides a promising way to train humanoid robots. However, due to differences in morphology, such as varying degrees of joint freedom and force limits, exact replication of human behaviors may not be feasible for humanoid robots. Consequently, incorporating physically infeasible MoCap data in training datasets can adversely affect the performance of the robot policy. To address this issue, we propose a bi-level optimization-based imitation learning framework that alternates between optimizing both the robot policy and the target MoCap data. Specifically, we first develop a generative latent dynamics model using a novel self-consistent auto-encoder, which learns sparse and structured motion representations while capturing desired motion patterns in the dataset. The dynamics model is then utilized to generate reference motions while the latent representation regularizes the bi-level motion imitation process. Simulations conducted with a realistic model of a humanoid robot demonstrate that our method enhances the robot policy by modifying reference motions to be physically consistent.<|reference_end|> | arxiv | @article{zhao2024bi-level,
title={Bi-Level Motion Imitation for Humanoid Robots},
author={Wenshuai Zhao, Yi Zhao, Joni Pajarinen, Michael Muehlebach},
journal={arXiv preprint arXiv:2410.01968},
year={2024},
archivePrefix={arXiv},
eprint={2410.01968},
primaryClass={cs.RO}
} | zhao2024bi-level |
arxiv-664758 | 2410.01969 | Which Algorithms Have Tight Generalization Bounds? | <|reference_start|>Which Algorithms Have Tight Generalization Bounds?: We study which machine learning algorithms have tight generalization bounds. First, we present conditions that preclude the existence of tight generalization bounds. Specifically, we show that algorithms that have certain inductive biases that cause them to be unstable do not admit tight generalization bounds. Next, we show that algorithms that are sufficiently stable do have tight generalization bounds. We conclude with a simple characterization that relates the existence of tight generalization bounds to the conditional variance of the algorithm's loss.<|reference_end|> | arxiv | @article{gastpar2024which,
title={Which Algorithms Have Tight Generalization Bounds?},
author={Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger},
journal={arXiv preprint arXiv:2410.01969},
year={2024},
archivePrefix={arXiv},
eprint={2410.01969},
primaryClass={cs.LG stat.ML}
} | gastpar2024which |
arxiv-664759 | 2410.01970 | Aerial-based Crisis Management Center (ACMC) | <|reference_start|>Aerial-based Crisis Management Center (ACMC): Crisis management (CM) for critical infrastructures, natural disasters such as wildfires and hurricanes, terrorist actions, or civil unrest requires high speed communications and connectivity, and access to high performance computational resources to deliver timely dynamic responses to the crisis being managed by different first responders. CM systems should detect, recognize, and disseminate huge amounts of heterogeneous dynamic events that operate at different speeds and formats. Furthermore, the processing of crisis events and the development of real-time responses are major research challenges when the communications and computational resources needed by CM stakeholders are not available or severely degraded by the crisis. The main goal of the research presented in this paper is to utilize Unmanned Autonomous Systems (UAS) to provide Aerial-based Crisis Management Center (ACMC) that will provide the required communications services and the computational resources that are critically needed by first responders. In our approach to develop an ACMC architecture, we utilize a set of flexible Unmanned Aerial Systems (UAS) that can be dynamically composed to meet the communications and computational requirements of CM tasks. The ACMC services will be modeled as a deep neural network (DNN) mass transport approach to cover a distributed target in a decentralized manner. This is indeed a new decentralized coverage approach with time-varying communication weights. Furthermore, our analysis proves the stability and convergence of the proposed DNN-based mass transport for a team of UAS (e.g., quadcopters), where each quadcopter uses a feedback nonlinear control to independently attain the intended coverage trajectory in a decentralized manner.<|reference_end|> | arxiv | @article{rastgoftar2024aerial-based,
title={Aerial-based Crisis Management Center (ACMC)},
author={Hossein Rastgoftar and Salim Hariri},
journal={arXiv preprint arXiv:2410.01970},
year={2024},
archivePrefix={arXiv},
eprint={2410.01970},
primaryClass={eess.SY cs.SY}
} | rastgoftar2024aerial-based |
arxiv-664760 | 2410.01971 | Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust | <|reference_start|>Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust: Vision-language-action (VLA) models trained on large-scale internet data and robot demonstrations have the potential to serve as generalist robot policies. However, despite their large-scale training, VLAs are often brittle to task-irrelevant visual details such as distractor objects or background colors. We introduce Bring Your Own VLA (BYOVLA): a run-time intervention scheme that (1) dynamically identifies regions of the input image that the model is sensitive to, and (2) minimally alters task-irrelevant regions to reduce the model's sensitivity using automated image editing tools. Our approach is compatible with any off the shelf VLA without model fine-tuning or access to the model's weights. Hardware experiments on language-instructed manipulation tasks demonstrate that BYOVLA enables state-of-the-art VLA models to nearly retain their nominal performance in the presence of distractor objects and backgrounds, which otherwise degrade task success rates by up to 40%. Website with additional information, videos, and code: https://aasherh.github.io/byovla/ .<|reference_end|> | arxiv | @article{hancock2024run-time,
title={Run-time Observation Interventions Make Vision-Language-Action Models
More Visually Robust},
author={Asher J. Hancock, Allen Z. Ren and Anirudha Majumdar},
journal={arXiv preprint arXiv:2410.01971},
year={2024},
archivePrefix={arXiv},
eprint={2410.01971},
primaryClass={cs.RO cs.LG}
} | hancock2024run-time |
arxiv-664761 | 2410.01978 | LLM+KG@VLDB'24 Workshop Summary | <|reference_start|>LLM+KG@VLDB'24 Workshop Summary: The unification of large language models (LLMs) and knowledge graphs (KGs) has emerged as a hot topic. At the LLM+KG'24 workshop, held in conjunction with VLDB 2024 in Guangzhou, China, one of the key themes explored was important data management challenges and opportunities due to the effective interaction between LLMs and KGs. This report outlines the major directions and approaches presented by various speakers during the LLM+KG'24 workshop.<|reference_end|> | arxiv | @article{khan2024llm+kg@vldb'24,
title={LLM+KG@VLDB'24 Workshop Summary},
author={Arijit Khan, Tianxing Wu, Xi Chen},
journal={arXiv preprint arXiv:2410.01978},
year={2024},
archivePrefix={arXiv},
eprint={2410.01978},
primaryClass={cs.DB cs.AI cs.LG}
} | khan2024llm+kg@vldb'24 |
arxiv-664762 | 2410.01979 | Auto-conditioned primal-dual hybrid gradient method and alternating direction method of multipliers | <|reference_start|>Auto-conditioned primal-dual hybrid gradient method and alternating direction method of multipliers: Line search procedures are often employed in primal-dual methods for bilinear saddle point problems, especially when the norm of the linear operator is large or difficult to compute. In this paper, we demonstrate that line search is unnecessary by introducing a novel primal-dual method, the auto-conditioned primal-dual hybrid gradient (AC-PDHG) method, which achieves optimal complexity for solving bilinear saddle point problems. AC-PDHG is fully adaptive to the linear operator, using only past iterates to estimate its norm. We further tailor AC-PDHG to solve linearly constrained problems, providing convergence guarantees for both the optimality gap and constraint violation. Moreover, we explore an important class of linearly constrained problems where both the objective and constraints decompose into two parts. By incorporating the design principles of AC-PDHG into the preconditioned alternating direction method of multipliers (ADMM), we propose the auto-conditioned alternating direction method of multipliers (AC-ADMM), which guarantees convergence based solely on one part of the constraint matrix and fully adapts to it, eliminating the need for line search. Finally, we extend both AC-PDHG and AC-ADMM to solve bilinear problems with an additional smooth term. By integrating these methods with a novel acceleration scheme, we attain optimal iteration complexities under the single-oracle setting.<|reference_end|> | arxiv | @article{lan2024auto-conditioned,
title={Auto-conditioned primal-dual hybrid gradient method and alternating
direction method of multipliers},
author={Guanghui Lan, Tianjiao Li},
journal={arXiv preprint arXiv:2410.01979},
year={2024},
archivePrefix={arXiv},
eprint={2410.01979},
primaryClass={math.OC cs.LG stat.ML}
} | lan2024auto-conditioned |
arxiv-664763 | 2410.01981 | Surveying the Rust Verification Landscape | <|reference_start|>Surveying the Rust Verification Landscape: Rust aims to be a safe programming language applicable to systems programming applications. In particular, its type system has strong guardrails to prevent a variety of issues, such as memory safety bugs and data races. However, these guardrails can be sidestepped via the unsafe keyword. unsafe allows certain otherwise-prohibited operations, but shifts the onus of preventing undefined behaviour from the Rust language's compile-time checks to the developer. We believe that tools have a role to play in ensuring the absence of undefined behaviour in the presence of unsafe code. Moreover, safety aside, programs would also benefit from being verified for functional correctness, ensuring that they meet their specifications. In this research proposal, we explore what it means to do Rust verification. Specifically, we explore which properties are worth verifying for Rust; what techniques exist to verify them; and which code is worth verifying. In doing so, we motivate an effort to verify safety properties of the Rust standard library, presenting the relevant challenges along with ideas to address them.<|reference_end|> | arxiv | @article{blanc2024surveying,
title={Surveying the Rust Verification Landscape},
author={Alex Le Blanc, Patrick Lam},
journal={arXiv preprint arXiv:2410.01981},
year={2024},
archivePrefix={arXiv},
eprint={2410.01981},
primaryClass={cs.PL}
} | blanc2024surveying |
arxiv-664764 | 2410.01982 | Decentralized Collaborative Inertial Tracking | <|reference_start|>Decentralized Collaborative Inertial Tracking: Although people spend most of their time indoors, outdoor tracking systems, such as the Global Positioning System (GPS), are predominantly used for location-based services. These systems are accurate outdoors, easy to use, and operate autonomously on each mobile device. In contrast, Indoor Tracking Systems (ITS) lack standardization and are often difficult to operate because they require costly infrastructure. In this paper, we propose an indoor tracking algorithm that uses data collected from inertial sensors embedded in most mobile devices. In this setting, mobile devices autonomously estimate their location, hence removing the burden of deploying and maintaining complex and scattered hardware infrastructure. In addition, these devices collaborate by anonymously exchanging data with other nearby devices, using wireless communication, such as Bluetooth, to correct errors in their location estimates. Our collaborative algorithm relies on low-complexity geometry operations and can be deployed on any recent mobile device with commercial-grade sensors. We evaluate our solution on real-life data collected by different devices. Experimentation with 16 simultaneously moving and collaborating devices shows an average accuracy improvement of 44% compared to the standalone Pedestrian Dead Reckoning algorithm.<|reference_end|> | arxiv | @article{diallo2024decentralized,
title={Decentralized Collaborative Inertial Tracking},
author={Alpha Diallo, Benoit Garbinato},
journal={Mobile and Ubiquitous Systems: Computing, Networking and Services.
MobiQuitous 2023. Lecture Notes of the Institute for Computer Sciences,
Social Informatics and Telecommunications Engineering, vol 593. Springer,
Cham},
year={2024},
doi={10.1007/978-3-031-63989-0_2},
archivePrefix={arXiv},
eprint={2410.01982},
primaryClass={cs.ET cs.DC cs.NI eess.SP}
} | diallo2024decentralized |
arxiv-664765 | 2410.01984 | A Preventive-Corrective Scheme for Ensuring Power System Security During Active Wildfire Risks | <|reference_start|>A Preventive-Corrective Scheme for Ensuring Power System Security During Active Wildfire Risks: The focus of this paper is on operating the electric power grid in a secure manner when wildfire risks are high. This is a challenging problem because of the uncertain ways in which the fires can impact the operation of the power system. To address this challenge, we propose a novel preventive-corrective coordinated decision-making scheme that quickly mitigates both static and dynamic insecurities given the risk of active wildfires in a region. The scheme utilizes a comprehensive contingency analysis tool for multi-asset outages that leverages: (i) a Feasibility Test algorithm which exhaustively desaturates overloaded cut-sets to prevent cascading line outages, and (ii) a data-driven transient stability analyzer which alleviates dynamic instabilities. This tool is then used to operate a coordinated unit commitment/optimal power flow model that is designed to adapt to varying risk levels associated with wildfires. Depending on the allowed risk, the model balances economical operation and grid robustness. The results obtained using the IEEE 118-bus system indicate that the proposed approach alleviates system vulnerabilities to wildfires while also minimizing operational cost.<|reference_end|> | arxiv | @article{sahoo2024a,
title={A Preventive-Corrective Scheme for Ensuring Power System Security During
Active Wildfire Risks},
author={Satyaprajna Sahoo, Anamitra Pal},
journal={arXiv preprint arXiv:2410.01984},
year={2024},
archivePrefix={arXiv},
eprint={2410.01984},
primaryClass={eess.SY cs.SY}
} | sahoo2024a |
arxiv-664766 | 2410.01985 | Lost-in-Distance: Impact of Contextual Proximity on LLM Performance in Graph Tasks | <|reference_start|>Lost-in-Distance: Impact of Contextual Proximity on LLM Performance in Graph Tasks: Despite significant advancements, Large Language Models (LLMs) exhibit blind spots that impair their ability to retrieve and process relevant contextual data effectively. We demonstrate that LLM performance in graph tasks with complexities beyond the "needle-in-a-haystack" scenario (where solving the problem requires cross-referencing and reasoning across multiple subproblems jointly) is influenced by the proximity of relevant information within the context, a phenomenon we term "lost-in-distance". We examine two fundamental graph tasks: identifying common connections between two nodes and assessing similarity among three nodes, and show that the model's performance in these tasks significantly depends on the relative positioning of common edges. We evaluate three publicly available LLMs (Llama-3-8B, Llama-3-70B, and GPT-4) using various graph encoding techniques that represent graph structures for LLM input. We propose a formulation for the lost-in-distance phenomenon and demonstrate that the lost-in-distance and lost-in-the-middle phenomena occur independently. Results indicate that model accuracy can decline by up to 6x as the distance between node connections increases, independent of graph encoding and model size.<|reference_end|> | arxiv | @article{firooz2024lost-in-distance:,
title={Lost-in-Distance: Impact of Contextual Proximity on LLM Performance in
Graph Tasks},
author={Hamed Firooz, Maziar Sanjabi, Wenlong Jiang, Xiaoling Zhai},
journal={arXiv preprint arXiv:2410.01985},
year={2024},
archivePrefix={arXiv},
eprint={2410.01985},
primaryClass={cs.AI}
} | firooz2024lost-in-distance: |
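A minimal sketch of how the "common connections" probe described in the lost-in-distance abstract above could be constructed, assuming hypothetical helper names: the graph is serialized as an edge list, and the two edges sharing the common neighbor are placed either adjacently or far apart in the prompt to vary the contextual distance between the relevant pieces of evidence. The paper's actual graph encodings may differ.

```python
import random

def common_connection_prompt(edges, u, v, common, far_apart=False):
    """Serialize a graph as an edge list and ask for common neighbors of u and v.

    The two edges (u, common) and (v, common) are placed adjacently or at
    opposite ends of the edge list to control the 'distance' between the
    relevant pieces of evidence in the context.
    """
    key_edges = [(u, common), (v, common)]
    filler = [e for e in edges if e not in key_edges]
    random.shuffle(filler)
    if far_apart:
        ordered = [key_edges[0]] + filler + [key_edges[1]]  # evidence separated
    else:
        ordered = key_edges + filler                        # evidence adjacent
    edge_text = "\n".join(f"{a} -- {b}" for a, b in ordered)
    return (
        "Here is an undirected graph given as an edge list:\n"
        f"{edge_text}\n"
        f"Question: which nodes are connected to both {u} and {v}?"
    )

# Example: node 2 is the common neighbor of nodes 0 and 1.
edges = [(0, 2), (1, 2), (3, 4), (4, 5), (5, 6), (6, 7)]
print(common_connection_prompt(edges, 0, 1, 2, far_apart=True))
```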
arxiv-664767 | 2410.01986 | An Analysis of Market-to-Market Coordination | <|reference_start|>An Analysis of Market-to-Market Coordination: The growing usage of renewable energy resources has introduced significant uncertainties in energy generation, enlarging challenges for Regional Transmission Operators (RTOs) in managing transmission congestion. To mitigate congestion that affects neighboring regions, RTOs employ a market-to-market (M2M) process through an iterative method, in which they exchange real-time security-constrained economic dispatch solutions and communicate requests for congestion relief. While this method provides economic benefits, it struggles with issues like power swings and time delays. To explore the full potential of M2M enhancements, in this paper, we first analyze the current M2M iterative method practice to better understand its efficacy and identify places for improvements. Then, we explore enhancements and develop an ADMM method for the M2M coordination that optimizes congestion management. Specifically, our ADMM method can achieve a minimal cost that is the same as the cost obtained through a centralized model that optimizes multiple markets altogether. Our final case studies, across a comprehensive set of multi-area benchmark instances, demonstrate the superior performance of the proposed ADMM algorithm for the M2M process. Meanwhile, we identify scenarios where the existing M2M process fails to provide solutions as a by-product. Finally, the algorithm is implemented in an open-source package UnitCommitment.jl for easy access by a broader audience.<|reference_end|> | arxiv | @article{ren2024an,
title={An Analysis of Market-to-Market Coordination},
author={Weihang Ren and Alinson S. Xavier and Fengyu Wang and Yongpei Guan and
Feng Qiu},
journal={arXiv preprint arXiv:2410.01986},
year={2024},
archivePrefix={arXiv},
eprint={2410.01986},
primaryClass={eess.SY cs.SY}
} | ren2024an |
arxiv-664768 | 2410.01987 | Financial Sentiment Analysis on News and Reports Using Large Language Models and FinBERT | <|reference_start|>Financial Sentiment Analysis on News and Reports Using Large Language Models and FinBERT: Financial sentiment analysis (FSA) is crucial for evaluating market sentiment and making well-informed financial decisions. The advent of large language models (LLMs) such as BERT and its financial variant, FinBERT, has notably enhanced sentiment analysis capabilities. This paper investigates the application of LLMs and FinBERT for FSA, comparing their performance on news articles, financial reports and company announcements. The study emphasizes the advantages of prompt engineering with zero-shot and few-shot strategy to improve sentiment classification accuracy. Experimental results indicate that GPT-4o, with few-shot examples of financial texts, can be as competent as a well fine-tuned FinBERT in this specialized field.<|reference_end|> | arxiv | @article{shen2024financial,
title={Financial Sentiment Analysis on News and Reports Using Large Language
Models and FinBERT},
author={Yanxin Shen, Pulin Kirin Zhang},
journal={arXiv preprint arXiv:2410.01987},
year={2024},
archivePrefix={arXiv},
eprint={2410.01987},
primaryClass={cs.IR cs.CL cs.SI q-fin.GN}
} | shen2024financial |
arxiv-664769 | 2410.01989 | UlcerGPT: A Multimodal Approach Leveraging Large Language and Vision Models for Diabetic Foot Ulcer Image Transcription | <|reference_start|>UlcerGPT: A Multimodal Approach Leveraging Large Language and Vision Models for Diabetic Foot Ulcer Image Transcription: Diabetic foot ulcers (DFUs) are a leading cause of hospitalizations and lower limb amputations, placing a substantial burden on patients and healthcare systems. Early detection and accurate classification of DFUs are critical for preventing serious complications, yet many patients experience delays in receiving care due to limited access to specialized services. Telehealth has emerged as a promising solution, improving access to care and reducing the need for in-person visits. The integration of artificial intelligence and pattern recognition into telemedicine has further enhanced DFU management by enabling automatic detection, classification, and monitoring from images. Despite advancements in artificial intelligence-driven approaches for DFU image analysis, the application of large language models for DFU image transcription has not yet been explored. To address this gap, we introduce UlcerGPT, a novel multimodal approach leveraging large language and vision models for DFU image transcription. This framework combines advanced vision and language models, such as Large Language and Vision Assistant and Chat Generative Pre-trained Transformer, to transcribe DFU images by jointly detecting, classifying, and localizing regions of interest. Through detailed experiments on a public dataset, evaluated by expert clinicians, UlcerGPT demonstrates promising results in the accuracy and efficiency of DFU transcription, offering potential support for clinicians in delivering timely care via telemedicine.<|reference_end|> | arxiv | @article{basiri2024ulcergpt:,
title={UlcerGPT: A Multimodal Approach Leveraging Large Language and Vision
Models for Diabetic Foot Ulcer Image Transcription},
author={Reza Basiri, Ali Abedi, Chau Nguyen, Milos R. Popovic, and Shehroz S.
Khan},
journal={arXiv preprint arXiv:2410.01989},
year={2024},
archivePrefix={arXiv},
eprint={2410.01989},
primaryClass={cs.CV cs.AI}
} | basiri2024ulcergpt: |
arxiv-664770 | 2410.01990 | Deep Learning Alternatives of the Kolmogorov Superposition Theorem | <|reference_start|>Deep Learning Alternatives of the Kolmogorov Superposition Theorem: This paper explores alternative formulations of the Kolmogorov Superposition Theorem (KST) as a foundation for neural network design. The original KST formulation, while mathematically elegant, presents practical challenges due to its limited insight into the structure of inner and outer functions and the large number of unknown variables it introduces. Kolmogorov-Arnold Networks (KANs) leverage KST for function approximation, but they have faced scrutiny due to mixed results compared to traditional multilayer perceptrons (MLPs) and practical limitations imposed by the original KST formulation. To address these issues, we introduce ActNet, a scalable deep learning model that builds on the KST and overcomes many of the drawbacks of Kolmogorov's original formulation. We evaluate ActNet in the context of Physics-Informed Neural Networks (PINNs), a framework well-suited for leveraging KST's strengths in low-dimensional function approximation, particularly for simulating partial differential equations (PDEs). In this challenging setting, where models must learn latent functions without direct measurements, ActNet consistently outperforms KANs across multiple benchmarks and is competitive against the current best MLP-based approaches. These results present ActNet as a promising new direction for KST-based deep learning applications, particularly in scientific computing and PDE simulation tasks.<|reference_end|> | arxiv | @article{guilhoto2024deep,
title={Deep Learning Alternatives of the Kolmogorov Superposition Theorem},
author={Leonardo Ferreira Guilhoto, Paris Perdikaris},
journal={arXiv preprint arXiv:2410.01990},
year={2024},
archivePrefix={arXiv},
eprint={2410.01990},
primaryClass={cs.LG cs.CE}
} | guilhoto2024deep |
arxiv-664771 | 2410.01992 | General Conversion between ANCF and B-spline Surfaces | <|reference_start|>General Conversion between ANCF and B-spline Surfaces: In this paper, general conversion equations are derived between Absolute Nodal Coordinates Formulation (ANCF) finite surface elements and B-spline surfaces, extending our previous work on the conversion between ANCF cable elements and B-spline curves. The derivation of the conversion equations rests on the discovery of the geometric invariance of the ANCF displacement field before and after the conversion. Our study starts by proposing the conversion equation between ANCF finite surface elements and Bezier surfaces, which are special cases of B-spline surfaces, and then establishes a general conversion equation between ANCF finite surface elements and Bezier surfaces. This general conversion equation serves two purposes: (1) it realizes the one-step direct conversion between ANCF and Bezier surfaces; (2) it converts ANCF finite surface elements directly to Bezier surfaces provided the ANCF nodal coordinates are not independent. The direct conversion from a conditional ANCF finite surface to Bezier surfaces improves efficiency and the ability to control and store data during the conversion process. The conversion between ANCF finite surface elements and B-spline surfaces is then derived by generalizing the Bezier-surface conversion to the more general B-spline case. B-spline basis functions are used in their non-recursive form, from which a more efficient conversion equation is obtained than with the intuitive approach of first converting B-spline surfaces to composite Bezier surfaces by knot insertion and then converting those to ANCF finite surface elements. The obtained conversion equations between ANCF and B-spline surfaces realize the one-step direct conversion.<|reference_end|> | arxiv | @article{wang2024general,
title={General Conversion between ANCF and B-spline Surfaces},
author={Randi Wang, Peng Lan, Zuqing Yu, Nianli Lu},
journal={arXiv preprint arXiv:2410.01992},
year={2024},
archivePrefix={arXiv},
eprint={2410.01992},
primaryClass={cs.CG}
} | wang2024general |
arxiv-664772 | 2410.01999 | CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs | <|reference_start|>CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs: Recent advancements in Code Large Language Models (CodeLLMs) have predominantly focused on open-ended code generation tasks, often neglecting the critical aspect of code understanding and comprehension. To bridge this gap, we present CodeMMLU, a comprehensive multiple-choice question-answer benchmark designed to evaluate the depth of software and code understanding in LLMs. CodeMMLU includes over 10,000 questions sourced from diverse domains, encompassing tasks such as code analysis, defect detection, and software engineering principles across multiple programming languages. Unlike traditional benchmarks, CodeMMLU assesses models's ability to reason about code rather than merely generate it, providing deeper insights into their grasp of complex software concepts and systems. Our extensive evaluation reveals that even state-of-the-art models face significant challenges with CodeMMLU, highlighting deficiencies in comprehension beyond code generation. By underscoring the crucial relationship between code understanding and effective generation, CodeMMLU serves as a vital resource for advancing AI-assisted software development, ultimately aiming to create more reliable and capable coding assistants.<|reference_end|> | arxiv | @article{manh2024codemmlu:,
title={CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding
Capabilities of CodeLLMs},
author={Dung Nguyen Manh, Thang Phan Chau, Nam Le Hai, Thong T. Doan, Nam V.
Nguyen, Quang Pham, Nghi D. Q. Bui},
journal={arXiv preprint arXiv:2410.01999},
year={2024},
archivePrefix={arXiv},
eprint={2410.01999},
primaryClass={cs.SE}
} | manh2024codemmlu: |
arxiv-664773 | 2410.02000 | Barycentric rational approximation for learning the index of a dynamical system from limited data | <|reference_start|>Barycentric rational approximation for learning the index of a dynamical system from limited data: We consider the task of data-driven identification of dynamical systems, specifically for systems whose behavior at large frequencies is non-standard, as encoded by a non-trivial relative degree of the transfer function or, alternatively, a non-trivial index of a corresponding realization as a descriptor system. We develop novel surrogate modeling strategies that allow state-of-the-art rational approximation algorithms (e.g., AAA and vector fitting) to better handle data coming from such systems with non-trivial relative degree. Our contribution is twofold. On one hand, we describe a strategy to build rational surrogate models with prescribed relative degree, with the objective of mirroring the high-frequency behavior of the high-fidelity problem, when known. The surrogate model's desired degree is achieved through constraints on its barycentric coefficients, rather than through ad-hoc modifications of the rational form. On the other hand, we present a degree-identification routine that allows one to estimate the unknown relative degree of a system from low-frequency data. By identifying the degree of the system that generated the data, we can build a surrogate model that, in addition to matching the data well (at low frequencies), has enhanced extrapolation capabilities (at high frequencies). We showcase the effectiveness and robustness of the newly proposed method through a suite of numerical tests.<|reference_end|> | arxiv | @article{pradovera2024barycentric,
title={Barycentric rational approximation for learning the index of a dynamical
system from limited data},
author={Davide Pradovera, Ion Victor Gosea, Jan Heiland},
journal={arXiv preprint arXiv:2410.02000},
year={2024},
archivePrefix={arXiv},
eprint={2410.02000},
primaryClass={math.NA cs.NA cs.SY eess.SY}
} | pradovera2024barycentric |
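For context on the rational form involved in the barycentric-approximation abstract above, a small sketch of evaluating a rational approximant in barycentric form, r(z) = (Σ_i w_i f_i / (z - z_i)) / (Σ_i w_i / (z - z_i)). The relative-degree constraints on the barycentric coefficients described in the abstract are not reproduced here; they would impose additional linear conditions on the weights, and the support points, values, and weights below are purely illustrative.

```python
import numpy as np

def barycentric_rational(z, nodes, values, weights):
    """Evaluate r(z) = sum_i w_i f_i / (z - z_i) / sum_i w_i / (z - z_i).

    z may be a scalar or an array of (complex) evaluation points; evaluations
    that coincide with a support point return the corresponding data value.
    """
    z = np.atleast_1d(np.asarray(z, dtype=complex))
    diff = z[:, None] - nodes[None, :]          # shape (n_eval, n_nodes)
    exact = np.isclose(diff, 0.0)
    diff[exact] = 1.0                           # avoid division by zero
    cauchy = weights / diff
    r = (cauchy @ values) / cauchy.sum(axis=1)
    hit_rows, hit_cols = np.nonzero(exact)      # overwrite exact hits
    r[hit_rows] = values[hit_cols]
    return r

# Toy usage: interpolate f(s) = 1 / (s + 1) at a few frequency points.
nodes = np.array([1j, 2j, 4j, 8j])
values = 1.0 / (nodes + 1.0)
weights = np.array([1.0, -1.0, 1.0, -1.0])      # weights would normally be fitted
print(barycentric_rational(3j, nodes, values, weights))
```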
arxiv-664774 | 2410.02003 | SkyAI Sim: An Open-Source Simulation of UAV Aerial Imaging from Satellite Data | <|reference_start|>SkyAI Sim: An Open-Source Simulation of UAV Aerial Imaging from Satellite Data: Capturing real-world aerial images for vision-based navigation (VBN) is challenging due to limited availability and conditions that make it nearly impossible to access all desired images from any location. The complexity increases when multiple locations are involved. State-of-the-art solutions, such as flying a UAV (Unmanned Aerial Vehicle) to take pictures or using existing research databases, have significant limitations. SkyAI Sim offers a compelling alternative by simulating a UAV to capture bird's-eye-view satellite images at zero yaw with real-world visible-band specifications. This open-source tool allows users to specify the bounding box (top-left and bottom-right) coordinates of any region on a map. Without the need to physically fly a drone, the virtual Python UAV performs a raster search to capture satellite images using the Google Maps Static API. Users can define parameters such as flight altitude, aspect ratio and diagonal field of view of the camera, and the overlap between consecutive images. SkyAI Sim's capabilities range from capturing a few low-altitude images for basic applications to generating extensive datasets of entire cities for complex tasks like deep learning. This versatility makes SkyAI a valuable tool not only for VBN but also for other applications, including environmental monitoring, construction, and city management. The open-source nature of the tool also allows for extending the raster search to other missions. A dataset of Memphis, TN, partially generated using SkyAI, is provided along with this simulator; it also includes data from a 3D world generation package for comparison.<|reference_end|> | arxiv | @article{dajkhosh2024skyai,
title={SkyAI Sim: An Open-Source Simulation of UAV Aerial Imaging from
Satellite Data},
author={S. Parisa Dajkhosh, Peter M. Le, Orges Furxhi, Eddie L. Jacobs},
journal={arXiv preprint arXiv:2410.02003},
year={2024},
archivePrefix={arXiv},
eprint={2410.02003},
primaryClass={cs.CV cs.HC eess.IV}
} | dajkhosh2024skyai |
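A rough sketch, with assumed parameter names, of the raster-search geometry described in the SkyAI Sim abstract above: the ground footprint of a nadir-looking camera is derived from altitude, diagonal field of view, and aspect ratio, and tile centers are laid out over the bounding box with the requested overlap. The actual tool additionally fetches imagery through the Google Maps Static API, which is omitted here, and its API may differ from this sketch.

```python
import math

def tile_centers(top_left, bottom_right, altitude_m, diag_fov_deg,
                 aspect_ratio=4 / 3, overlap=0.2):
    """Return (lat, lon) centers of a raster search covering the bounding box.

    Footprint geometry assumes a nadir-looking camera at the given altitude;
    degree/metre conversions use a simple spherical-earth approximation.
    """
    # Ground footprint from the diagonal field of view and aspect ratio.
    diag = 2.0 * altitude_m * math.tan(math.radians(diag_fov_deg) / 2.0)
    width = diag * aspect_ratio / math.hypot(aspect_ratio, 1.0)
    height = diag / math.hypot(aspect_ratio, 1.0)
    step_x, step_y = width * (1.0 - overlap), height * (1.0 - overlap)

    lat0, lon0 = top_left
    lat1, lon1 = bottom_right
    mid_lat = math.radians((lat0 + lat1) / 2.0)
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(mid_lat)

    centers, lat = [], lat0 - (height / 2.0) / m_per_deg_lat
    while lat > lat1:
        lon = lon0 + (width / 2.0) / m_per_deg_lon
        while lon < lon1:
            centers.append((lat, lon))
            lon += step_x / m_per_deg_lon
        lat -= step_y / m_per_deg_lat
    return centers

# Example: a small area at 120 m altitude with a 78-degree diagonal FOV.
print(len(tile_centers((35.16, -90.06), (35.14, -90.03), 120, 78)))
```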
arxiv-664775 | 2410.02004 | Normalizing Flow-Based Metric for Image Generation | <|reference_start|>Normalizing Flow-Based Metric for Image Generation: We propose two new evaluation metrics to assess realness of generated images based on normalizing flows: a simpler and efficient flow-based likelihood distance (FLD) and a more exact dual-flow based likelihood distance (D-FLD). Because normalizing flows can be used to compute the exact likelihood, the proposed metrics assess how closely generated images align with the distribution of real images from a given domain. This property gives the proposed metrics a few advantages over the widely used Fr\'echet inception distance (FID) and other recent metrics. Firstly, the proposed metrics need only a few hundred images to stabilize (converge in mean), as opposed to tens of thousands needed for FID, and at least a few thousand for the other metrics. This allows confident evaluation of even small sets of generated images, such as validation batches inside training loops. Secondly, the network used to compute the proposed metric has over an order of magnitude fewer parameters compared to Inception-V3 used to compute FID, making it computationally more efficient. For assessing the realness of generated images in new domains (e.g., x-ray images), ideally these networks should be retrained on real images to model their distinct distributions. Thus, our smaller network will be even more advantageous for new domains. Extensive experiments show that the proposed metrics have the desired monotonic relationships with the extent of image degradation of various kinds.<|reference_end|> | arxiv | @article{jeevan2024normalizing,
title={Normalizing Flow-Based Metric for Image Generation},
author={Pranav Jeevan, Neeraj Nixon, Amit Sethi},
journal={arXiv preprint arXiv:2410.02004},
year={2024},
archivePrefix={arXiv},
eprint={2410.02004},
primaryClass={cs.CV cs.AI cs.LG}
} | jeevan2024normalizing |
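The abstract above defines FLD only at a high level; as one possible reading, the sketch below compares the mean negative log-likelihood that a normalizing flow (fitted to real images) assigns to generated versus real samples. The `flow` object with a `log_prob` method is an assumed interface (similar to what common flow libraries expose), and the exact FLD and D-FLD formulas in the paper may differ from this.

```python
import torch

@torch.no_grad()
def likelihood_distance(flow, real_images, generated_images):
    """Compare mean negative log-likelihood of generated vs. real images under
    a normalizing flow fitted to the real-image distribution.

    `flow` is assumed to expose log_prob(x) returning per-sample log-densities.
    A small gap means the generator's samples are roughly as likely under the
    real-data flow as real samples are.
    """
    nll_real = -flow.log_prob(real_images).mean()
    nll_gen = -flow.log_prob(generated_images).mean()
    return (nll_gen - nll_real).abs()

# Stand-in "flow" with the assumed interface, for demonstration only:
class UnitGaussianFlow:
    def log_prob(self, x):
        x = x.flatten(1)
        return -0.5 * (x ** 2).sum(dim=1)

flow = UnitGaussianFlow()
real = torch.randn(64, 3, 32, 32)
fake = 0.5 * torch.randn(64, 3, 32, 32)
print(float(likelihood_distance(flow, real, fake)))
```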
arxiv-664776 | 2410.02005 | FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic Fairness | <|reference_start|>FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic Fairness: Fair predictive algorithms hinge on both equality and trust, yet inherent uncertainty in real-world data challenges our ability to make consistent, fair, and calibrated decisions. While fairly managing predictive error has been extensively explored, some recent work has begun to address the challenge of fairly accounting for irreducible prediction uncertainty. However, a clear taxonomy and well-specified objectives for integrating uncertainty into fairness remains undefined. We address this gap by introducing FairlyUncertain, an axiomatic benchmark for evaluating uncertainty estimates in fairness. Our benchmark posits that fair predictive uncertainty estimates should be consistent across learning pipelines and calibrated to observed randomness. Through extensive experiments on ten popular fairness datasets, our evaluation reveals: (1) A theoretically justified and simple method for estimating uncertainty in binary settings is more consistent and calibrated than prior work; (2) Abstaining from binary predictions, even with improved uncertainty estimates, reduces error but does not alleviate outcome imbalances between demographic groups; (3) Incorporating consistent and calibrated uncertainty estimates in regression tasks improves fairness without any explicit fairness interventions. Additionally, our benchmark package is designed to be extensible and open-source, to grow with the field. By providing a standardized framework for assessing the interplay between uncertainty and fairness, FairlyUncertain paves the way for more equitable and trustworthy machine learning practices.<|reference_end|> | arxiv | @article{rosenblatt2024fairlyuncertain:,
title={FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic
Fairness},
author={Lucas Rosenblatt and R. Teal Witter},
journal={arXiv preprint arXiv:2410.02005},
year={2024},
archivePrefix={arXiv},
eprint={2410.02005},
primaryClass={cs.LG stat.ML}
} | rosenblatt2024fairlyuncertain: |
arxiv-664777 | 2410.02006 | Addressing Data Heterogeneity in Federated Learning with Adaptive Normalization-Free Feature Recalibration | <|reference_start|>Addressing Data Heterogeneity in Federated Learning with Adaptive Normalization-Free Feature Recalibration: Federated learning is a decentralized collaborative training paradigm that preserves stakeholders' data ownership while improving performance and generalization. However, statistical heterogeneity among client datasets poses a fundamental challenge by degrading system performance. To address this issue, we propose Adaptive Normalization-free Feature Recalibration (ANFR), an architecture-level approach that combines weight standardization and channel attention. Weight standardization normalizes the weights of layers instead of activations. This is less susceptible to mismatched client statistics and inconsistent averaging, thereby more robust under heterogeneity. Channel attention produces learnable scaling factors for feature maps, suppressing those that are inconsistent between clients due to heterogeneity. We demonstrate that combining these techniques boosts model performance beyond their individual contributions, by enhancing class selectivity and optimizing channel attention weight distribution. ANFR operates independently of the aggregation method and is effective in both global and personalized federated learning settings, with minimal computational overhead. Furthermore, when training with differential privacy, ANFR achieves an appealing balance between privacy and utility, enabling strong privacy guarantees without sacrificing performance. By integrating weight standardization and channel attention in the backbone model, ANFR offers a novel and versatile approach to the challenge of statistical heterogeneity. We demonstrate through extensive experiments that ANFR consistently outperforms established baselines across various aggregation methods, datasets, and heterogeneity conditions.<|reference_end|> | arxiv | @article{siomos2024addressing,
title={Addressing Data Heterogeneity in Federated Learning with Adaptive
Normalization-Free Feature Recalibration},
author={Vasilis Siomos, Sergio Naval-Marimont, Jonathan Passerat-Palmbach,
Giacomo Tarroni},
journal={arXiv preprint arXiv:2410.02006},
year={2024},
archivePrefix={arXiv},
eprint={2410.02006},
primaryClass={cs.LG cs.AI cs.CV}
} | siomos2024addressing |
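The two architectural ingredients named in the ANFR abstract above, weight standardization and channel attention, are standard building blocks; a minimal PyTorch sketch of each is shown below. How ANFR actually combines and places them within the backbone is not specified here, so the composite block at the end is only an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Convolution whose weights are standardized (zero mean, unit variance per
    output channel) at every forward pass, instead of normalizing activations."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style recalibration: learnable per-channel scales."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True),
                                nn.Linear(hidden, channels), nn.Sigmoid())

    def forward(self, x):
        scale = self.fc(x.mean(dim=(2, 3)))          # global average pool
        return x * scale[:, :, None, None]

# Illustrative block combining both ideas.
block = nn.Sequential(WSConv2d(3, 16, 3, padding=1), nn.ReLU(), ChannelAttention(16))
print(block(torch.randn(2, 3, 32, 32)).shape)
```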
arxiv-664778 | 2410.02010 | MONICA: Benchmarking on Long-tailed Medical Image Classification | <|reference_start|>MONICA: Benchmarking on Long-tailed Medical Image Classification: Long-tailed learning is considered to be an extremely challenging problem in data imbalance learning. It aims to train well-generalized models from a large number of images that follow a long-tailed class distribution. In the medical field, many diagnostic imaging exams such as dermoscopy and chest radiography yield a long-tailed distribution of complex clinical findings. Recently, long-tailed learning in medical image analysis has garnered significant attention. However, the field currently lacks a unified, strictly formulated, and comprehensive benchmark, which often leads to unfair comparisons and inconclusive results. To help the community improve the evaluation and advance, we build a unified, well-structured codebase called Medical OpeN-source Long-taIled ClassifiCAtion (MONICA), which implements over 30 methods developed in relevant fields and evaluated on 12 long-tailed medical datasets covering 6 medical domains. Our work provides valuable practical guidance and insights for the field, offering detailed analysis and discussion on the effectiveness of individual components within the inbuilt state-of-the-art methodologies. We hope this codebase serves as a comprehensive and reproducible benchmark, encouraging further advancements in long-tailed medical image learning. The codebase is publicly available on https://github.com/PyJulie/MONICA.<|reference_end|> | arxiv | @article{ju2024monica:,
title={MONICA: Benchmarking on Long-tailed Medical Image Classification},
author={Lie Ju, Siyuan Yan, Yukun Zhou, Yang Nan, Xiaodan Xing, Peibo Duan,
Zongyuan Ge},
journal={arXiv preprint arXiv:2410.02010},
year={2024},
archivePrefix={arXiv},
eprint={2410.02010},
primaryClass={eess.IV cs.CV}
} | ju2024monica: |
arxiv-664779 | 2410.02011 | A Census-Based Genetic Algorithm for Target Set Selection Problem in Social Networks | <|reference_start|>A Census-Based Genetic Algorithm for Target Set Selection Problem in Social Networks: This paper considers the Target Set Selection (TSS) Problem in social networks, a fundamental problem in viral marketing. In the TSS problem, a graph and a threshold value for each vertex of the graph are given. We need to find a minimum-size vertex subset to "activate" such that all graph vertices are activated at the end of the propagation process. Specifically, we propose a novel approach called "a census-based genetic algorithm" for the TSS problem. In our algorithm, we use the idea of a census to gather and store information about each individual in a population and collect census data from the individuals constructed during the algorithm's execution, so that we can achieve greater diversity and avoid premature convergence at locally optimal solutions. We use two distinct kinds of census information: (a) for each individual, the algorithm stores how many times it has been identified during the execution; (b) for each network node, the algorithm counts how many times it has been included in a solution. The proposed algorithm can also self-adjust by using a parameter specifying the aggressiveness employed in each reproduction method. Additionally, the algorithm is designed to run in a parallelized environment to minimize the computational cost and check each individual's feasibility. Moreover, our algorithm finds the optimal solution in all cases in experiments on random graphs. Furthermore, we execute the proposed algorithm on 14 large graphs of real-life social network instances from the literature, improving the solution size by around 9.57 vertices on average (134 vertices in total) compared to the best solutions obtained in previous studies.<|reference_end|> | arxiv | @article{rahman2024a,
title={A Census-Based Genetic Algorithm for Target Set Selection Problem in
Social Networks},
author={Md. Samiur Rahman, Mohammad Shamim Ahsan, Tim Chen, and Vijayakumar
Varadarajan},
journal={arXiv preprint arXiv:2410.02011},
year={2024},
archivePrefix={arXiv},
eprint={2410.02011},
primaryClass={cs.NE cs.SI}
} | rahman2024a |
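The abstract above describes the two census statistics only informally; the sketch below, with hypothetical data structures, illustrates one way to realize them: one counter records how often each candidate solution (a set of seed vertices) has been rediscovered, the other how often each vertex has appeared in any solution, and both can then steer reproduction away from over-visited regions of the search space. The scoring rule is an assumption, not the paper's.

```python
from collections import Counter

class Census:
    """Track (a) how many times each individual has been (re)discovered and
    (b) how many times each vertex has been included in any solution."""
    def __init__(self):
        self.individual_counts = Counter()   # frozenset of seed vertices -> count
        self.vertex_counts = Counter()       # vertex -> count

    def record(self, seed_set):
        key = frozenset(seed_set)
        self.individual_counts[key] += 1
        self.vertex_counts.update(key)

    def novelty(self, seed_set):
        """Lower score for individuals/vertices seen many times before;
        one possible way to encourage diversity during selection."""
        seen = self.individual_counts[frozenset(seed_set)]
        crowding = sum(self.vertex_counts[v] for v in seed_set) / max(len(seed_set), 1)
        return 1.0 / (1.0 + seen + 0.1 * crowding)

census = Census()
census.record({1, 4, 7})
census.record({1, 4, 7})
census.record({2, 4})
print(census.novelty({1, 4, 7}), census.novelty({3, 5}))
```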
arxiv-664780 | 2410.02012 | Semi-Supervised Contrastive VAE for Disentanglement of Digital Pathology Images | <|reference_start|>Semi-Supervised Contrastive VAE for Disentanglement of Digital Pathology Images: Despite the strong prediction power of deep learning models, their interpretability remains an important concern. Disentanglement models increase interpretability by decomposing the latent space into interpretable subspaces. In this paper, we propose the first disentanglement method for pathology images. We focus on the task of detecting tumor-infiltrating lymphocytes (TIL). We propose different ideas including cascading disentanglement, novel architecture, and reconstruction branches. We achieve superior performance on complex pathology images, thus improving the interpretability and even generalization power of TIL detection deep learning models. Our codes are available at https://github.com/Shauqi/SS-cVAE.<|reference_end|> | arxiv | @article{hasan2024semi-supervised,
title={Semi-Supervised Contrastive VAE for Disentanglement of Digital Pathology
Images},
author={Mahmudul Hasan, Xiaoling Hu, Shahira Abousamra, Prateek Prasanna, Joel
Saltz, Chao Chen},
journal={arXiv preprint arXiv:2410.02012},
year={2024},
archivePrefix={arXiv},
eprint={2410.02012},
primaryClass={eess.IV cs.CV}
} | hasan2024semi-supervised |
arxiv-664781 | 2410.02016 | Adaptively Private Next-Token Prediction of Large Language Models | <|reference_start|>Adaptively Private Next-Token Prediction of Large Language Models: As Large Language Models (LLMs) proliferate, developing privacy safeguards for these models is crucial. One popular safeguard involves training LLMs in a differentially private manner. However, such solutions are shown to be computationally expensive and detrimental to the utility of these models. Since LLMs are deployed on the cloud and thus only accessible via an API, a Machine Learning as a Service (MLaaS) provider can protect its downstream data by privatizing the predictions during the decoding process. However, the practicality of such solutions still largely lags behind DP training methods. One recent promising approach, Private Mixing of Ensemble Distributions (PMixED), avoids additive noise by sampling from the output distributions of private LLMs mixed with the output distribution of a public model. Yet, PMixED must satisfy a fixed privacy level for a given number of queries, which is difficult for an analyst to estimate before inference and, hence, does not scale. To this end, we relax the requirements to a more practical setting by introducing Adaptive PMixED (AdaPMixED), a private decoding framework based on PMixED that is adaptive to the private and public output distributions evaluated on a given input query. In this setting, we introduce a noisy screening mechanism that filters out queries with potentially expensive privacy loss, and a data-dependent analysis that exploits the divergence of the private and public output distributions in its privacy loss calculation. Our experimental evaluations demonstrate that our mechanism and analysis can reduce the privacy loss by 16x while preserving the utility over the original PMixED. Furthermore, performing 100K predictions with AdaPMixED still achieves strong utility and a reasonable data-dependent privacy loss of 5.25.<|reference_end|> | arxiv | @article{flemings2024adaptively,
title={Adaptively Private Next-Token Prediction of Large Language Models},
author={James Flemings, Meisam Razaviyayn, and Murali Annavaram},
journal={arXiv preprint arXiv:2410.02016},
year={2024},
archivePrefix={arXiv},
eprint={2410.02016},
primaryClass={cs.LG cs.CR}
} | flemings2024adaptively |
arxiv-664782 | 2410.02017 | Review Non-convex Optimization Method for Machine Learning | <|reference_start|>Review Non-convex Optimization Method for Machine Learning: Non-convex optimization is a critical tool in advancing machine learning, especially for complex models like deep neural networks and support vector machines. Despite challenges such as multiple local minima and saddle points, non-convex techniques offer various pathways to reduce computational costs. These include promoting sparsity through regularization, efficiently escaping saddle points, and employing subsampling and approximation strategies like stochastic gradient descent. Additionally, non-convex methods enable model pruning and compression, which reduce the size of models while maintaining performance. By focusing on good local minima instead of exact global minima, non-convex optimization ensures competitive accuracy with faster convergence and lower computational overhead. This paper examines the key methods and applications of non-convex optimization in machine learning, exploring how it can lower computation costs while enhancing model performance. Furthermore, it outlines future research directions and challenges, including scalability and generalization, that will shape the next phase of non-convex optimization in machine learning.<|reference_end|> | arxiv | @article{fotopoulos2024review,
title={Review Non-convex Optimization Method for Machine Learning},
author={Greg B Fotopoulos, Paul Popovich, Nicholas Hall Papadopoulos},
journal={arXiv preprint arXiv:2410.02017},
year={2024},
archivePrefix={arXiv},
eprint={2410.02017},
primaryClass={cs.LG cs.AI}
} | fotopoulos2024review |
arxiv-664783 | 2410.02021 | On the Resilience of Fast Failover Routing Against Dynamic Link Failures | <|reference_start|>On the Resilience of Fast Failover Routing Against Dynamic Link Failures: Modern communication networks feature local fast failover mechanisms in the data plane, to swiftly respond to link failures with pre-installed rerouting rules. This paper explores resilient routing meant to tolerate $\leq k$ simultaneous link failures, ensuring packet delivery, provided that the source and destination remain connected. While past theoretical works studied failover routing under static link failures, i.e., links which permanently and simultaneously fail, real-world networks often face link flapping--dynamic down states caused by, e.g., numerous short-lived software-related faults. Thus, in this initial work, we re-investigate the resilience of failover routing against link flapping, by categorizing link failures into static, semi-dynamic (removing the assumption that links fail simultaneously), and dynamic (removing the assumption that links fail permanently) types, shedding light on the capabilities and limitations of failover routing under these scenarios. We show that $k$-edge-connected graphs exhibit $(k-1)$-resilient routing against dynamic failures for $k \leq 5$. We further show that this result extends to arbitrary $k$ if it is possible to rewrite $\log k$ bits in the packet header. Rewriting $3$ bits suffices to cope with $k$ semi-dynamic failures. However, on general graphs, tolerating $2$ dynamic failures becomes impossible without bit-rewriting. Even by rewriting $\log k$ bits, resilient routing cannot resolve $k$ dynamic failures, demonstrating the limitation of local fast rerouting.<|reference_end|> | arxiv | @article{dai2024on,
title={On the Resilience of Fast Failover Routing Against Dynamic Link Failures},
author={Wenkai Dai, Klaus-Tycho Foerster, Stefan Schmid},
journal={arXiv preprint arXiv:2410.02021},
year={2024},
archivePrefix={arXiv},
eprint={2410.02021},
primaryClass={cs.NI cs.DC cs.DS}
} | dai2024on |
arxiv-664784 | 2410.02023 | DeepProtein: Deep Learning Library and Benchmark for Protein Sequence Learning | <|reference_start|>DeepProtein: Deep Learning Library and Benchmark for Protein Sequence Learning: In recent years, deep learning has revolutionized the field of protein science, enabling advancements in predicting protein properties, structural folding and interactions. This paper presents DeepProtein, a comprehensive and user-friendly deep learning library specifically designed for protein-related tasks. DeepProtein integrates a couple of state-of-the-art neural network architectures, which include convolutional neural network (CNN), recurrent neural network (RNN), transformer, graph neural network (GNN), and graph transformer (GT). It provides user-friendly interfaces, facilitating domain researchers in applying deep learning techniques to protein data. Also, we curate a benchmark that evaluates these neural architectures on a variety of protein tasks, including protein function prediction, protein localization prediction, and protein-protein interaction prediction, showcasing its superior performance and scalability. Additionally, we provide detailed documentation and tutorials to promote accessibility and encourage reproducible research. This library is extended from a well-known drug discovery library, DeepPurpose and publicly available at https://github.com/jiaqingxie/DeepProtein/tree/main.<|reference_end|> | arxiv | @article{xie2024deepprotein:,
title={DeepProtein: Deep Learning Library and Benchmark for Protein Sequence
Learning},
author={Jiaqing Xie, Yue Zhao, Tianfan Fu},
journal={arXiv preprint arXiv:2410.02023},
year={2024},
archivePrefix={arXiv},
eprint={2410.02023},
primaryClass={cs.LG cs.AI q-bio.QM}
} | xie2024deepprotein: |
arxiv-664785 | 2410.02024 | FLAG: Financial Long Document Classification via AMR-based GNN | <|reference_start|>FLAG: Financial Long Document Classification via AMR-based GNN: The advent of large language models (LLMs) has initiated much research into their various financial applications. However, in applying LLMs on long documents, semantic relations are not explicitly incorporated, and a full or arbitrarily sparse attention operation is employed. In recent years, progress has been made in Abstract Meaning Representation (AMR), which is a graph-based representation of text to preserve its semantic relations. Since AMR can represent semantic relationships at a deeper level, it can be beneficially utilized by graph neural networks (GNNs) for constructing effective document-level graph representations built upon LLM embeddings to predict target metrics in the financial domain. We propose FLAG: Financial Long document classification via AMR-based GNN, an AMR graph based framework to generate document-level embeddings for long financial document classification. We construct document-level graphs from sentence-level AMR graphs, endow them with specialized LLM word embeddings in the financial domain, apply a deep learning mechanism that utilizes a GNN, and examine the efficacy of our AMR-based approach in predicting labeled target data from long financial documents. Extensive experiments are conducted on a dataset of quarterly earnings calls transcripts of companies in various sectors of the economy, as well as on a corpus of more recent earnings calls of companies in the S&P 1500 Composite Index. We find that our AMR-based approach outperforms fine-tuning LLMs directly on text in predicting stock price movement trends at different time horizons in both datasets. Our work also outperforms previous work utilizing document graphs and GNNs for text classification.<|reference_end|> | arxiv | @article{xia2024flag:,
title={FLAG: Financial Long Document Classification via AMR-based GNN},
author={Bolun "Namir" Xia, Aparna Gupta, Mohammed J. Zaki},
journal={arXiv preprint arXiv:2410.02024},
year={2024},
archivePrefix={arXiv},
eprint={2410.02024},
primaryClass={cs.CE cs.AI cs.CL cs.LG}
} | xia2024flag: |
arxiv-664786 | 2410.02025 | A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models | <|reference_start|>A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models: In this work, we explore the theoretical properties of conditional deep generative models under the statistical framework of distribution regression where the response variable lies in a high-dimensional ambient space but concentrates around a potentially lower-dimensional manifold. More specifically, we study the large-sample properties of a likelihood-based approach for estimating these models. Our results lead to the convergence rate of a sieve maximum likelihood estimator (MLE) for estimating the conditional distribution (and its devolved counterpart) of the response given predictors in the Hellinger (Wasserstein) metric. Our rates depend solely on the intrinsic dimension and smoothness of the true conditional distribution. These findings provide an explanation of why conditional deep generative models can circumvent the curse of dimensionality from the perspective of statistical foundations and demonstrate that they can learn a broader class of nearly singular conditional distributions. Our analysis also emphasizes the importance of introducing a small noise perturbation to the data when they are supported sufficiently close to a manifold. Finally, in our numerical studies, we demonstrate the effective implementation of the proposed approach using both synthetic and real-world datasets, which also provide complementary validation to our theoretical findings.<|reference_end|> | arxiv | @article{kumar2024a,
title={A Likelihood Based Approach to Distribution Regression Using Conditional
Deep Generative Models},
author={Shivam Kumar, Yun Yang, and Lizhen Lin},
journal={arXiv preprint arXiv:2410.02025},
year={2024},
archivePrefix={arXiv},
eprint={2410.02025},
primaryClass={math.ST cs.AI cs.LG stat.ME stat.ML stat.TH}
} | kumar2024a |
arxiv-664787 | 2410.02026 | Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics | <|reference_start|>Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics: Large language models (LLMs) have demonstrated remarkable progress in healthcare. However, a significant gap remains regarding LLMs' professionalism in domain-specific clinical practices, limiting their application in real-world diagnostics. In this work, we introduce ZODIAC, an LLM-powered framework with cardiologist-level professionalism designed to engage LLMs in cardiological diagnostics. ZODIAC assists cardiologists by extracting clinically relevant characteristics from patient data, detecting significant arrhythmias, and generating preliminary reports for the review and refinement by cardiologists. To achieve cardiologist-level professionalism, ZODIAC is built on a multi-agent collaboration framework, enabling the processing of patient data across multiple modalities. Each LLM agent is fine-tuned using real-world patient data adjudicated by cardiologists, reinforcing the model's professionalism. ZODIAC undergoes rigorous clinical validation with independent cardiologists, evaluated across eight metrics that measure clinical effectiveness and address security concerns. Results show that ZODIAC outperforms industry-leading models, including OpenAI's GPT-4o, Meta's Llama-3.1-405B, and Google's Gemini-pro, as well as medical-specialist LLMs like Microsoft's BioGPT. ZODIAC demonstrates the transformative potential of specialized LLMs in healthcare by delivering domain-specific solutions that meet the stringent demands of medical practice. Notably, ZODIAC has been successfully integrated into electrocardiography (ECG) devices, exemplifying the growing trend of embedding LLMs into Software-as-Medical-Device (SaMD).<|reference_end|> | arxiv | @article{zhou2024zodiac:,
title={Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics},
author={Yuan Zhou, Peng Zhang, Mengya Song, Alice Zheng, Yiwen Lu, Zhiheng
Liu, Yong Chen, Zhaohan Xi},
journal={arXiv preprint arXiv:2410.02026},
year={2024},
archivePrefix={arXiv},
eprint={2410.02026},
primaryClass={cs.AI cs.CL}
} | zhou2024zodiac: |
arxiv-664788 | 2410.02027 | Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval | <|reference_start|>Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval: There is a scarcity of multilingual vision-language models that properly account for the perceptual differences that are reflected in image captions across languages and cultures. In this work, through a multimodal, multilingual retrieval case study, we quantify the existing lack of model flexibility. We empirically show performance gaps between training on captions that come from native German perception and captions that have been either machine-translated or human-translated from English into German. To address these gaps, we further propose and evaluate caption augmentation strategies. While we achieve mean recall improvements (+1.3), gaps still remain, indicating an open area of future work for the community.<|reference_end|> | arxiv | @article{buettner2024quantifying,
title={Quantifying the Gaps Between Translation and Native Perception in
Training for Multimodal, Multilingual Retrieval},
author={Kyle Buettner, Adriana Kovashka},
journal={arXiv preprint arXiv:2410.02027},
year={2024},
archivePrefix={arXiv},
eprint={2410.02027},
primaryClass={cs.CV cs.AI}
} | buettner2024quantifying |
arxiv-664789 | 2410.02028 | Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions | <|reference_start|>Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions: Classification is a core NLP task architecture with many potential applications. While large language models (LLMs) have brought substantial advancements in text generation, their potential for enhancing classification tasks remains underexplored. To address this gap, we propose a framework for thoroughly investigating fine-tuning LLMs for classification, including both generation- and encoding-based approaches. We instantiate this framework in edit intent classification (EIC), a challenging and underexplored classification task. Our extensive experiments and systematic comparisons with various training approaches and a representative selection of LLMs yield new insights into their application for EIC. We investigate the generalizability of these findings on five further classification tasks. To demonstrate the proposed methods and address the data shortage for empirical edit analysis, we use our best-performing EIC model to create Re3-Sci2.0, a new large-scale dataset of 1,780 scientific document revisions with over 94k labeled edits. The quality of the dataset is assessed through human evaluation. The new dataset enables an in-depth empirical study of human editing behavior in academic writing. We make our experimental framework, models and data publicly available.<|reference_end|> | arxiv | @article{ruan2024are,
title={Are Large Language Models Good Classifiers? A Study on Edit Intent
Classification in Scientific Document Revisions},
author={Qian Ruan, Ilia Kuznetsov, Iryna Gurevych},
journal={arXiv preprint arXiv:2410.02028},
year={2024},
archivePrefix={arXiv},
eprint={2410.02028},
primaryClass={cs.CL}
} | ruan2024are |
arxiv-664790 | 2410.02029 | XChainWatcher: Monitoring and Identifying Attacks in Cross-Chain Bridges | <|reference_start|>XChainWatcher: Monitoring and Identifying Attacks in Cross-Chain Bridges: Cross-chain bridges are widely used blockchain interoperability mechanisms. However, several of these bridges have vulnerabilities that have caused 3.2 billion dollars in losses since May 2021. Some studies have revealed the existence of these vulnerabilities, but little quantitative research is available, and there are no safeguard mechanisms to protect bridges from such attacks. We propose XChainWatcher, the first mechanism for monitoring bridges and detecting attacks against them. XChainWatcher relies on a cross-chain model powered by a Datalog engine, designed to be pluggable into any cross-chain bridge. Analyzing data from the Ronin and Nomad bridges, we successfully identified the transactions that led to losses of $611M and $190M USD, respectively. XChainWatcher not only uncovers successful attacks but also reveals unintended behavior, such as 37 cross-chain transactions (cctx) that these bridges should not have accepted, failed attempts to exploit Nomad, over $7.8M locked on one chain but never released on Ethereum, and $200K lost due to inadequate interaction with bridges. We provide the first open-source dataset of 81,000 cctxs across three blockchains, capturing $585M and $3.7B in token transfers in Nomad and Ronin, respectively.<|reference_end|> | arxiv | @article{augusto2024xchainwatcher:,
title={XChainWatcher: Monitoring and Identifying Attacks in Cross-Chain Bridges},
author={Andr'e Augusto, Rafael Belchior, Jonas Pfannschmidt, Andr'e
Vasconcelos and Miguel Correia},
journal={arXiv preprint arXiv:2410.02029},
year={2024},
archivePrefix={arXiv},
eprint={2410.02029},
primaryClass={cs.CR cs.DC}
} | augusto2024xchainwatcher: |
arxiv-664791 | 2410.02031 | Scene Flow as a Partial Differential Equation | <|reference_start|>Scene Flow as a Partial Differential Equation: We reframe scene flow as the problem of estimating a continuous space and time PDE that describes motion for an entire observation sequence, represented with a neural prior. Our resulting unsupervised method, EulerFlow, produces high quality scene flow on real-world data across multiple domains, including large-scale autonomous driving scenes and dynamic tabletop settings. Notably, EulerFlow produces high quality flow on small, fast moving objects like birds and tennis balls, and exhibits emergent 3D point tracking behavior by solving its estimated PDE over long time horizons. On the Argoverse 2 2024 Scene Flow Challenge, EulerFlow outperforms all prior art, beating the next best unsupervised method by over 2.5x and the next best supervised method by over 10%.<|reference_end|> | arxiv | @article{vedder2024neural,
title={Neural Eulerian Scene Flow Fields},
author={Kyle Vedder, Neehar Peri, Ishan Khatri, Siyi Li, Eric Eaton, Mehmet
Kocamaz, Yue Wang, Zhiding Yu, Deva Ramanan, Joachim Pehserl},
journal={arXiv preprint arXiv:2410.02031},
year={2024},
archivePrefix={arXiv},
eprint={2410.02031},
primaryClass={cs.CV}
} | vedder2024neural |
arxiv-664792 | 2410.02033 | Model Comparisons: XNet Outperforms KAN | <|reference_start|>Model Comparisons: XNet Outperforms KAN: In the fields of computational mathematics and artificial intelligence, precise data modeling is crucial, especially for predictive machine learning tasks. This paper further explores XNet, a novel algorithm that employs the complex-valued Cauchy integral formula, offering a superior network architecture that surpasses traditional Multi-Layer Perceptrons (MLPs) and Kolmogorov-Arnold Networks (KANs). XNet significantly improves speed and accuracy across various tasks in both low- and high-dimensional spaces, redefining the scope of data-driven model development and providing substantial improvements over established time-series models like LSTMs.<|reference_end|> | arxiv | @article{li2024model,
title={Model Comparisons: XNet Outperforms KAN},
author={Xin Li, Zhihong Jeff Xia, Xiaotao Zheng},
journal={arXiv preprint arXiv:2410.02033},
year={2024},
archivePrefix={arXiv},
eprint={2410.02033},
primaryClass={cs.LG cs.AI}
} | li2024model |
arxiv-664793 | 2410.02035 | Tuning Frequency Bias of State Space Models | <|reference_start|>Tuning Frequency Bias of State Space Models: State space models (SSMs) leverage linear, time-invariant (LTI) systems to effectively learn sequences with long-range dependencies. By analyzing the transfer functions of LTI systems, we find that SSMs exhibit an implicit bias toward capturing low-frequency components more effectively than high-frequency ones. This behavior aligns with the broader notion of frequency bias in deep learning model training. We show that the initialization of an SSM assigns it an innate frequency bias and that training the model in a conventional way does not alter this bias. Based on our theory, we propose two mechanisms to tune frequency bias: either by scaling the initialization to tune the inborn frequency bias; or by applying a Sobolev-norm-based filter to adjust the sensitivity of the gradients to high-frequency inputs, which allows us to change the frequency bias via training. Using an image-denoising task, we empirically show that we can strengthen, weaken, or even reverse the frequency bias using both mechanisms. By tuning the frequency bias, we can also improve SSMs' performance on learning long-range sequences, averaging an 88.26% accuracy on the Long-Range Arena (LRA) benchmark tasks.<|reference_end|> | arxiv | @article{yu2024tuning,
title={Tuning Frequency Bias of State Space Models},
author={Annan Yu, Dongwei Lyu, Soon Hoe Lim, Michael W. Mahoney, N. Benjamin
Erichson},
journal={arXiv preprint arXiv:2410.02035},
year={2024},
archivePrefix={arXiv},
eprint={2410.02035},
primaryClass={cs.LG stat.ML}
} | yu2024tuning |
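A small numpy illustration of the frequency-bias argument sketched in the abstract above: for a diagonal LTI SSM the transfer function is G(iω) = Σ_j c_j b_j / (iω − a_j), so where the poles a_j sit at initialization decides which frequencies the model responds to, and scaling that initialization shifts the response. The HiPPO-style pole spread and the "center of mass" summary below are assumptions for this sketch, not the paper's exact mechanism.

```python
import numpy as np

def transfer_magnitude(poles, b, c, omegas):
    """|G(i*omega)| for the diagonal SSM x' = diag(poles) x + b u, y = c x."""
    iw = 1j * omegas[:, None]                    # shape (num_freqs, 1)
    return np.abs((c * b / (iw - poles)).sum(axis=1))

n = 32
rng = np.random.default_rng(0)
b = np.ones(n)
c = np.abs(rng.normal(size=n)) / n
omegas = np.logspace(-1, 3, 400)

for scale in (0.1, 1.0, 10.0):                   # scale the imaginary part of the init
    poles = -0.5 + 1j * np.pi * np.arange(n) * scale
    mag = transfer_magnitude(poles, b, c, omegas)
    center = (omegas * mag).sum() / mag.sum()    # crude "center of mass" of the response
    print(f"init scale {scale:>4}: response centered near omega = {center:.1f}")
```

Larger scales push the poles' imaginary parts (and hence the response) toward higher frequencies, which is the behavior the first tuning mechanism in the abstract exploits.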
arxiv-664794 | 2410.02038 | Realizable Continuous-Space Shields for Safe Reinforcement Learning | <|reference_start|>Realizable Continuous-Space Shields for Safe Reinforcement Learning: While Deep Reinforcement Learning (DRL) has achieved remarkable success across various domains, it remains vulnerable to occasional catastrophic failures without additional safeguards. One effective solution to prevent these failures is to use a shield that validates and adjusts the agent's actions to ensure compliance with a provided set of safety specifications. For real-life robot domains, it is desirable to be able to define such safety specifications over continuous state and action spaces to accurately account for system dynamics and calculate new safe actions that minimally alter the agent's output. In this paper, we propose the first shielding approach to automatically guarantee the realizability of safety requirements for continuous state and action spaces. Realizability is an essential property that confirms the shield will always be able to generate a safe action for any state in the environment. We formally prove that realizability can also be verified with a stateful shield, enabling the incorporation of non-Markovian safety requirements. Finally, we demonstrate the effectiveness of our approach in ensuring safety without compromising policy accuracy by applying it to a navigation problem and a multi-agent particle environment.<|reference_end|> | arxiv | @article{kim2024realizable,
title={Realizable Continuous-Space Shields for Safe Reinforcement Learning},
author={Kyungmin Kim, Davide Corsi, Andoni Rodriguez, JB Lanier, Benjami
Parellada, Pierre Baldi, Cesar Sanchez, Roy Fox},
journal={arXiv preprint arXiv:2410.02038},
year={2024},
archivePrefix={arXiv},
eprint={2410.02038},
primaryClass={cs.LG}
} | kim2024realizable |
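A toy Python sketch of the shielding idea in the abstract above: given a continuous action proposed by the agent, return the closest action that still satisfies a safety specification, so the correction is minimal. The one-step clearance constraint for a point robot and the bisection search are made-up stand-ins, not the paper's shield-synthesis or realizability-checking procedure.

```python
import numpy as np

def shield(pos: np.ndarray, action: np.ndarray, obstacle: np.ndarray,
           min_dist: float, dt: float = 0.1) -> np.ndarray:
    """Minimally scale the velocity command so the next state keeps min_dist clearance."""
    nxt = pos + dt * action
    if np.linalg.norm(nxt - obstacle) >= min_dist:
        return action                         # already safe, pass through unchanged
    lo, hi = 0.0, 1.0
    for _ in range(30):                       # bisection on the scaling factor
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(pos + dt * mid * action - obstacle) >= min_dist:
            lo = mid
        else:
            hi = mid
    return lo * action

pos, obstacle = np.array([0.0, 0.0]), np.array([0.4, 0.0])
unsafe_action = np.array([5.0, 0.0])          # would drive straight at the obstacle
print(shield(pos, unsafe_action, obstacle, min_dist=0.2))
```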
arxiv-664795 | 2410.02040 | Clid: Identifying TLS Clients With Unsupervised Learning on Domain Names | <|reference_start|>Clid: Identifying TLS Clients With Unsupervised Learning on Domain Names: In this paper, we introduce Clid, a Transport Layer Security (TLS) client identification tool based on unsupervised learning on domain names in the server name indication (SNI) field. Clid aims to provide some information on a wide range of clients, even though it may not be able to identify a definitive characteristic about each one of the clients. This is a different approach from that of many existing rule-based client identification tools that rely on hardcoded databases to identify granular characteristics of a few clients. Oftentimes, these tools can identify only a small number of clients in a real-world network as their databases grow outdated, which motivates an alternative approach like Clid. For this research, we utilize some 345 million anonymized TLS handshakes collected from a large university campus network. From each handshake, we create a TCP fingerprint that identifies each unique client that corresponds to a physical device on the network. Clid uses Bayesian optimization to find the 'optimal' DBSCAN clustering of clients and domain names for a set of TLS connections. Clid maps each client cluster to one or more domain clusters that are most strongly associated with it based on the frequency and exclusivity of their TLS connections. While learning highly associated domain names of a client may not immediately tell us specific characteristics of the client like its operating system, manufacturer, or TLS configuration, it may serve as a strong first step to doing so. We evaluate Clid's performance on various subsets of our captured TLS handshakes and on different parameter settings that affect the granularity of identification results. Our experiments show that Clid is able to identify 'strongly associated' domain names for at least 60% of all clients in all our experiments.<|reference_end|> | arxiv | @article{nam2024clid:,
title={Clid: Identifying TLS Clients With Unsupervised Learning on Domain Names},
author={Ihyun Nam, Gerry Wan},
journal={arXiv preprint arXiv:2410.02040},
year={2024},
archivePrefix={arXiv},
eprint={2410.02040},
primaryClass={cs.NI}
} | nam2024clid: |
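A rough sketch of the clustering step the abstract above describes: embed SNI domain names, cluster them with DBSCAN, and associate each client with the domain cluster it contacts most often. The character n-gram features, the fixed eps/min_samples (which the paper instead tunes with Bayesian optimization), and the toy connection data are all assumptions.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

connections = [                       # (client fingerprint, SNI domain) pairs
    ("clientA", "updates.vendor-x.com"), ("clientA", "telemetry.vendor-x.com"),
    ("clientB", "cdn.streamsite.net"),   ("clientB", "video.streamsite.net"),
    ("clientC", "updates.vendor-x.com"),
]

domains = sorted({d for _, d in connections})
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
X = vec.fit_transform(domains).toarray()
labels = DBSCAN(eps=0.5, min_samples=1, metric="cosine").fit_predict(X)
domain_cluster = dict(zip(domains, labels))

# Map each client to the domain cluster it talks to most often.
assoc = {}
for client in {c for c, _ in connections}:
    counts = Counter(domain_cluster[d] for c, d in connections if c == client)
    assoc[client] = counts.most_common(1)[0][0]
print(assoc)
```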
arxiv-664796 | 2410.02042 | EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning | <|reference_start|>EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning: Federated Learning (FL) is a technique that allows multiple parties to train a shared model collaboratively without disclosing their private data. It has become increasingly popular due to its distinct privacy advantages. However, FL models can suffer from biases against certain demographic groups (e.g., racial and gender groups) due to the heterogeneity of data and party selection. Researchers have proposed various strategies for characterizing the group fairness of FL algorithms to address this issue. However, the effectiveness of these strategies in the face of deliberate adversarial attacks has not been fully explored. Although existing studies have revealed various threats (e.g., model poisoning attacks) against FL systems caused by malicious participants, their primary aim is to decrease model accuracy, while the potential of leveraging poisonous model updates to exacerbate model unfairness remains unexplored. In this paper, we propose a new type of model poisoning attack, EAB-FL, with a focus on exacerbating group unfairness while maintaining a good level of model utility. Extensive experiments on three datasets demonstrate the effectiveness and efficiency of our attack, even with state-of-the-art fairness optimization algorithms and secure aggregation rules employed.<|reference_end|> | arxiv | @article{meerza2024eab-fl:,
title={EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in
Federated Learning},
author={Syed Irfan Ali Meerza, Jian Liu},
journal={arXiv preprint arXiv:2410.02042},
year={2024},
doi={10.24963/ijcai.2024/51},
archivePrefix={arXiv},
eprint={2410.02042},
primaryClass={cs.LG cs.AI cs.CR}
} | meerza2024eab-fl: |
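A schematic (not the paper's attack) of where a poisoned update enters federated averaging; the "malicious" update below is just an arbitrary biased direction standing in for the optimized update EAB-FL would craft to exacerbate group unfairness while preserving overall utility.

```python
import numpy as np

def fedavg(updates: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Weighted average of client model updates, as in standard FedAvg."""
    return np.average(np.stack(updates), axis=0, weights=np.asarray(weights, dtype=float))

rng = np.random.default_rng(0)
dim = 10
honest_updates = [rng.normal(scale=0.01, size=dim) for _ in range(9)]

# Hypothetical adversary: nudges the global model along a fixed direction that, in the
# paper's threat model, would degrade performance for a target demographic group.
bias_direction = np.eye(dim)[0]
malicious_update = 0.05 * bias_direction

global_update = fedavg(honest_updates + [malicious_update], weights=[1.0] * 10)
print("component along biased direction:", global_update @ bias_direction)
```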
arxiv-664797 | 2410.02043 | Impact of White-Box Adversarial Attacks on Convolutional Neural Networks | <|reference_start|>Impact of White-Box Adversarial Attacks on Convolutional Neural Networks: Autonomous vehicle navigation and healthcare diagnostics are among the many fields where the reliability and security of machine learning models for image data are critical. We conduct a comprehensive investigation into the susceptibility of Convolutional Neural Networks (CNNs), which are widely used for image data, to white-box adversarial attacks. We investigate the effects of various sophisticated attacks -- Fast Gradient Sign Method, Basic Iterative Method, Jacobian-based Saliency Map Attack, Carlini & Wagner, Projected Gradient Descent, and DeepFool -- on CNN performance metrics (e.g., loss, accuracy), the differential efficacy of adversarial techniques in increasing error rates, the relationship between perceived image quality metrics (e.g., ERGAS, PSNR, SSIM, and SAM) and classification performance, and the comparative effectiveness of iterative versus single-step attacks. Using the MNIST, CIFAR-10, CIFAR-100, and Fashion-MNIST datasets, we explore the effect of different attacks on the CNNs' performance metrics by varying the hyperparameters of CNNs. Our study provides insights into the robustness of CNNs against adversarial threats, pinpoints vulnerabilities, and underscores the urgent need for developing robust defense mechanisms to protect CNNs and ensure their trustworthy deployment in real-world scenarios.<|reference_end|> | arxiv | @article{podder2024impact,
title={Impact of White-Box Adversarial Attacks on Convolutional Neural Networks},
author={Rakesh Podder and Sudipto Ghosh},
journal={arXiv preprint arXiv:2410.02043},
year={2024},
archivePrefix={arXiv},
eprint={2410.02043},
primaryClass={cs.CR cs.LG cs.NE}
} | podder2024impact |
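As a concrete reference point, a minimal PyTorch implementation of FGSM, the first attack listed in the abstract above: perturb the input by epsilon times the sign of the input gradient of the loss. The tiny CNN and random inputs are placeholders for the paper's models and datasets.

```python
import torch
import torch.nn as nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """One-step Fast Gradient Sign Method attack on image inputs in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a tiny CNN on random "images" standing in for MNIST-like inputs.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y, eps=0.1)
print((x_adv - x).abs().max())   # perturbation is bounded by eps
```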
arxiv-664798 | 2410.02046 | QuickCheck for VDM | <|reference_start|>QuickCheck for VDM: We describe recent work on a lightweight verification tool for VDM specifications, called QuickCheck. The objective of the tool is to quickly categorise proof obligations: identifying those that fail with counterexamples, those that are probably provable and those that require deeper analysis. The paper discusses the design of the tool and its use of pluggable strategies for adding extra checking. We present the results of the tool being used to check a large set of VDM specifications, and suggest future directions.<|reference_end|> | arxiv | @article{battle2024quickcheck,
title={QuickCheck for VDM},
author={Nick Battle and Markus Solecki Ellyton},
journal={arXiv preprint arXiv:2410.02046},
year={2024},
number={OVT22/2024/01},
archivePrefix={arXiv},
eprint={2410.02046},
primaryClass={cs.SE}
} | battle2024quickcheck |
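A hedged illustration of QuickCheck-style triage of proof obligations, in the spirit of the abstract above: try random inputs first, report a counterexample if one is found, and otherwise mark the obligation as probably provable. Obligations here are plain Python predicates; the actual tool operates on VDM proof obligations and pluggable checking strategies, not Python.

```python
import random

def triage(obligation, gen, trials: int = 1000):
    """Return ('FAILED', counterexample) or ('MAYBE PROVABLE', None)."""
    for _ in range(trials):
        x = gen()
        try:
            if not obligation(x):
                return "FAILED", x
        except Exception:
            return "FAILED", x            # a runtime error also counts as a failure
    return "MAYBE PROVABLE", None

# Example: a (false) obligation claiming integer division is always exact.
print(triage(lambda p: (p[0] // p[1]) * p[1] == p[0],
             gen=lambda: (random.randint(-100, 100), random.randint(1, 10))))
```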
arxiv-664799 | 2410.02048 | FeelAnyForce: Estimating Contact Force Feedback from Tactile Sensation for Vision-Based Tactile Sensors | <|reference_start|>FeelAnyForce: Estimating Contact Force Feedback from Tactile Sensation for Vision-Based Tactile Sensors: In this paper, we tackle the problem of estimating 3D contact forces using vision-based tactile sensors. In particular, our goal is to estimate contact forces over a large range (up to 15 N) on any objects while generalizing across different vision-based tactile sensors. Thus, we collected a dataset of over 200K indentations using a robotic arm that pressed various indenters onto a GelSight Mini sensor mounted on a force sensor and then used the data to train a multi-head transformer for force regression. Strong generalization is achieved via accurate data collection and multi-objective optimization that leverages depth contact images. Despite being trained only on primitive shapes and textures, the regressor achieves a mean absolute error of 4\% on a dataset of unseen real-world objects. We further evaluate our approach's generalization capability to other GelSight mini and DIGIT sensors, and propose a reproducible calibration procedure for adapting the pre-trained model to other vision-based sensors. Furthermore, the method was evaluated on real-world tasks, including weighing objects and controlling the deformation of delicate objects, which relies on accurate force feedback. Project webpage: http://prg.cs.umd.edu/FeelAnyForce<|reference_end|> | arxiv | @article{shahidzadeh2024feelanyforce:,
title={FeelAnyForce: Estimating Contact Force Feedback from Tactile Sensation
for Vision-Based Tactile Sensors},
author={Amir-Hossein Shahidzadeh, Gabriele Caddeo, Koushik Alapati, Lorenzo
Natale, Cornelia Fermüller, Yiannis Aloimonos},
journal={arXiv preprint arXiv:2410.02048},
year={2024},
archivePrefix={arXiv},
eprint={2410.02048},
primaryClass={cs.RO cs.CV}
} | shahidzadeh2024feelanyforce: |
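Not the paper's multi-head transformer, just a minimal regression sketch of the task framing described above: map a tactile image to a 3D contact-force vector and train with a simple regression loss. The input size, architecture, and force targets are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ForceRegressor(nn.Module):
    """Tiny CNN mapping a tactile image to a 3D force vector (Fx, Fy, Fz)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(img))

model = ForceRegressor()
img = torch.rand(2, 3, 224, 224)               # placeholder GelSight-style frames
target = torch.tensor([[0.0, 0.0, 5.0], [1.0, -0.5, 12.0]])  # forces in newtons (assumed)
loss = nn.functional.mse_loss(model(img), target)
loss.backward()
```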
arxiv-664800 | 2410.02049 | Emo3D: Metric and Benchmarking Dataset for 3D Facial Expression Generation from Emotion Description | <|reference_start|>Emo3D: Metric and Benchmarking Dataset for 3D Facial Expression Generation from Emotion Description: Existing 3D facial emotion modeling has been constrained by limited emotion classes and insufficient datasets. This paper introduces "Emo3D", an extensive "Text-Image-Expression dataset" spanning a wide spectrum of human emotions, each paired with images and 3D blendshapes. Leveraging Large Language Models (LLMs), we generate a diverse array of textual descriptions, facilitating the capture of a broad spectrum of emotional expressions. Using this unique dataset, we conduct a comprehensive evaluation of language-based models' fine-tuning and vision-language models like Contrastive Language Image Pretraining (CLIP) for 3D facial expression synthesis. We also introduce a new evaluation metric for this task to more directly measure the conveyed emotion. Our new evaluation metric, Emo3D, demonstrates its superiority over Mean Squared Error (MSE) metrics in assessing visual-text alignment and semantic richness in 3D facial expressions associated with human emotions. "Emo3D" has great applications in animation design, virtual reality, and emotional human-computer interaction.<|reference_end|> | arxiv | @article{dehghani2024emo3d:,
title={Emo3D: Metric and Benchmarking Dataset for 3D Facial Expression
Generation from Emotion Description},
author={Mahshid Dehghani, Amirahmad Shafiee, Ali Shafiei, Neda Fallah,
Farahmand Alizadeh, Mohammad Mehdi Gholinejad, Hamid Behroozi, Jafar Habibi,
Ehsaneddin Asgari},
journal={arXiv preprint arXiv:2410.02049},
year={2024},
archivePrefix={arXiv},
eprint={2410.02049},
primaryClass={cs.CV cs.CL cs.GR}
} | dehghani2024emo3d: |