corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-663101
|
2409.19352
|
Analytical Construction of CBF-Based Safety Filters for Simultaneous State and Input Constraints
|
<|reference_start|>Analytical Construction of CBF-Based Safety Filters for Simultaneous State and Input Constraints: We revisit the problem explored in [1] of guaranteeing satisfaction of multiple simultaneous state constraints applied to a single-input, single-output plant consisting of a chain of n integrators subject to input limitations. For this problem setting, we derive an analytic, easy-to-implement safety filter which respects input limitations and ensures forward-invariance of all state constraints simultaneously. Additionally, we provide a straightforward extension to the multi-input, multi-output chained integrator setting, and provide an analytic safety filter guaranteeing satisfaction of arbitrarily many simultaneous hyperplane constraints on the output vector. Whereas the approach in [1] obtains maximal invariant sets, our approach trades off some degree of conservatism in exchange for a recursive safety filter which is analytic for any arbitrary n >= 1.<|reference_end|>
|
arxiv
|
@article{fisher2024analytical,
title={Analytical Construction of CBF-Based Safety Filters for Simultaneous
State and Input Constraints},
author={Peter A. Fisher, Anuradha M. Annaswamy},
journal={arXiv preprint arXiv:2409.19352},
year={2024},
archivePrefix={arXiv},
eprint={2409.19352},
primaryClass={eess.SY cs.SY}
}
|
fisher2024analytical
|
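The safety-filter construction above is analytic rather than optimization-based; a minimal first-order sketch of the mechanism (a single integrator with one state constraint, not the paper's recursive n-th-order construction; `alpha` and the input limits are illustrative assumptions):

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_max, alpha=1.0, u_min=-1.0, u_max=1.0):
    """Enforce x <= x_max for the plant x_dot = u via the CBF h(x) = x_max - x.

    Forward invariance of {h >= 0} follows from requiring h_dot >= -alpha * h,
    which here reduces to u <= alpha * h(x); the filter modifies u_nom only
    when that condition would be violated.
    """
    h = x_max - x
    u_safe = min(u_nom, alpha * h)                 # CBF condition
    return float(np.clip(u_safe, u_min, u_max))    # input limits

# Closed-loop check: the state approaches but never crosses x_max.
x, dt = 0.0, 0.01
for _ in range(2000):
    x += dt * cbf_safety_filter(x, u_nom=2.0, x_max=1.0)
assert x <= 1.0 + 1e-9
```

The filter leaves the nominal control untouched whenever the barrier condition already holds, which is the minimally invasive behavior safety filters aim for.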
arxiv-663102
|
2409.19354
|
Toward Deep Learning-based Segmentation and Quantitative Analysis of Cervical Spinal Cord Magnetic Resonance Images
|
<|reference_start|>Toward Deep Learning-based Segmentation and Quantitative Analysis of Cervical Spinal Cord Magnetic Resonance Images: This research proposal discusses two challenges in the field of medical image analysis: the multi-parametric investigation on microstructural and macrostructural characteristics of the cervical spinal cord and deep learning-based medical image segmentation. First, we conduct a thorough analysis of the cervical spinal cord within a healthy population. Unlike most previous studies, which required medical professionals to perform functional examinations using metrics like the modified Japanese Orthopaedic Association (mJOA) score or the American Spinal Injury Association (ASIA) impairment scale, this research focuses solely on Magnetic Resonance (MR) images of the cervical spinal cord. Second, we employ cutting-edge deep learning-based segmentation methods to achieve highly accurate macrostructural measurements from MR images. To this end, we propose an enhanced UNet-like Transformer-based framework with attentive skip connections. This paper reports on the problem domain, proposed solutions, current status of research, and expected contributions.<|reference_end|>
|
arxiv
|
@article{elahi2024toward,
title={Toward Deep Learning-based Segmentation and Quantitative Analysis of
Cervical Spinal Cord Magnetic Resonance Images},
author={Maryam Tavakol Elahi (The University of Ottawa)},
journal={arXiv preprint arXiv:2409.19354},
year={2024},
archivePrefix={arXiv},
eprint={2409.19354},
primaryClass={eess.IV cs.CV}
}
|
elahi2024toward
|
arxiv-663103
|
2409.19356
|
Steering Prediction via a Multi-Sensor System for Autonomous Racing
|
<|reference_start|>Steering Prediction via a Multi-Sensor System for Autonomous Racing: Autonomous racing has rapidly gained research attention. Traditionally, racing cars rely on 2D LiDAR as their primary visual system. In this work, we explore the integration of an event camera with the existing system to provide enhanced temporal information. Our goal is to fuse the 2D LiDAR data with event data in an end-to-end learning framework for steering prediction, which is crucial for autonomous racing. To the best of our knowledge, this is the first study addressing this challenging research topic. We start by creating a multisensor dataset specifically for steering prediction. Using this dataset, we establish a benchmark by evaluating various SOTA fusion methods. Our observations reveal that existing methods often incur substantial computational costs. To address this, we apply low-rank techniques to propose a novel, efficient, and effective fusion design. We introduce a new fusion learning policy to guide the fusion process, enhancing robustness against misalignment. Our fusion architecture provides better steering prediction than LiDAR alone, significantly reducing the RMSE from 7.72 to 1.28. Compared to the second-best fusion method, our work represents only 11% of the learnable parameters while achieving better accuracy. The source code, dataset, and benchmark will be released to promote future research.<|reference_end|>
|
arxiv
|
@article{zhou2024steering,
title={Steering Prediction via a Multi-Sensor System for Autonomous Racing},
author={Zhuyun Zhou, Zongwei Wu, Florian Bolli, R\'emi Boutteau, Fan Yang,
Radu Timofte, Dominique Ginhac, Tobi Delbruck},
journal={arXiv preprint arXiv:2409.19356},
year={2024},
archivePrefix={arXiv},
eprint={2409.19356},
primaryClass={cs.CV cs.RO}
}
|
zhou2024steering
|
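The fusion design above owes its parameter efficiency to low-rank techniques; a hedged sketch of generic low-rank bilinear fusion (the dimensions, the `LowRankFusion` name, and the single-output head are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Low-rank bilinear fusion: project each modality to a shared rank-r
    space and interact via elementwise product, so the parameter count
    grows with r rather than with d1 * d2."""
    def __init__(self, d1, d2, rank, d_out):
        super().__init__()
        self.U = nn.Linear(d1, rank, bias=False)
        self.V = nn.Linear(d2, rank, bias=False)
        self.out = nn.Linear(rank, d_out)

    def forward(self, x1, x2):
        return self.out(self.U(x1) * self.V(x2))  # rank-constrained interaction

fusion = LowRankFusion(d1=256, d2=64, rank=16, d_out=1)  # e.g. steering angle
lidar_feat, event_feat = torch.randn(8, 256), torch.randn(8, 64)
steering = fusion(lidar_feat, event_feat)  # shape (8, 1)
```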
arxiv-663104
|
2409.19359
|
Quantum delegated and federated learning via quantum homomorphic encryption
|
<|reference_start|>Quantum delegated and federated learning via quantum homomorphic encryption: Quantum learning models hold the potential to bring computational advantages over the classical realm. As powerful quantum servers become available on the cloud, ensuring the protection of clients' private data becomes crucial. By incorporating quantum homomorphic encryption schemes, we present a general framework that enables quantum delegated and federated learning with a computation-theoretical data privacy guarantee. We show that learning and inference under this framework feature substantially lower communication complexity compared with schemes based on blind quantum computing. In addition, in the proposed quantum federated learning scenario, there is less computational burden on local quantum devices from the client side, since the server can operate on encrypted quantum data without extracting any information. We further prove that certain quantum speedups in supervised learning carry over to private delegated learning scenarios employing quantum kernel methods. Our results provide a valuable guide toward privacy-guaranteed quantum learning on the cloud, which may benefit future studies and security-related applications.<|reference_end|>
|
arxiv
|
@article{li2024quantum,
title={Quantum delegated and federated learning via quantum homomorphic
encryption},
author={Weikang Li and Dong-Ling Deng},
journal={arXiv preprint arXiv:2409.19359},
year={2024},
archivePrefix={arXiv},
eprint={2409.19359},
primaryClass={quant-ph cs.CR cs.LG}
}
|
li2024quantum
|
arxiv-663105
|
2409.19361
|
Sparse Modelling for Feature Learning in High Dimensional Data
|
<|reference_start|>Sparse Modelling for Feature Learning in High Dimensional Data: This paper presents an innovative approach to dimensionality reduction and feature extraction in high-dimensional datasets, with a specific application focus on wood surface defect detection. The proposed framework integrates sparse modeling techniques, particularly Lasso and proximal gradient methods, into a comprehensive pipeline for efficient and interpretable feature selection. Leveraging pre-trained models such as VGG19 and incorporating anomaly detection methods like Isolation Forest and Local Outlier Factor, our methodology addresses the challenge of extracting meaningful features from complex datasets. Evaluation metrics such as accuracy and F1 score, alongside visualizations, are employed to assess the performance of the sparse modeling techniques. Through this work, we aim to advance the understanding and application of sparse modeling in machine learning, particularly in the context of wood surface defect detection.<|reference_end|>
|
arxiv
|
@article{neelam2024sparse,
title={Sparse Modelling for Feature Learning in High Dimensional Data},
author={Harish Neelam, Koushik Sai Veerella, Souradip Biswas},
journal={arXiv preprint arXiv:2409.19361},
year={2024},
archivePrefix={arXiv},
eprint={2409.19361},
primaryClass={cs.LG cs.CV}
}
|
neelam2024sparse
|
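A minimal sketch of the Lasso-based selection step at the core of the pipeline above, with synthetic features standing in for VGG19 activations (the dimensions and `alpha` are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))              # stand-in for VGG19 features
w_true = np.zeros(512)
w_true[:10] = 2.0                            # 10 informative dimensions
y = X @ w_true + 0.1 * rng.normal(size=200)

X = StandardScaler().fit_transform(X)
lasso = Lasso(alpha=0.1).fit(X, y)           # L1 penalty zeroes most weights
selected = np.flatnonzero(lasso.coef_)
print(f"kept {selected.size} of 512 features")
```

The L1 penalty drives uninformative coefficients exactly to zero, which is what makes the selected feature subset interpretable.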
arxiv-663106
|
2409.19362
|
1st Place Solution of Multiview Egocentric Hand Tracking Challenge ECCV2024
|
<|reference_start|>1st Place Solution of Multiview Egocentric Hand Tracking Challenge ECCV2024: Multi-view egocentric hand tracking is a challenging task and plays a critical role in VR interaction. In this report, we present a method that uses multi-view input images and camera extrinsic parameters to estimate both hand shape and pose. To reduce overfitting to the camera layout, we apply crop jittering and extrinsic parameter noise augmentation. Additionally, we propose an offline neural smoothing post-processing method to further improve the accuracy of hand position and pose. Our method achieves 13.92mm MPJPE on the Umetrack dataset and 21.66mm MPJPE on the HOT3D dataset.<|reference_end|>
|
arxiv
|
@article{zou20241st,
title={1st Place Solution of Multiview Egocentric Hand Tracking Challenge
ECCV2024},
author={Minqiang Zou, Zhi Lv, Riqiang Jin, Tian Zhan, Mochen Yu, Yao Tang,
Jiajun Liang},
journal={arXiv preprint arXiv:2409.19362},
year={2024},
archivePrefix={arXiv},
eprint={2409.19362},
primaryClass={cs.CV cs.AI}
}
|
zou20241st
|
arxiv-663107
|
2409.19363
|
Learning Strategy Representation for Imitation Learning in Multi-Agent Games
|
<|reference_start|>Learning Strategy Representation for Imitation Learning in Multi-Agent Games: The offline datasets for imitation learning (IL) in multi-agent games typically contain player trajectories exhibiting diverse strategies, which necessitate measures to prevent learning algorithms from acquiring undesirable behaviors. Learning representations for these trajectories is an effective approach to depicting the strategies employed by each demonstrator. However, existing learning strategies often require player identification or rely on strong assumptions, which are not appropriate for multi-agent games. Therefore, in this paper, we introduce the Strategy Representation for Imitation Learning (STRIL) framework, which (1) effectively learns strategy representations in multi-agent games, (2) estimates proposed indicators based on these representations, and (3) filters out sub-optimal data using the indicators. STRIL is a plug-in method that can be integrated into existing IL algorithms. We demonstrate the effectiveness of STRIL across competitive multi-agent scenarios, including Two-player Pong, Limit Texas Hold'em, and Connect Four. Our approach successfully acquires strategy representations and indicators, thereby identifying dominant trajectories and significantly enhancing existing IL performance across these environments.<|reference_end|>
|
arxiv
|
@article{lei2024learning,
title={Learning Strategy Representation for Imitation Learning in Multi-Agent
Games},
author={Shiqi Lei, Kanghon Lee, Linjing Li, Jinkyoo Park},
journal={arXiv preprint arXiv:2409.19363},
year={2024},
archivePrefix={arXiv},
eprint={2409.19363},
primaryClass={cs.MA cs.AI cs.LG}
}
|
lei2024learning
|
arxiv-663108
|
2409.19365
|
Conditional Image Synthesis with Diffusion Models: A Survey
|
<|reference_start|>Conditional Image Synthesis with Diffusion Models: A Survey: Conditional image synthesis based on user-specified requirements is a key component in creating complex visual content. In recent years, diffusion-based generative modeling has become a highly effective way for conditional image synthesis, leading to exponential growth in the literature. However, the complexity of diffusion-based modeling, the wide range of image synthesis tasks, and the diversity of conditioning mechanisms present significant challenges for researchers to keep up with rapid developments and understand the core concepts on this topic. In this survey, we categorize existing works based on how conditions are integrated into the two fundamental components of diffusion-based modeling, i.e., the denoising network and the sampling process. We specifically highlight the underlying principles, advantages, and potential challenges of various conditioning approaches in the training, re-purposing, and specialization stages to construct a desired denoising network. We also summarize six mainstream conditioning mechanisms in the essential sampling process. All discussions are centered around popular applications. Finally, we pinpoint some critical yet still open problems to be solved in the future and suggest some possible solutions. Our reviewed works are itemized at https://github.com/zju-pi/Awesome-Conditional-Diffusion-Models.<|reference_end|>
|
arxiv
|
@article{zhan2024conditional,
title={Conditional Image Synthesis with Diffusion Models: A Survey},
author={Zheyuan Zhan, Defang Chen, Jian-Ping Mei, Zhenghe Zhao, Jiawei Chen,
Chun Chen, Siwei Lyu, Can Wang},
journal={arXiv preprint arXiv:2409.19365},
year={2024},
archivePrefix={arXiv},
eprint={2409.19365},
primaryClass={cs.CV cs.AI}
}
|
zhan2024conditional
|
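Classifier-free guidance is a representative example of the sampling-process conditioning mechanisms the survey above categorizes; a sketch assuming `eps_model` is a noise-prediction callable that treats `cond=None` as the unconditional branch:

```python
import numpy as np

def cfg_denoise(eps_model, x_t, t, cond, guidance_scale=5.0):
    """Classifier-free guidance: combine conditional and unconditional
    noise predictions at sampling time, with no change to training."""
    eps_uncond = eps_model(x_t, t, None)
    eps_cond = eps_model(x_t, t, cond)
    # Extrapolate from the unconditional toward the conditional prediction.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Dummy denoiser standing in for a trained network.
def dummy_eps(x, t, cond):
    return 0.1 * x if cond is None else 0.1 * x + 0.01

x = np.zeros((1, 4, 4))
print(cfg_denoise(dummy_eps, x, t=10, cond="class 3").mean())  # 0.05
```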
arxiv-663109
|
2409.19366
|
Mind the Gap: Promoting Missing Modality Brain Tumor Segmentation with Alignment
|
<|reference_start|>Mind the Gap: Promoting Missing Modality Brain Tumor Segmentation with Alignment: Brain tumor segmentation is often based on multiple magnetic resonance imaging (MRI). However, in clinical practice, certain modalities of MRI may be missing, which presents an even more difficult scenario. To cope with this challenge, knowledge distillation has emerged as one promising strategy. However, recent efforts typically overlook the modality gaps and thus fail to learn invariant feature representations across different modalities. Such drawback consequently leads to limited performance for both teachers and students. To ameliorate these problems, in this paper, we propose a novel paradigm that aligns latent features of involved modalities to a well-defined distribution anchor. As a major contribution, we prove that our novel training paradigm ensures a tight evidence lower bound, thus theoretically certifying its effectiveness. Extensive experiments on different backbones validate that the proposed paradigm can enable invariant feature representations and produce a teacher with narrowed modality gaps. This further offers superior guidance for missing modality students, achieving an average improvement of 1.75 on dice score.<|reference_end|>
|
arxiv
|
@article{liu2024mind,
title={Mind the Gap: Promoting Missing Modality Brain Tumor Segmentation with
Alignment},
author={Tianyi Liu and Zhaorui Tan and Haochuan Jiang and Xi Yang and Kaizhu
Huang},
journal={arXiv preprint arXiv:2409.19366},
year={2024},
archivePrefix={arXiv},
eprint={2409.19366},
primaryClass={eess.IV cs.AI cs.CV}
}
|
liu2024mind
|
arxiv-663110
|
2409.19370
|
MambaEviScrib: Mamba and Evidence-Guided Consistency Make CNN Work Robustly for Scribble-Based Weakly Supervised Ultrasound Image Segmentation
|
<|reference_start|>MambaEviScrib: Mamba and Evidence-Guided Consistency Make CNN Work Robustly for Scribble-Based Weakly Supervised Ultrasound Image Segmentation: Segmenting anatomical structures and lesions from ultrasound images contributes to disease assessment, diagnosis, and treatment. Weakly supervised learning (WSL) based on sparse annotation has achieved encouraging performance and demonstrated the potential to reduce annotation costs. However, ultrasound images often suffer from issues such as poor contrast, unclear edges, as well as varying sizes and locations of lesions. This makes it challenging for convolutional networks with local receptive fields to extract global morphological features from the sparse information provided by scribble annotations. Recently, the visual Mamba based on state space sequence models (SSMs) has significantly reduced computational complexity while ensuring long-range dependencies compared to Transformers. Consequently, for the first time, we apply scribble-based WSL to ultrasound image segmentation and propose a novel hybrid CNN-Mamba framework. Furthermore, due to the characteristics of ultrasound images and insufficient supervision signals, existing consistency regularization often filters out predictions near decision boundaries, leading to unstable predictions of edges. Hence, we introduce the Dempster-Shafer theory (DST) of evidence to devise an Evidence-Guided Consistency (EGC) strategy, which leverages high-evidence predictions more likely to occur near high-density regions to guide low-evidence predictions potentially present near decision boundaries for optimization. During training, the collaboration between the CNN branch and the Mamba branch in the proposed framework draws inspiration from each other based on the EGC strategy. Extensive experiments on four ultrasound public datasets for binary-class and multi-class segmentation demonstrate the competitiveness of the proposed method. The scribble-annotated dataset and code will be made available on https://github.com/GtLinyer/MambaEviScrib.<|reference_end|>
|
arxiv
|
@article{han2024mambaeviscrib:,
title={MambaEviScrib: Mamba and Evidence-Guided Consistency Enhance CNN
Robustness for Scribble-Based Weakly Supervised Ultrasound Image Segmentation},
author={Xiaoxiang Han, Xinyu Li, Jiang Shang, Yiman Liu, Keyan Chen, Shugong
Xu, Qiaohong Liu, Qi Zhang},
journal={arXiv preprint arXiv:2409.19370},
year={2024},
archivePrefix={arXiv},
eprint={2409.19370},
primaryClass={eess.IV cs.CV}
}
|
han2024mambaeviscrib:
|
arxiv-663111
|
2409.19371
|
Efficient Semantic Diffusion Architectures for Model Training on Synthetic Echocardiograms
|
<|reference_start|>Efficient Semantic Diffusion Architectures for Model Training on Synthetic Echocardiograms: We investigate the utility of diffusion generative models to efficiently synthesise datasets that effectively train deep learning models for image analysis. Specifically, we propose novel $\Gamma$-distribution Latent Denoising Diffusion Models (LDMs) designed to generate semantically guided synthetic cardiac ultrasound images with improved computational efficiency. We also investigate the potential of using these synthetic images as a replacement for real data in training deep networks for left-ventricular segmentation and binary echocardiogram view classification tasks. We compared six diffusion models in terms of the computational cost of generating synthetic 2D echo data, the visual realism of the resulting images, and the performance, on real data, of downstream tasks (segmentation and classification) trained using these synthetic echoes. We compare various diffusion strategies and ODE solvers for their impact on segmentation and classification performance. The results show that our proposed architectures significantly reduce computational costs while maintaining or improving downstream task performance compared to state-of-the-art methods. While other diffusion models generated more realistic-looking echo images at higher computational cost, our research suggests that for model training, visual realism is not necessarily related to model performance, and considerable compute costs can be saved by using more efficient models.<|reference_end|>
|
arxiv
|
@article{stojanovski2024efficient,
title={Efficient Semantic Diffusion Architectures for Model Training on
Synthetic Echocardiograms},
author={David Stojanovski, Mariana da Silva, Pablo Lamata, Arian Beqiri and
Alberto Gomez},
journal={arXiv preprint arXiv:2409.19371},
year={2024},
archivePrefix={arXiv},
eprint={2409.19371},
primaryClass={eess.IV cs.CV}
}
|
stojanovski2024efficient
|
arxiv-663112
|
2409.19375
|
DOTA: Distributional Test-Time Adaptation of Vision-Language Models
|
<|reference_start|>DOTA: Distributional Test-Time Adaptation of Vision-Language Models: Vision-language foundation models (e.g., CLIP) have shown remarkable performance across a wide range of tasks. However, deploying these models may be unreliable when significant distribution gaps exist between the training and test data. The training-free test-time dynamic adapter (TDA) is a promising approach to address this issue by storing representative test samples to guide the classification of subsequent ones. However, TDA only naively maintains a limited number of reference samples in the cache, leading to severe test-time catastrophic forgetting when the cache is updated by dropping samples. In this paper, we propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota). Instead of naively memorizing representative test samples, Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment. The test-time posterior probabilities are then computed using the estimated distributions based on Bayes' theorem for adaptation purposes. To further enhance the adaptability on the uncertain samples, we introduce a new human-in-the-loop paradigm which identifies uncertain samples, collects human-feedback, and incorporates it into the Dota framework. Extensive experiments validate that Dota enables CLIP to continually learn, resulting in a significant improvement compared to current state-of-the-art methods.<|reference_end|>
|
arxiv
|
@article{han2024dota:,
title={DOTA: Distributional Test-Time Adaptation of Vision-Language Models},
author={Zongbo Han, Jialong Yang, Junfan Li, Qinghua Hu, Qianli Xu, Mike Zheng
Shou, Changqing Zhang},
journal={arXiv preprint arXiv:2409.19375},
year={2024},
archivePrefix={arXiv},
eprint={2409.19375},
primaryClass={cs.LG cs.AI cs.CL cs.CV cs.HC}
}
|
han2024dota:
|
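A sketch of the distributional idea behind Dota above: keep running per-class Gaussian estimates of test embeddings rather than a cache of raw samples, and classify with Bayes' theorem. The shared isotropic covariance and the pseudo-label update rule are simplifying assumptions, not the paper's exact estimator:

```python
import numpy as np

class DistributionalTTA:
    """Running per-class Gaussian estimates over test embeddings, with
    classification by Bayes' theorem (shared isotropic covariance)."""

    def __init__(self, n_classes, dim, var=1.0):
        self.mu = np.zeros((n_classes, dim))
        self.count = np.zeros(n_classes)
        self.var = var

    def update(self, z, pseudo_label):
        """Fold one embedding into the running mean of its pseudo-class."""
        c = pseudo_label
        self.count[c] += 1
        self.mu[c] += (z - self.mu[c]) / self.count[c]

    def posterior(self, z):
        """p(class | z) from Gaussian likelihoods and smoothed priors."""
        log_lik = -np.sum((z - self.mu) ** 2, axis=1) / (2 * self.var)
        prior = (self.count + 1) / (self.count.sum() + len(self.count))
        logits = log_lik + np.log(prior)
        p = np.exp(logits - logits.max())
        return p / p.sum()

tta = DistributionalTTA(n_classes=3, dim=4)
tta.update(np.ones(4), pseudo_label=1)
print(tta.posterior(np.ones(4)))  # class 1 is now the most probable
```

Because only sufficient statistics are stored, nothing is ever dropped from a cache, which is the property that avoids the forgetting issue described above.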
arxiv-663113
|
2409.19377
|
How much do we really know about Structure Learning from i.i.d. Data? Interpretable, multi-dimensional Performance Indicator for Causal Discovery
|
<|reference_start|>How much do we really know about Structure Learning from i.i.d. Data? Interpretable, multi-dimensional Performance Indicator for Causal Discovery: Nonlinear causal discovery from observational data imposes strict identifiability assumptions on the formulation of structural equations utilized in the data generating process. The evaluation of structure learning methods under assumption violations requires a rigorous and interpretable approach, which quantifies both the structural similarity of the estimation with the ground truth and the capacity of the discovered graphs to be used for causal inference. Motivated by the lack of a unified performance assessment framework, we introduce an interpretable, six-dimensional evaluation metric, i.e., distance to optimal solution (DOS), which is specifically tailored to the field of causal discovery. Furthermore, this is the first research to assess the performance of structure learning algorithms from seven different families on an increasing percentage of non-identifiable, nonlinear causal patterns, inspired by real-world processes. Our large-scale simulation study, which incorporates seven experimental factors, shows that besides causal order-based methods, amortized causal discovery delivers results with comparatively high proximity to the optimal solution. In addition to the findings from our sensitivity analysis, we explore interaction effects between the experimental factors of our simulation framework in order to provide transparency about the expected performance of causal discovery techniques in different scenarios.<|reference_end|>
|
arxiv
|
@article{velev2024how,
title={How much do we really know about Structure Learning from i.i.d. Data?
Interpretable, multi-dimensional Performance Indicator for Causal Discovery},
author={Georg Velev, Stefan Lessmann},
journal={arXiv preprint arXiv:2409.19377},
year={2024},
archivePrefix={arXiv},
eprint={2409.19377},
primaryClass={stat.ML cs.LG}
}
|
velev2024how
|
arxiv-663114
|
2409.19379
|
Automated conjecturing in mathematics with \emph{TxGraffiti}
|
<|reference_start|>Automated conjecturing in mathematics with \emphTxGraffiti: \emph{TxGraffiti} is a data-driven, heuristic-based computer program developed to automate the process of generating conjectures across various mathematical domains. Since its creation in 2017, \emph{TxGraffiti} has contributed to numerous mathematical publications, particularly in graph theory. In this paper, we present the design and core principles of \emph{TxGraffiti}, including its roots in the original \emph{Graffiti} program, which pioneered the automation of mathematical conjecturing. We describe the data collection process, the generation of plausible conjectures, and methods such as the \emph{Dalmatian} heuristic for filtering out redundant or transitive conjectures. Additionally, we highlight its contributions to the mathematical literature and introduce a new web-based interface that allows users to explore conjectures interactively. While we focus on graph theory, the techniques demonstrated extend to other areas of mathematics.<|reference_end|>
|
arxiv
|
@article{davila2024automated,
title={Automated conjecturing in mathematics with \emph{TxGraffiti}},
author={Randy Davila},
journal={arXiv preprint arXiv:2409.19379},
year={2024},
archivePrefix={arXiv},
eprint={2409.19379},
primaryClass={math.CO cs.AI}
}
|
davila2024automated
|
arxiv-663115
|
2409.19380
|
On Computing Elastic Shape Distances between Curves in d-dimensional Space
|
<|reference_start|>On Computing Elastic Shape Distances between Curves in d-dimensional Space: The computation of the elastic registration of two simple curves in higher dimensions and therefore of the elastic shape distance between them has been investigated by Srivastava et al. Assuming the first curve has one or more starting points, and the second curve has only one, they accomplish the computation, one starting point of the first curve at a time, by minimizing an L2 type distance between them based on alternating computations of optimal diffeomorphisms of the unit interval and optimal rotation matrices that reparametrize and rotate, respectively, one of the curves. We recreate the work by Srivastava et al., but in contrast to it, again for curves in any dimension, we present a Dynamic Programming algorithm for computing optimal diffeomorphisms that is linear, and justify in a purely algebraic manner the usual algorithm for computing optimal rotation matrices, the Kabsch-Umeyama algorithm, which is based on the computation of the singular value decomposition of a matrix. In addition, we minimize the L2 type distance with a procedure that alternates computations of optimal diffeomorphisms with successive computations of optimal rotation matrices for all starting points of the first curve. Carrying out computations this way is not only more efficient all by itself, but, if both curves are closed, allows applications of the Fast Fourier Transform for computing successively in an even more efficient manner, optimal rotation matrices for all starting points of the first curve.<|reference_end|>
|
arxiv
|
@article{bernal2024on,
title={On Computing Elastic Shape Distances between Curves in d-dimensional
Space},
author={Javier Bernal, Jim Lawrence, Gunay Dogan, Charles Hagwood},
journal={2021 NIST Technical Note 2164},
year={2024},
doi={10.6028/NIST.TN.2164},
number={NIST TN 2164},
archivePrefix={arXiv},
eprint={2409.19380},
primaryClass={math.DG cs.CG}
}
|
bernal2024on
|
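The optimal-rotation subproblem above is solved by the Kabsch-Umeyama algorithm; the standard SVD-based computation, in a generic sketch (not the authors' code):

```python
import numpy as np

def kabsch_rotation(P, Q):
    """Optimal rotation R (det +1) minimizing sum_i ||R p_i - q_i||^2,
    from the SVD of the d x d cross-covariance matrix."""
    H = P.T @ Q                                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])   # guard against reflections
    return Vt.T @ D @ U.T

# Sanity check: recover a known planar rotation.
rng = np.random.default_rng(1)
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = rng.normal(size=(50, 2))
Q = P @ R_true.T
assert np.allclose(kabsch_rotation(P, Q), R_true)
```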
arxiv-663116
|
2409.19381
|
MetaMath: Integrating Natural Language and Code for Enhanced Mathematical Reasoning in Large Language Models
|
<|reference_start|>MetaMath: Integrating Natural Language and Code for Enhanced Mathematical Reasoning in Large Language Models: Large Language Models (LLMs) are commonly used to generate solutions for mathematical reasoning problems in the following formats: natural language, code, or a combination of both. In this paper, we explore fundamental questions related to solving mathematical reasoning problems using natural language and code with state-of-the-art LLMs, including GPT-4o-mini and LLama-3.1-8b-Turbo. Our findings show that LLMs are better at reasoning in natural language compared to code. Additionally, although natural language and code serve as complementary forms of reasoning, they can affect each other in a negative way in certain scenarios. These insights motivate our development of a new prompting method, MetaMath, which leverages an LLM to dynamically select the most appropriate reasoning form, resulting in improved performance over comparable baselines with GPT-4o-mini.<|reference_end|>
|
arxiv
|
@article{xiong2024inc-math:,
title={INC-Math: Integrating Natural Language and Code for Enhanced
Mathematical Reasoning in Large Language Models},
author={Xuyuan Xiong, Simeng Han, Ziyue Zhou, Arman Cohan},
journal={arXiv preprint arXiv:2409.19381},
year={2024},
archivePrefix={arXiv},
eprint={2409.19381},
primaryClass={cs.CL}
}
|
xiong2024inc-math:
|
arxiv-663117
|
2409.19382
|
Zero-Shot Multi-Hop Question Answering via Monte-Carlo Tree Search with Large Language Models
|
<|reference_start|>Zero-Shot Multi-Hop Question Answering via Monte-Carlo Tree Search with Large Language Models: Recent advances in large language models (LLMs) have significantly impacted the domain of multi-hop question answering (MHQA), where systems are required to aggregate information and infer answers from disparate pieces of text. However, the autoregressive nature of LLMs inherently poses a challenge as errors may accumulate if mistakes are made in the intermediate reasoning steps. This paper introduces Monte-Carlo tree search for Zero-shot multi-hop Question Answering (MZQA), a framework based on Monte-Carlo tree search (MCTS) to identify optimal reasoning paths in MHQA tasks, mitigating the error propagation from sequential reasoning processes. Unlike previous works, we propose a zero-shot prompting method, which relies solely on instructions without the support of hand-crafted few-shot examples that typically require domain expertise. We also introduce a behavioral cloning approach (MZQA-BC) trained on self-generated MCTS inference trajectories, achieving an over 10-fold increase in reasoning speed with bare compromise in performance. The efficacy of our method is validated on standard benchmarks such as HotpotQA, 2WikiMultihopQA, and MuSiQue, demonstrating that it outperforms existing frameworks.<|reference_end|>
|
arxiv
|
@article{lee2024zero-shot,
title={Zero-Shot Multi-Hop Question Answering via Monte-Carlo Tree Search with
Large Language Models},
author={Seongmin Lee and Jaewook Shin and Youngjin Ahn and Seokin Seo and
Ohjoon Kwon and Kee-Eung Kim},
journal={arXiv preprint arXiv:2409.19382},
year={2024},
archivePrefix={arXiv},
eprint={2409.19382},
primaryClass={cs.CL}
}
|
lee2024zero-shot
|
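MZQA above searches reasoning paths with MCTS, whose selection step is typically a UCB-style rule; a generic sketch (the node dictionary layout is an assumption):

```python
import math

def ucb_select(children, c=1.4):
    """UCB1 selection: balance a node's mean value (exploitation) against
    how rarely it has been visited (exploration)."""
    total_visits = sum(ch["visits"] for ch in children) + 1

    def score(ch):
        n = max(ch["visits"], 1)
        return ch["value"] / n + c * math.sqrt(math.log(total_visits) / n)

    return max(children, key=score)

children = [{"value": 3.0, "visits": 5}, {"value": 1.0, "visits": 1}]
print(ucb_select(children))  # the under-explored second node wins here
```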
arxiv-663118
|
2409.19389
|
Co-design of a novel CMOS highly parallel, low-power, multi-chip neural network accelerator
|
<|reference_start|>Co-design of a novel CMOS highly parallel, low-power, multi-chip neural network accelerator: Why do security cameras, sensors, and Siri use cloud servers instead of on-board computation? The lack of very-low-power, high-performance chips greatly limits the ability to field untethered edge devices. We present the NV-1, a new low-power ASIC AI processor that greatly accelerates parallel processing (> 10X) with dramatic reduction in energy consumption (> 100X), via many parallel combined processor-memory units, i.e., a drastically non-von-Neumann architecture, allowing very large numbers of independent processing streams without bottlenecks due to typical monolithic memory. The current initial prototype fab arises from a successful co-development effort between algorithm- and software-driven architectural design and VLSI design realities. An innovative communication protocol minimizes power usage, and data transport costs among nodes were vastly reduced by eliminating the address bus, through local target address matching. Throughout the development process, the software and architecture teams were able to innovate alongside the circuit design team's implementation effort. A digital twin of the proposed hardware was developed early on to ensure that the technical implementation met the architectural specifications, and indeed the predicted performance metrics have now been thoroughly verified in real hardware test data. The resulting device is currently being used in a fielded edge sensor application; additional proofs of principle are in progress, demonstrating this new real-world, extremely low-power, high-performance ASIC device on the ground.<|reference_end|>
|
arxiv
|
@article{hokenmaier2024co-design,
title={Co-design of a novel CMOS highly parallel, low-power, multi-chip neural
network accelerator},
author={W Hokenmaier, R Jurasek, E Bowen, R Granger, D Odom},
journal={IEEE Microelectronics Design \& Test Symposium (MDTS 2024)
https://ieeexplore.ieee.org/document/10570137},
year={2024},
doi={10.1109/MDTS61600.2024.10570137},
archivePrefix={arXiv},
eprint={2409.19389},
primaryClass={cs.DC cs.AI cs.AR}
}
|
hokenmaier2024co-design
|
arxiv-663119
|
2409.19390
|
Efficient Federated Intrusion Detection in 5G ecosystem using optimized BERT-based model
|
<|reference_start|>Efficient Federated Intrusion Detection in 5G ecosystem using optimized BERT-based model: The fifth-generation (5G) offers advanced services, supporting applications such as intelligent transportation, connected healthcare, and smart cities within the Internet of Things (IoT). However, these advancements introduce significant security challenges, with increasingly sophisticated cyber-attacks. This paper proposes a robust intrusion detection system (IDS) using federated learning and large language models (LLMs). The core of our IDS is based on BERT, a transformer model adapted to identify malicious network flows. We modified this transformer to optimize performance on edge devices with limited resources. Experiments were conducted in both centralized and federated learning contexts. In the centralized setup, the model achieved an inference accuracy of 97.79%. In a federated learning context, the model was trained across multiple devices using both IID (Independent and Identically Distributed) and non-IID data, based on various scenarios, ensuring data privacy and compliance with regulations. We also leveraged linear quantization to compress the model for deployment on edge devices. This reduction resulted in a slight decrease of 0.02% in accuracy for a model size reduction of 28.74%. The results underscore the viability of LLMs for deployment in IoT ecosystems, highlighting their ability to operate on devices with constrained computational and storage resources.<|reference_end|>
|
arxiv
|
@article{adjewa2024efficient,
title={Efficient Federated Intrusion Detection in 5G ecosystem using optimized
BERT-based model},
author={Frederic Adjewa, Moez Esseghir, Leila Merghem-Boulahia},
journal={arXiv preprint arXiv:2409.19390},
year={2024},
archivePrefix={arXiv},
eprint={2409.19390},
primaryClass={cs.CR cs.AI}
}
|
adjewa2024efficient
|
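The 28.74% size reduction above comes from linear quantization; a minimal sketch of the symmetric per-tensor variant (the 8-bit width is illustrative):

```python
import numpy as np

def linear_quantize(w, bits=8):
    """Symmetric per-tensor linear quantization: one float scale per
    tensor, integer codes in [-qmax, qmax]; dequantize as q * scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.abs(w).max()) / qmax, 1e-12)
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale = linear_quantize(w)
print(float(np.abs(w - q * scale).max()))  # error is at most scale / 2
```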
arxiv-663120
|
2409.19391
|
Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training
|
<|reference_start|>Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training: Deep Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous parameters in multi-agent scenarios, often incurring substantial computational overhead. Consequently, there is an urgent need to expedite training and enable model compression in MARL. This paper proposes the utilization of dynamic sparse training (DST), a technique proven effective in deep supervised learning tasks, to alleviate the computational burdens in MARL training. However, a direct adoption of DST fails to yield satisfactory MARL agents, leading to breakdowns in value learning within deep sparse value-based MARL models. Motivated by this challenge, we introduce an innovative Multi-Agent Sparse Training (MAST) framework aimed at simultaneously enhancing the reliability of learning targets and the rationality of sample distribution to improve value learning in sparse models. Specifically, MAST incorporates the Soft Mellowmax Operator with a hybrid TD-($\lambda$) schema to establish dependable learning targets. Additionally, it employs a dual replay buffer mechanism to enhance the distribution of training samples. Building upon these aspects, MAST utilizes gradient-based topology evolution to exclusively train multiple MARL agents using sparse networks. Our comprehensive experimental investigation across various value-based MARL algorithms on multiple benchmarks demonstrates, for the first time, significant reductions in redundancy of up to $20\times$ in Floating Point Operations (FLOPs) for both training and inference, with less than $3\%$ performance degradation.<|reference_end|>
|
arxiv
|
@article{hu2024value-based,
title={Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse
Training},
author={Pihe Hu, Shaolong Li, Zhuoran Li, Ling Pan, Longbo Huang},
journal={arXiv preprint arXiv:2409.19391},
year={2024},
archivePrefix={arXiv},
eprint={2409.19391},
primaryClass={cs.LG}
}
|
hu2024value-based
|
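MAST above trains agents with gradient-based topology evolution; a generic RigL-style prune-and-regrow step conveys the mechanism (this is the standard dynamic-sparse-training recipe, named plainly as a stand-in, not MAST's exact rule):

```python
import numpy as np

def evolve_mask(weights, grads, mask, update_frac=0.1):
    """One prune-and-regrow step: drop the smallest-magnitude active
    weights and activate the same number of currently masked connections
    with the largest gradient magnitude, keeping total sparsity fixed."""
    flat_mask = mask.ravel().copy()
    active = np.flatnonzero(flat_mask)
    inactive = np.flatnonzero(flat_mask == 0)
    k = min(max(1, int(update_frac * active.size)), active.size, inactive.size)

    w = np.abs(weights.ravel())
    g = np.abs(grads.ravel())
    drop = active[np.argsort(w[active])[:k]]       # weakest active weights
    grow = inactive[np.argsort(-g[inactive])[:k]]  # strongest masked gradients

    flat_mask[drop] = 0
    flat_mask[grow] = 1
    return flat_mask.reshape(mask.shape)

rng = np.random.default_rng(0)
W, G = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
mask = (rng.random((8, 8)) < 0.3).astype(int)
new_mask = evolve_mask(W, G, mask)
assert new_mask.sum() == mask.sum()  # sparsity level preserved
```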
arxiv-663121
|
2409.19396
|
Canonical Correlation Guided Deep Neural Network
|
<|reference_start|>Canonical Correlation Guided Deep Neural Network: Learning representations of two views of data such that the resulting representations are highly linearly correlated is appealing in machine learning. In this paper, we present a canonical correlation guided learning framework, which can be realized by deep neural networks (CCDNN), to learn such a correlated representation. It is also a novel merging of multivariate analysis (MVA) and machine learning, which can be viewed as transforming MVA into end-to-end architectures with the aid of neural networks. Unlike linear canonical correlation analysis (CCA), kernel CCA and deep CCA, in the proposed method the optimization formulation is not restricted to maximizing correlation; instead, we impose canonical correlation as a constraint, which preserves the correlated representation learning ability and focuses more on the engineering tasks endowed by the optimization formulation, such as reconstruction, classification and prediction. Furthermore, to reduce the redundancy induced by correlation, a redundancy filter is designed. We illustrate the performance of CCDNN on various tasks. In experiments on the MNIST dataset, the results show that CCDNN has better reconstruction performance in terms of mean squared error and mean absolute error than DCCA and DCCAE. We also present the application of the proposed network to industrial fault diagnosis and remaining useful life cases for classification and prediction tasks, respectively. The proposed method demonstrates superior performance in both tasks when compared to existing methods. An extension of CCDNN to much deeper architectures with the aid of residual connections is also presented in the appendix.<|reference_end|>
|
arxiv
|
@article{chen2024canonical,
title={Canonical Correlation Guided Deep Neural Network},
author={Zhiwen Chen, Siwen Mo, Haobin Ke, Steven X. Ding, Zhaohui Jiang,
Chunhua Yang, Weihua Gui},
journal={arXiv preprint arXiv:2409.19396},
year={2024},
archivePrefix={arXiv},
eprint={2409.19396},
primaryClass={cs.LG cs.CV cs.SY eess.SY}
}
|
chen2024canonical
|
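The design choice above, correlation as a constraint rather than the objective, can be sketched with mean per-dimension Pearson correlation standing in for full canonical correlation (the hinge penalty form and `target` are illustrative assumptions):

```python
import numpy as np

def correlation(Z1, Z2, eps=1e-8):
    """Mean per-dimension Pearson correlation between two view encodings."""
    Z1 = Z1 - Z1.mean(axis=0)
    Z2 = Z2 - Z2.mean(axis=0)
    num = (Z1 * Z2).mean(axis=0)
    den = Z1.std(axis=0) * Z2.std(axis=0) + eps
    return float(np.mean(num / den))

def ccdnn_objective(task_loss, Z1, Z2, lam=1.0, target=0.9):
    """Task loss plus a hinge penalty that activates only when the views
    fall below a target correlation -- correlation as constraint, not goal."""
    return task_loss + lam * max(0.0, target - correlation(Z1, Z2))

rng = np.random.default_rng(0)
Z1 = rng.normal(size=(64, 8))
Z2 = Z1 + 0.1 * rng.normal(size=(64, 8))             # strongly correlated views
print(ccdnn_objective(task_loss=0.5, Z1=Z1, Z2=Z2))  # ~0.5: constraint inactive
```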
arxiv-663122
|
2409.19398
|
Ning Cai: A Tribute to a Pioneering Scholar in Information Theory
|
<|reference_start|>Ning Cai: A Tribute to a Pioneering Scholar in Information Theory: It is with heavy hearts that we mourn the passing of Ning Cai, a luminary whose pioneering spirit illuminated the realms of network coding and beyond. On May 25, 2023, at the age of 75, Prof. Cai bid farewell, leaving behind a profound legacy that continues to resonate across generations of researchers. His contributions spanned a vast spectrum, from the groundbreaking explorations in network coding to the intricate realms of quantum information theory. Ning's indelible mark on the academic landscape is a testament to his unwavering dedication and relentless pursuit of knowledge. Among his many accolades, Ning's seminal works garnered widespread recognition, exemplified by the prestigious 2005 IEEE Information Theory Society Paper Award for his work "Linear Network Coding." Furthermore, his enduring impact was underscored by the 2018 ACM SIGMOBILE Test-of-Time Paper Award, bestowed upon his paper "Network Information Flow." In addition to his scholarly achievements, Ning's unwavering commitment to mentorship has left an indelible mark on countless aspiring scholars. His guidance and wisdom continue to inspire and guide future generations in their scholarly pursuits. As we bid farewell to a titan in the field, let us cherish the legacy of Ning Cai, whose brilliance and generosity of spirit will forever endure in the annals of academia.<|reference_end|>
|
arxiv
|
@article{althöfer2024ning,
title={Ning Cai: A Tribute to a Pioneering Scholar in Information Theory},
author={Ingo Alth\"ofer, Holger Boche, Christian Deppe, Ulrich Tamm, Andreas
Winter, Raymond W. Yeung},
journal={arXiv preprint arXiv:2409.19398},
year={2024},
archivePrefix={arXiv},
eprint={2409.19398},
primaryClass={cs.IT math.IT}
}
|
althöfer2024ning
|
arxiv-663123
|
2409.19401
|
Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs
|
<|reference_start|>Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs: In the age of mobile internet, user data, often referred to as memories, is continuously generated on personal devices. Effectively managing and utilizing this data to deliver services to users is a compelling research topic. In this paper, we introduce a novel task of crafting personalized agents powered by large language models (LLMs), which utilize a user's smartphone memories to enhance downstream applications with advanced LLM capabilities. To achieve this goal, we introduce EMG-RAG, a solution that combines Retrieval-Augmented Generation (RAG) techniques with an Editable Memory Graph (EMG). This approach is further optimized using Reinforcement Learning to address three distinct challenges: data collection, editability, and selectability. Extensive experiments on a real-world dataset validate the effectiveness of EMG-RAG, achieving an improvement of approximately 10% over the best existing approach. Additionally, the personalized agents have been transferred into a real smartphone AI assistant, which leads to enhanced usability.<|reference_end|>
|
arxiv
|
@article{wang2024crafting,
title={Crafting Personalized Agents through Retrieval-Augmented Generation on
Editable Memory Graphs},
author={Zheng Wang, Zhongyang Li, Zeren Jiang, Dandan Tu, Wei Shi},
journal={arXiv preprint arXiv:2409.19401},
year={2024},
archivePrefix={arXiv},
eprint={2409.19401},
primaryClass={cs.CL cs.IR}
}
|
wang2024crafting
|
arxiv-663124
|
2409.19402
|
Projected Tensor-Tensor Products for Efficient Computation of Optimal Multiway Data Representations
|
<|reference_start|>Projected Tensor-Tensor Products for Efficient Computation of Optimal Multiway Data Representations: Tensor decompositions have become essential tools for feature extraction and compression of multiway data. Recent advances in tensor operators have enabled desirable properties of standard matrix algebra to be retained for multilinear factorizations. Behind this matrix-mimetic tensor operation is an invertible matrix whose size depends quadratically on certain dimensions of the data. As a result, for large-scale multiway data, the invertible matrix can be computationally demanding to apply and invert and can lead to inefficient tensor representations in terms of construction and storage costs. In this work, we propose a new projected tensor-tensor product that relaxes the invertibility restriction to reduce computational overhead and still preserves fundamental linear algebraic properties. The transformation behind the projected product is a tall-and-skinny matrix with unitary columns, which depends only linearly on certain dimensions of the data, thereby reducing computational complexity by an order of magnitude. We provide extensive theory to prove the matrix mimeticity and the optimality of compressed representations within the projected product framework. We further prove that projected-product-based approximations outperform a comparable, non-matrix-mimetic tensor factorization. We support the theoretical findings and demonstrate the practical benefits of projected products through numerical experiments on video and hyperspectral imaging data.<|reference_end|>
|
arxiv
|
@article{keegan2024projected,
title={Projected Tensor-Tensor Products for Efficient Computation of Optimal
Multiway Data Representations},
author={Katherine Keegan, Elizabeth Newman},
journal={arXiv preprint arXiv:2409.19402},
year={2024},
archivePrefix={arXiv},
eprint={2409.19402},
primaryClass={math.NA cs.CV cs.NA}
}
|
keegan2024projected
|
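A sketch of the projected tensor-tensor product above: transform along the third mode with a tall-and-skinny Q whose columns are orthonormal, multiply frontal slices facewise in the reduced transform domain, then map back with Q. Shapes and the random Q are illustrative:

```python
import numpy as np

def mode3_mult(A, M):
    """Apply the matrix M along the third mode of the tensor A."""
    return np.einsum('ijk,lk->ijl', A, M)

def projected_tprod(A, B, Q):
    """Move both tensors into the k-slice transform domain with Q^T,
    multiply the frontal slices facewise, and map back with Q."""
    Ah = mode3_mult(A, Q.T)                 # n1 x p x k
    Bh = mode3_mult(B, Q.T)                 # p  x n2 x k
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)  # facewise slice products
    return mode3_mult(Ch, Q)                # n1 x n2 x n3

rng = np.random.default_rng(0)
n1, p, n2, n3, k = 4, 3, 5, 8, 2
Q, _ = np.linalg.qr(rng.normal(size=(n3, k)))  # orthonormal columns, n3 x k
A, B = rng.normal(size=(n1, p, n3)), rng.normal(size=(p, n2, n3))
print(projected_tprod(A, B, Q).shape)  # (4, 5, 8)
```

Since Q depends only linearly on the third dimension, the k facewise multiplications replace the n3 required by an invertible transform, which is the source of the claimed savings.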
arxiv-663125
|
2409.19403
|
Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration
|
<|reference_start|>Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration: All-in-one image restoration aims to handle multiple degradation types using one model. This paper proposes a simple pipeline for all-in-one blind image restoration to Restore Anything with Masks (RAM). We focus on the image content by utilizing Mask Image Modeling to extract intrinsic image information rather than distinguishing degradation types like other methods. Our pipeline consists of two stages: masked image pre-training and fine-tuning with mask attribute conductance. We design a straightforward masking pre-training approach specifically tailored for all-in-one image restoration. This approach enhances networks to prioritize the extraction of image content priors from various degradations, resulting in a more balanced performance across different restoration tasks and achieving stronger overall results. To bridge the gap of input integrity while preserving learned image priors as much as possible, we selectively fine-tuned a small portion of the layers. Specifically, the importance of each layer is ranked by the proposed Mask Attribute Conductance (MAC), and the layers with higher contributions are selected for finetuning. Extensive experiments demonstrate that our method achieves state-of-the-art performance. Our code and model will be released at \href{https://github.com/Dragonisss/RAM}{https://github.com/Dragonisss/RAM}.<|reference_end|>
|
arxiv
|
@article{qin2024restore,
title={Restore Anything with Masks: Leveraging Mask Image Modeling for Blind
All-in-One Image Restoration},
author={Chu-Jie Qin, Rui-Qi Wu, Zikun Liu, Xin Lin, Chun-Le Guo, Hyun Hee
Park, and Chongyi Li},
journal={arXiv preprint arXiv:2409.19403},
year={2024},
archivePrefix={arXiv},
eprint={2409.19403},
primaryClass={cs.CV}
}
|
qin2024restore
|
arxiv-663126
|
2409.19405
|
G3R: Gradient Guided Generalizable Reconstruction
|
<|reference_start|>G3R: Gradient Guided Generalizable Reconstruction: Large scale 3D scene reconstruction is important for applications such as virtual reality and simulation. Existing neural rendering approaches (e.g., NeRF, 3DGS) have achieved realistic reconstructions on large scenes, but optimize per scene, which is expensive and slow, and exhibit noticeable artifacts under large view changes due to overfitting. Generalizable approaches or large reconstruction models are fast, but primarily work for small scenes/objects and often produce lower quality rendering results. In this work, we introduce G3R, a generalizable reconstruction approach that can efficiently predict high-quality 3D scene representations for large scenes. We propose to learn a reconstruction network that takes the gradient feedback signals from differentiable rendering to iteratively update a 3D scene representation, combining the benefits of high photorealism from per-scene optimization with data-driven priors from fast feed-forward prediction methods. Experiments on urban-driving and drone datasets show that G3R generalizes across diverse large scenes and accelerates the reconstruction process by at least 10x while achieving comparable or better realism compared to 3DGS, and also being more robust to large view changes.<|reference_end|>
|
arxiv
|
@article{chen2024g3r:,
title={G3R: Gradient Guided Generalizable Reconstruction},
author={Yun Chen, Jingkang Wang, Ze Yang, Sivabalan Manivasagam, Raquel
Urtasun},
journal={arXiv preprint arXiv:2409.19405},
year={2024},
archivePrefix={arXiv},
eprint={2409.19405},
primaryClass={cs.CV cs.RO}
}
|
chen2024g3r:
|
arxiv-663127
|
2409.19407
|
Brain-JEPA: Brain Dynamics Foundation Model with Gradient Positioning and Spatiotemporal Masking
|
<|reference_start|>Brain-JEPA: Brain Dynamics Foundation Model with Gradient Positioning and Spatiotemporal Masking: We introduce Brain-JEPA, a brain dynamics foundation model with the Joint-Embedding Predictive Architecture (JEPA). This pioneering model achieves state-of-the-art performance in demographic prediction, disease diagnosis/prognosis, and trait prediction through fine-tuning. Furthermore, it excels in off-the-shelf evaluations (e.g., linear probing) and demonstrates superior generalizability across different ethnic groups, surpassing the previous large model for brain activity significantly. Brain-JEPA incorporates two innovative techniques: Brain Gradient Positioning and Spatiotemporal Masking. Brain Gradient Positioning introduces a functional coordinate system for brain functional parcellation, enhancing the positional encoding of different Regions of Interest (ROIs). Spatiotemporal Masking, tailored to the unique characteristics of fMRI data, addresses the challenge of heterogeneous time-series patches. These methodologies enhance model performance and advance our understanding of the neural circuits underlying cognition. Overall, Brain-JEPA is paving the way to address pivotal questions of building brain functional coordinate system and masking brain activity at the AI-neuroscience interface, and setting a potentially new paradigm in brain activity analysis through downstream adaptation.<|reference_end|>
|
arxiv
|
@article{dong2024brain-jepa:,
title={Brain-JEPA: Brain Dynamics Foundation Model with Gradient Positioning
and Spatiotemporal Masking},
author={Zijian Dong, Ruilin Li, Yilei Wu, Thuan Tinh Nguyen, Joanna Su Xian
Chong, Fang Ji, Nathanael Ren Jie Tong, Christopher Li Hsian Chen, Juan Helen
Zhou},
journal={arXiv preprint arXiv:2409.19407},
year={2024},
archivePrefix={arXiv},
eprint={2409.19407},
primaryClass={q-bio.NC cs.AI cs.CV}
}
|
dong2024brain-jepa:
|
arxiv-663128
|
2409.19409
|
Co-investment with Payoff Sharing Benefit Operators and Users in Network Design
|
<|reference_start|>Co-investment with Payoff Sharing Benefit Operators and Users in Network Design: Network-based complex systems are inherently interconnected, with the design and performance of subnetworks being interdependent. However, the decisions of self-interested operators may lead to suboptimal outcomes for users. In this paper, we consider the question of what cooperative mechanisms can benefit both operators and users simultaneously. We address this question in a game theoretical setting, integrating both non-cooperative and cooperative game theory. During the non-cooperative stage, subnetwork decision-makers strategically design their local networks. In the cooperative stage, the co-investment mechanism and the payoff-sharing mechanism are developed to enlarge collective benefits and fairly distribute them. A case study of the Sioux Falls network is conducted to demonstrate the efficiency of the proposed framework. The impact of this interactive network design on environmental sustainability, social welfare and economic efficiency is evaluated, along with an examination of scenarios involving regions with heterogeneous characteristics.<|reference_end|>
|
arxiv
|
@article{he2024co-investment,
title={Co-investment with Payoff Sharing Benefit Operators and Users in Network
Design},
author={Mingjia He, Andrea Censi, Emilio Frazzoli, and Gioele Zardini},
journal={arXiv preprint arXiv:2409.19409},
year={2024},
archivePrefix={arXiv},
eprint={2409.19409},
primaryClass={eess.SY cs.SY}
}
|
he2024co-investment
|
arxiv-663129
|
2409.19410
|
Exact Algorithms for Clustered Planarity with Linear Saturators
|
<|reference_start|>Exact Algorithms for Clustered Planarity with Linear Saturators: We study Clustered Planarity with Linear Saturators, which is the problem of augmenting an $n$-vertex planar graph whose vertices are partitioned into independent sets (called clusters) with paths - one for each cluster - that connect all the vertices in each cluster while maintaining planarity. We show that the problem can be solved in time $2^{O(n)}$ for both the variable and fixed embedding case. Moreover, we show that it can be solved in subexponential time $2^{O(\sqrt{n}\log n)}$ in the fixed embedding case if additionally the input graph is connected. The latter time complexity is tight under the Exponential-Time Hypothesis. We also show that $n$ can be replaced with the vertex cover number of the input graph by providing a linear (resp. polynomial) kernel for the variable-embedding (resp. fixed-embedding) case; these results contrast the NP-hardness of the problem on graphs of bounded treewidth (and even on trees). Finally, we complement known lower bounds for the problem by showing that Clustered Planarity with Linear Saturators is NP-hard even when the number of clusters is at most $3$, thus excluding the algorithmic use of the number of clusters as a parameter.<|reference_end|>
|
arxiv
|
@article{da lozzo2024exact,
title={Exact Algorithms for Clustered Planarity with Linear Saturators},
author={Giordano Da Lozzo, Robert Ganian, Siddharth Gupta, Bojan Mohar,
Sebastian Ordyniak, Meirav Zehavi},
journal={arXiv preprint arXiv:2409.19410},
year={2024},
archivePrefix={arXiv},
eprint={2409.19410},
primaryClass={cs.DS cs.CG}
}
|
da lozzo2024exact
|
arxiv-663130
|
2409.19413
|
Membership Privacy Evaluation in Deep Spiking Neural Networks
|
<|reference_start|>Membership Privacy Evaluation in Deep Spiking Neural Networks: Artificial Neural Networks (ANNs), commonly mimicking neurons with non-linear functions to output floating-point numbers, consistently receive the same signals of a data point during its forward time. Unlike ANNs, Spiking Neural Networks (SNNs) get various input signals in the forward time of a data point and simulate neurons in a biologically plausible way, i.e., producing a spike (a binary value) if the accumulated membrane potential of a neuron is larger than a threshold. Even though ANNs have achieved remarkable success in multiple tasks, e.g., face recognition and object detection, SNNs have recently obtained attention due to their low power consumption, fast inference, and event-driven properties. While privacy threats against ANNs are widely explored, much less work has been done on SNNs. For instance, it is well-known that ANNs are vulnerable to the Membership Inference Attack (MIA), but whether the same applies to SNNs is not explored. In this paper, we evaluate the membership privacy of SNNs by considering eight MIAs, seven of which are inspired by MIAs against ANNs. Our evaluation results show that SNNs are more vulnerable (maximum 10% higher in terms of balanced attack accuracy) than ANNs when both are trained with neuromorphic datasets (with time dimension). On the other hand, when training ANNs or SNNs with static datasets (without time dimension), the vulnerability depends on the dataset used. If we convert ANNs trained with static datasets to SNNs, the accuracy of MIAs drops (maximum 11.5% with a reduction of 7.6% on the test accuracy of the target model). Next, we explore the impact factors of MIAs on SNNs by conducting a hyperparameter study. Finally, we show that the basic data augmentation method for static data and two recent data augmentation methods for neuromorphic data can considerably (maximum reduction of 25.7%) decrease MIAs' performance on SNNs.<|reference_end|>
|
arxiv
|
@article{li2024membership,
title={Membership Privacy Evaluation in Deep Spiking Neural Networks},
author={Jiaxin Li, Gorka Abad, Stjepan Picek, and Mauro Conti},
journal={arXiv preprint arXiv:2409.19413},
year={2024},
archivePrefix={arXiv},
eprint={2409.19413},
primaryClass={cs.CR cs.AI}
}
|
li2024membership
|
arxiv-663131
|
2409.19414
|
Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks
|
<|reference_start|>Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks: Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities. While the theory of such models is well understood, their aggregation module has not received sufficient attention. Sum-based aggregators have solid theoretical foundations regarding their separation capabilities. However, practitioners often prefer using more complex aggregations and mixtures of diverse aggregations. In this work, we unveil a possible explanation for this gap. We claim that sum-based aggregators fail to "mix" features belonging to distinct neighbors, preventing them from succeeding at downstream tasks. To this end, we introduce Sequential Signal Mixing Aggregation (SSMA), a novel plug-and-play aggregation for MPGNNs. SSMA treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors. By performing extensive experiments, we show that when combining SSMA with well-established MPGNN architectures, we achieve substantial performance gains across various benchmarks, achieving new state-of-the-art results in many settings. We published our code at \url{https://almogdavid.github.io/SSMA/}<|reference_end|>
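As a rough illustration of the "mixing" idea above, the following Python sketch aggregates neighbor features by sequential convolution. It is a simplified 1D variant written for this note, not the paper's exact 2D construction: convolution acts as polynomial multiplication, producing cross terms between distinct neighbors that a plain sum cannot.

import numpy as np

def ssma_like_aggregate(neighbor_feats):
    # neighbor_feats: (num_neighbors, d) array of neighbor feature vectors.
    # Sequential convolution yields a vector of length num_neighbors*(d-1)+1
    # whose entries contain multiplicative cross terms between neighbors.
    agg = neighbor_feats[0]
    for feat in neighbor_feats[1:]:
        agg = np.convolve(agg, feat)   # polynomial product mixes features
    return agg

feats = np.random.rand(3, 4)              # 3 neighbors, 4-dim features
print(ssma_like_aggregate(feats).shape)   # (10,) = 3*(4-1)+1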
|
arxiv
|
@article{taraday2024sequential,
title={Sequential Signal Mixing Aggregation for Message Passing Graph Neural
Networks},
author={Mitchell Keren Taraday, Almog David, Chaim Baskin},
journal={arXiv preprint arXiv:2409.19414},
year={2024},
archivePrefix={arXiv},
eprint={2409.19414},
primaryClass={cs.LG eess.SP}
}
|
taraday2024sequential
|
arxiv-663132
|
2409.19415
|
Bridging the Gap in Hybrid Decision-Making Systems
|
<|reference_start|>Bridging the Gap in Hybrid Decision-Making Systems: We introduce BRIDGET, a novel human-in-the-loop system for hybrid decision-making that helps the user label records from an unlabeled dataset, attempting to ``bridge the gap'' between the two most popular hybrid decision-making paradigms: one featuring the human in a leading position, the other with a machine making most of the decisions. BRIDGET recognizes when a machine or a human user should be in charge, dynamically switching between the two statuses. In either status, BRIDGET still fosters human-AI interaction: the machine learning model either assumes a skeptical stance towards the user and offers them suggestions, or turns that skepticism on itself and calls the user back. We believe our proposal lays the groundwork for future synergistic systems involving human and machine decision-makers.<|reference_end|>
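A hedged Python sketch of the switching loop described above: a confidence threshold (hypothetical here) decides whether the machine or the human leads on each record; `model` and `ask_human` are illustrative stand-ins, not BRIDGET's actual interface.

def hybrid_label(record, model, ask_human, tau=0.8):
    # Machine-leading status: the model is confident enough to suggest a
    # label, staying "skeptical" of the user by offering its own suggestion.
    probs = model.predict_proba([record])[0]
    label, conf = int(probs.argmax()), float(probs.max())
    if conf >= tau:
        answer = ask_human(record, suggestion=label)
        return answer if answer is not None else label
    # Human-leading status: the model is skeptical of itself and calls the
    # user back for an unassisted decision.
    return ask_human(record, suggestion=None)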
|
arxiv
|
@article{mazzoni2024bridging,
title={Bridging the Gap in Hybrid Decision-Making Systems},
author={Federico Mazzoni, Roberto Pellungrini, Riccardo Guidotti},
journal={arXiv preprint arXiv:2409.19415},
year={2024},
archivePrefix={arXiv},
eprint={2409.19415},
primaryClass={cs.AI cs.HC}
}
|
mazzoni2024bridging
|
arxiv-663133
|
2409.19416
|
Machine Learning Operations: A Mapping Study
|
<|reference_start|>Machine Learning Operations: A Mapping Study: Machine learning and AI have been recently embraced by many companies. Machine Learning Operations (MLOps) refers to the use of continuous software engineering processes, such as DevOps, in the deployment of machine learning models to production. Nevertheless, not all machine learning initiatives successfully transition to the production stage owing to the multitude of intricate factors involved. This article discusses the issues that exist in several components of the MLOps pipeline, namely the data manipulation pipeline, model building pipeline, and deployment pipeline. A systematic mapping study is performed to identify the challenges that arise in the MLOps system, categorized by different focus areas. Using this data, realistic and applicable recommendations are offered for tools or solutions that can be used for their implementation. The main value of this work is that it maps distinctive challenges in MLOps along with the recommended solutions outlined in our study. These guidelines are not specific to any particular tool and are applicable to both research and industrial settings.<|reference_end|>
|
arxiv
|
@article{chakraborty2024machine,
title={Machine Learning Operations: A Mapping Study},
author={Abhijit Chakraborty, Suddhasvatta Das, Kevin Gary},
journal={arXiv preprint arXiv:2409.19416},
year={2024},
archivePrefix={arXiv},
eprint={2409.19416},
primaryClass={cs.SE cs.LG}
}
|
chakraborty2024machine
|
arxiv-663134
|
2409.19417
|
Subject Data Auditing via Source Inference Attack in Cross-Silo Federated Learning
|
<|reference_start|>Subject Data Auditing via Source Inference Attack in Cross-Silo Federated Learning: Source Inference Attack (SIA) in Federated Learning (FL) aims to identify which client used a target data point for local model training. It allows the central server to audit clients' data usage. In cross-silo FL, a client (silo) collects data from multiple subjects (e.g., individuals, writers, or devices), posing a risk of subject information leakage. Subject Membership Inference Attack (SMIA) targets this scenario and attempts to infer whether any client utilizes data points from a target subject in cross-silo FL. However, existing results on SMIA are limited and based on strong assumptions about the attack scenario. Therefore, we propose a Subject-Level Source Inference Attack (SLSIA), which removes SIA's critical constraint that only one client may use a target data point as well as SMIA's imprecise detection of clients utilizing target subject data. The attacker, positioned on the server side, controls a target data source and aims to detect all clients using data points from the target subject. Our strategy leverages a binary attack classifier to predict whether the embeddings returned by a local model on test data from the target subject include unique patterns that indicate a client trains the model with data from that subject. To achieve this, the attacker locally pre-trains models using data derived from the target subject and then leverages them to build a training set for the binary attack classifier. Our SLSIA significantly outperforms previous methods on three datasets. Specifically, SLSIA achieves a maximum average accuracy of 0.88 over 50 target subjects. Analyzing embedding distribution and input feature distance shows that datasets with sparse subjects are more susceptible to our attack. Finally, we propose to defend against our SLSIA using item-level and subject-level differential privacy mechanisms.<|reference_end|>
|
arxiv
|
@article{li2024subject,
title={Subject Data Auditing via Source Inference Attack in Cross-Silo
Federated Learning},
author={Jiaxin Li, Marco Arazzi, Antonino Nocera, Mauro Conti},
journal={arXiv preprint arXiv:2409.19417},
year={2024},
archivePrefix={arXiv},
eprint={2409.19417},
primaryClass={cs.CR cs.AI}
}
|
li2024subject
|
arxiv-663135
|
2409.19420
|
Multi-sensor Learning Enables Information Transfer across Different Sensory Data and Augments Multi-modality Imaging
|
<|reference_start|>Multi-sensor Learning Enables Information Transfer across Different Sensory Data and Augments Multi-modality Imaging: Multi-modality imaging is widely used in clinical practice and biomedical research to gain a comprehensive understanding of an imaging subject. Currently, multi-modality imaging is accomplished by post hoc fusion of independently reconstructed images under the guidance of mutual information or spatially registered hardware, which limits the accuracy and utility of multi-modality imaging. Here, we investigate a data-driven multi-modality imaging (DMI) strategy for synergetic imaging of CT and MRI. We reveal two distinct types of features in multi-modality imaging, namely intra- and inter-modality features, and present a multi-sensor learning (MSL) framework to utilize the crossover inter-modality features for augmented multi-modality imaging. The MSL imaging approach breaks down the boundaries of traditional imaging modalities and allows for optimal hybridization of CT and MRI, which maximizes the use of sensory data. We showcase the effectiveness of our DMI strategy through synergetic CT-MRI brain imaging. The principle of DMI is quite general and holds enormous potential for various DMI applications across disciplines.<|reference_end|>
|
arxiv
|
@article{zhu2024multi-sensor,
title={Multi-sensor Learning Enables Information Transfer across Different
Sensory Data and Augments Multi-modality Imaging},
author={Lingting Zhu, Yizheng Chen, Lianli Liu, Lei Xing, Lequan Yu},
journal={arXiv preprint arXiv:2409.19420},
year={2024},
archivePrefix={arXiv},
eprint={2409.19420},
primaryClass={eess.IV cs.CV}
}
|
zhu2024multi-sensor
|
arxiv-663136
|
2409.19422
|
Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures
|
<|reference_start|>Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures: A core task in multi-modal learning is to integrate information from multiple feature spaces (e.g., text and audio), offering modality-invariant essential representations of data. Recent research showed that, classical tools such as {\it canonical correlation analysis} (CCA) provably identify the shared components up to minor ambiguities, when samples in each modality are generated from a linear mixture of shared and private components. Such identifiability results were obtained under the condition that the cross-modality samples are aligned/paired according to their shared information. This work takes a step further, investigating shared component identifiability from multi-modal linear mixtures where cross-modality samples are unaligned. A distribution divergence minimization-based loss is proposed, under which a suite of sufficient conditions ensuring identifiability of the shared components are derived. Our conditions are based on cross-modality distribution discrepancy characterization and density-preserving transform removal, which are much milder than existing studies relying on independent component analysis. More relaxed conditions are also provided via adding reasonable structural constraints, motivated by available side information in various applications. The identifiability claims are thoroughly validated using synthetic and real-world data.<|reference_end|>
|
arxiv
|
@article{timilsina2024identifiable,
title={Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures},
author={Subash Timilsina, Sagar Shrestha, Xiao Fu},
journal={arXiv preprint arXiv:2409.19422},
year={2024},
archivePrefix={arXiv},
eprint={2409.19422},
primaryClass={cs.LG cs.AI stat.ML}
}
|
timilsina2024identifiable
|
arxiv-663137
|
2409.19425
|
From Unimodal to Multimodal: Scaling up Projectors to Align Modalities
|
<|reference_start|>From Unimodal to Multimodal: Scaling up Projectors to Align Modalities: Recent contrastive multimodal vision-language models like CLIP have demonstrated robust open-world semantic understanding, becoming the standard image backbones for vision-language applications due to their aligned latent space. However, this practice has left powerful unimodal encoders for both vision and language underutilized in multimodal applications, which raises a key question: Is there a plausible way to connect unimodal backbones for zero-shot vision-language tasks? To this end, we propose a novel approach that aligns vision and language modalities using only projection layers on pretrained, frozen unimodal encoders. Our method exploits the high semantic similarity between embedding spaces of well-trained vision and language models. It involves selecting semantically similar encoders in the latent space, curating a concept-rich dataset of image-caption pairs, and training simple MLP projectors. We evaluated our approach on 12 zero-shot classification datasets and 2 image-text retrieval datasets. Our best model, utilizing DINOv2 and the All-Roberta-Large text encoder, achieves 76% accuracy on ImageNet with a 20-fold reduction in data and a 65-fold reduction in compute requirements. The proposed framework enhances the accessibility of model development while enabling flexible adaptation across diverse scenarios, offering an efficient approach to building multimodal models by utilizing existing unimodal architectures. Code and datasets will be released soon.<|reference_end|>
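A minimal PyTorch sketch of the recipe above: small MLP projectors trained on top of frozen unimodal embeddings with a CLIP-style InfoNCE loss. The dimensions, batch size, and temperature are illustrative assumptions, not the paper's settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

def projector(d_in, d_out=512):
    return nn.Sequential(nn.Linear(d_in, 1024), nn.GELU(), nn.Linear(1024, d_out))

img_proj, txt_proj = projector(768), projector(1024)
opt = torch.optim.AdamW(list(img_proj.parameters()) + list(txt_proj.parameters()), lr=1e-4)

img_emb = torch.randn(32, 768)     # frozen vision-encoder outputs (stand-ins)
txt_emb = torch.randn(32, 1024)    # frozen text-encoder outputs (stand-ins)

z_i = F.normalize(img_proj(img_emb), dim=-1)
z_t = F.normalize(txt_proj(txt_emb), dim=-1)
logits = z_i @ z_t.t() / 0.07                    # temperature 0.07 assumed
labels = torch.arange(32)
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
loss.backward()
opt.step()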
|
arxiv
|
@article{maniparambil2024from,
title={From Unimodal to Multimodal: Scaling up Projectors to Align Modalities},
author={Mayug Maniparambil, Raiymbek Akshulakov, Yasser Abdelaziz Dahou
Djilali, Sanath Narayan, Ankit Singh, Noel E. O'Connor},
journal={arXiv preprint arXiv:2409.19425},
year={2024},
archivePrefix={arXiv},
eprint={2409.19425},
primaryClass={cs.CV}
}
|
maniparambil2024from
|
arxiv-663138
|
2409.19428
|
A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized Optimization
|
<|reference_start|>A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized Optimization: We develop R2N, a modified quasi-Newton method for minimizing the sum of a $\mathcal{C}^1$ function $f$ and a lower semi-continuous prox-bounded $h$. Both $f$ and $h$ may be nonconvex. At each iteration, our method computes a step by minimizing the sum of a quadratic model of $f$, a model of $h$, and an adaptive quadratic regularization term. A step may be computed by a variant of the proximal-gradient method. An advantage of R2N over trust-region (TR) methods is that proximal operators do not involve an extra TR indicator. We also develop the variant R2DH, in which the model Hessian is diagonal, which allows us to compute a step without relying on a subproblem solver when $h$ is separable. R2DH can be used as standalone solver, but also as subproblem solver inside R2N. We describe non-monotone variants of both R2N and R2DH. Global convergence of a first-order stationarity measure to zero holds without relying on local Lipschitz continuity of $\nabla f$, while allowing model Hessians to grow unbounded, an assumption particularly relevant to quasi-Newton models. Under Lipschitz-continuity of $\nabla f$, we establish a tight worst-case complexity bound of $O(1 / \epsilon^{2/(1 - p)})$ to bring said measure below $\epsilon > 0$, where $0 \leq p < 1$ controls the growth of model Hessians. The latter must not diverge faster than $|\mathcal{S}_k|^p$, where $\mathcal{S}_k$ is the set of successful iterations up to iteration $k$. When $p = 1$, we establish the tight exponential complexity bound $O(\exp(c \epsilon^{-2}))$ where $c > 0$ is a constant. We describe our Julia implementation and report numerical experience on a basis-pursuit problem, image denoising, minimum-rank matrix completion, and a nonlinear support vector machine. In particular, the minimum-rank problem cannot be solved directly at this time by a TR approach as corresponding proximal operators are not known analytically.<|reference_end|>
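For the separable case the abstract mentions, one R2DH-style step has a closed form. A small numpy sketch for h(x) = lam*||x||_1, where the diagonal model Hessian D plus the regularization sigma turns the step into coordinate-wise soft-thresholding; the numbers are illustrative.

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def r2dh_step(x, grad, diag_hess, sigma, lam):
    # Minimizes <grad, s> + 0.5 * s^T (D + sigma I) s + lam * ||x + s||_1
    # coordinate-wise; the solution is a scaled proximal (soft-threshold) step.
    scale = diag_hess + sigma
    return soft_threshold(x - grad / scale, lam / scale)

x = np.array([1.0, -0.2, 0.05])
g = np.array([0.5, -0.1, 0.2])     # gradient of the smooth part f at x
D = np.array([2.0, 1.0, 0.5])      # diagonal quasi-Newton Hessian model
print(r2dh_step(x, g, D, sigma=0.1, lam=0.1))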
|
arxiv
|
@article{diouane2024a,
title={A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized
Optimization},
author={Youssef Diouane and Mohamed Laghdaf Habiboullah and Dominique Orban},
journal={arXiv preprint arXiv:2409.19428},
year={2024},
doi={10.13140/RG.2.2.21140.51840},
number={Cahier du GERAD G-2024-64},
archivePrefix={arXiv},
eprint={2409.19428},
primaryClass={math.OC cs.LG}
}
|
diouane2024a
|
arxiv-663139
|
2409.19429
|
Fast Encoding and Decoding for Implicit Video Representation
|
<|reference_start|>Fast Encoding and Decoding for Implicit Video Representation: Despite the abundant availability and content richness for video data, its high-dimensionality poses challenges for video research. Recent advancements have explored the implicit representation for videos using neural networks, demonstrating strong performance in applications such as video compression and enhancement. However, the prolonged encoding time remains a persistent challenge for video Implicit Neural Representations (INRs). In this paper, we focus on improving the speed of video encoding and decoding within implicit representations. We introduce two key components: NeRV-Enc, a transformer-based hyper-network for fast encoding; and NeRV-Dec, a parallel decoder for efficient video loading. NeRV-Enc achieves an impressive speed-up of $\mathbf{10^4\times}$ by eliminating gradient-based optimization. Meanwhile, NeRV-Dec simplifies video decoding, outperforming conventional codecs with a loading speed $\mathbf{11\times}$ faster, and surpassing RAM loading with pre-decoded videos ($\mathbf{2.5\times}$ faster while being $\mathbf{65\times}$ smaller in size).<|reference_end|>
|
arxiv
|
@article{chen2024fast,
title={Fast Encoding and Decoding for Implicit Video Representation},
author={Hao Chen, Saining Xie, Ser-Nam Lim, Abhinav Shrivastava},
journal={arXiv preprint arXiv:2409.19429},
year={2024},
archivePrefix={arXiv},
eprint={2409.19429},
primaryClass={cs.CV}
}
|
chen2024fast
|
arxiv-663140
|
2409.19430
|
'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants
|
<|reference_start|>'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants: The recent excitement around generative models has sparked a wave of proposals suggesting the replacement of human participation and labor in research and development--e.g., through surveys, experiments, and interviews--with synthetic research data generated by large language models (LLMs). We conducted interviews with 19 qualitative researchers to understand their perspectives on this paradigm shift. Initially skeptical, researchers were surprised to see similar narratives emerge in the LLM-generated data when using the interview probe. However, over several conversational turns, they went on to identify fundamental limitations, such as how LLMs foreclose participants' consent and agency, produce responses lacking in palpability and contextual depth, and risk delegitimizing qualitative research methods. We argue that the use of LLMs as proxies for participants enacts the surrogate effect, raising ethical and epistemological concerns that extend beyond the technical limitations of current models to the core of whether LLMs fit within qualitative ways of knowing.<|reference_end|>
|
arxiv
|
@article{kapania2024'simulacrum,
title={'Simulacrum of Stories': Examining Large Language Models as Qualitative
Research Participants},
author={Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, Sarah
Fox},
journal={arXiv preprint arXiv:2409.19430},
year={2024},
archivePrefix={arXiv},
eprint={2409.19430},
primaryClass={cs.HC cs.CL cs.LG}
}
|
kapania2024'simulacrum
|
arxiv-663141
|
2409.19431
|
Generalization Error of the Tilted Empirical Risk
|
<|reference_start|>Generalization Error of the Tilted Empirical Risk: The generalization error (risk) of a supervised statistical learning algorithm quantifies its prediction ability on previously unseen data. Inspired by exponential tilting, Li et al. (2021) proposed the tilted empirical risk as a non-linear risk metric for machine learning applications such as classification and regression problems. In this work, we examine the generalization error of the tilted empirical risk. In particular, we provide uniform and information-theoretic bounds on the tilted generalization error, defined as the difference between the population risk and the tilted empirical risk, with a convergence rate of $O(1/\sqrt{n})$ where $n$ is the number of training samples. Furthermore, we study the solution to the KL-regularized expected tilted empirical risk minimization problem and derive an upper bound on the expected tilted generalization error with a convergence rate of $O(1/n)$.<|reference_end|>
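The tilted empirical risk studied above, following Li et al. (2021), is R_t = (1/t) log((1/n) sum_i exp(t * loss_i)). A numerically stable numpy sketch:

import numpy as np
from scipy.special import logsumexp

def tilted_empirical_risk(losses, t):
    losses = np.asarray(losses, dtype=float)
    if t == 0.0:
        return losses.mean()      # t -> 0 recovers the ordinary empirical risk
    return (logsumexp(t * losses) - np.log(losses.size)) / t

losses = np.array([0.1, 0.2, 2.0])
print(tilted_empirical_risk(losses, t=1.0))    # > mean: t > 0 emphasizes high losses
print(tilted_empirical_risk(losses, t=-1.0))   # < mean: t < 0 emphasizes low losses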
|
arxiv
|
@article{aminian2024generalization,
title={Generalization Error of the Tilted Empirical Risk},
author={Gholamali Aminian, Amir R. Asadi, Tian Li, Ahmad Beirami, Gesine
Reinert, Samuel N. Cohen},
journal={arXiv preprint arXiv:2409.19431},
year={2024},
archivePrefix={arXiv},
eprint={2409.19431},
primaryClass={stat.ML cs.IT cs.LG math.IT}
}
|
aminian2024generalization
|
arxiv-663142
|
2409.19432
|
MicroFlow: An Efficient Rust-Based Inference Engine for TinyML
|
<|reference_start|>MicroFlow: An Efficient Rust-Based Inference Engine for TinyML: MicroFlow is an open-source TinyML framework for the deployment of Neural Networks (NNs) on embedded systems using the Rust programming language, specifically designed for efficiency and robustness, making it suitable for applications in critical environments. To achieve these objectives, MicroFlow employs a compiler-based inference engine approach, coupled with Rust's memory safety and language features. The proposed solution enables the successful deployment of NNs on highly resource-constrained devices, including bare-metal 8-bit microcontrollers with only 2kB of RAM. Furthermore, MicroFlow is able to use less Flash and RAM memory than other state-of-the-art solutions for deploying NN reference models (i.e. wake-word and person detection). It can also achieve faster inference compared to existing engines on medium-size NNs, and similar performance on bigger ones. The experimental results prove the efficiency and suitability of MicroFlow for the deployment of TinyML models in critical environments where resources are particularly limited.<|reference_end|>
|
arxiv
|
@article{carnelos2024microflow:,
title={MicroFlow: An Efficient Rust-Based Inference Engine for TinyML},
author={Matteo Carnelos, Francesco Pasti, Nicola Bellotto},
journal={arXiv preprint arXiv:2409.19432},
year={2024},
archivePrefix={arXiv},
eprint={2409.19432},
primaryClass={cs.LG cs.AI}
}
|
carnelos2024microflow:
|
arxiv-663143
|
2409.19433
|
RMLR: Extending Multinomial Logistic Regression into General Geometries
|
<|reference_start|>RMLR: Extending Multinomial Logistic Regression into General Geometries: Riemannian neural networks, which extend deep learning techniques to Riemannian spaces, have gained significant attention in machine learning. To better classify the manifold-valued features, researchers have started extending Euclidean multinomial logistic regression (MLR) into Riemannian manifolds. However, existing approaches suffer from limited applicability due to their strong reliance on specific geometric properties. This paper proposes a framework for designing Riemannian MLR over general geometries, referred to as RMLR. Our framework only requires minimal geometric properties, thus exhibiting broad applicability and enabling its use with a wide range of geometries. Specifically, we showcase our framework on the Symmetric Positive Definite (SPD) manifold and special orthogonal group, i.e., the set of rotation matrices. On the SPD manifold, we develop five families of SPD MLRs under five types of power-deformed metrics. On rotation matrices we propose Lie MLR based on the popular bi-invariant metric. Extensive experiments on different Riemannian backbone networks validate the effectiveness of our framework.<|reference_end|>
|
arxiv
|
@article{chen2024rmlr:,
title={RMLR: Extending Multinomial Logistic Regression into General Geometries},
author={Ziheng Chen, Yue Song, Rui Wang, Xiaojun Wu, Nicu Sebe},
journal={arXiv preprint arXiv:2409.19433},
year={2024},
archivePrefix={arXiv},
eprint={2409.19433},
primaryClass={cs.LG cs.AI}
}
|
chen2024rmlr:
|
arxiv-663144
|
2409.19434
|
Energy-Efficient Computation with DVFS using Deep Reinforcement Learning for Multi-Task Systems in Edge Computing
|
<|reference_start|>Energy-Efficient Computation with DVFS using Deep Reinforcement Learning for Multi-Task Systems in Edge Computing: Periodic soft real-time systems have broad applications in many areas, such as IoT. Finding an optimal energy-efficient policy that is adaptable to underlying edge devices while meeting deadlines for tasks has always been challenging. This research studies generalized multi-task, multi-deadline systems with reinforcement learning-based DVFS for energy saving. It addresses the limitation of previous work that models a periodic system as a single-task, single-deadline scenario, which is too simplified to cope with complex situations. The method encodes time-series information in the Linux kernel into a form that is easy to use for reinforcement learning, allowing the system to generate DVFS policies that adapt to system patterns based on the general workload. For encoding, we present two different methods for comparison. Both methods use only one performance counter, system utilization, and the kernel needs only minimal information from userspace. Our method is implemented on a Jetson Nano board (2GB) and tested with three fixed multitask workloads containing three, five, and eight tasks, respectively. For randomness and generalization, we also designed a random workload generator to build different multitask workloads for testing. Based on the test results, our method can save 3-10% power compared to Linux built-in governors.<|reference_end|>
|
arxiv
|
@article{li2024energy-efficient,
title={Energy-Efficient Computation with DVFS using Deep Reinforcement Learning
for Multi-Task Systems in Edge Computing},
author={Xinyi Li, Ti Zhou, Haoyu Wang, Man Lin},
journal={arXiv preprint arXiv:2409.19434},
year={2024},
archivePrefix={arXiv},
eprint={2409.19434},
primaryClass={cs.OS cs.LG}
}
|
li2024energy-efficient
|
arxiv-663145
|
2409.19435
|
Simulation-based inference with the Python Package sbijax
|
<|reference_start|>Simulation-based inference with the Python Package sbijax: Neural simulation-based inference (SBI) describes an emerging family of methods for Bayesian inference with intractable likelihood functions that use neural networks as surrogate models. Here we introduce sbijax, a Python package that implements a wide variety of state-of-the-art methods in neural simulation-based inference using a user-friendly programming interface. sbijax offers high-level functionality to quickly construct SBI estimators, and compute and visualize posterior distributions with only a few lines of code. In addition, the package provides functionality for conventional approximate Bayesian computation, to compute model diagnostics, and to automatically estimate summary statistics. By virtue of being entirely written in JAX, sbijax is extremely computationally efficient, allowing rapid training of neural networks and executing code automatically in parallel on both CPU and GPU.<|reference_end|>
|
arxiv
|
@article{dirmeier2024simulation-based,
title={Simulation-based inference with the Python Package sbijax},
author={Simon Dirmeier and Simone Ulzega and Antonietta Mira and Carlo Albert},
journal={arXiv preprint arXiv:2409.19435},
year={2024},
archivePrefix={arXiv},
eprint={2409.19435},
primaryClass={cs.LG stat.CO stat.ML}
}
|
dirmeier2024simulation-based
|
arxiv-663146
|
2409.19436
|
Introducing SDICE: An Index for Assessing Diversity of Synthetic Medical Datasets
|
<|reference_start|>Introducing SDICE: An Index for Assessing Diversity of Synthetic Medical Datasets: Advancements in generative modeling are pushing the state-of-the-art in synthetic medical image generation. These synthetic images can serve as an effective data augmentation method to aid the development of more accurate machine learning models for medical image analysis. While the fidelity of these synthetic images has progressively increased, the diversity of these images is an understudied phenomenon. In this work, we propose the SDICE index, which is based on the characterization of similarity distributions induced by a contrastive encoder. Given a synthetic dataset and a reference dataset of real images, the SDICE index measures the distance between the similarity score distributions of original and synthetic images, where the similarity scores are estimated using a pre-trained contrastive encoder. This distance is then normalized using an exponential function to provide a consistent metric that can be easily compared across domains. Experiments conducted on the MIMIC-chest X-ray and ImageNet datasets demonstrate the effectiveness of the SDICE index in assessing synthetic medical dataset diversity.<|reference_end|>
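A hedged numpy sketch of the idea: compare the distributions of pairwise contrastive similarity scores for real versus synthetic embeddings, then squash the distance into (0, 1]. The pairing scheme, the Wasserstein distance, and the exp(-d) normalization below are illustrative assumptions, not the paper's exact definition.

import numpy as np
from scipy.stats import wasserstein_distance

def sdice_like_index(real_emb, synth_emb):
    # Embeddings are assumed to come from a pre-trained contrastive encoder.
    real = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
    synth = synth_emb / np.linalg.norm(synth_emb, axis=1, keepdims=True)
    sims_real = (real @ real.T)[np.triu_indices(len(real), k=1)]
    sims_synth = (synth @ synth.T)[np.triu_indices(len(synth), k=1)]
    d = wasserstein_distance(sims_real, sims_synth)
    return float(np.exp(-d))    # 1.0 means matched similarity distributions

rng = np.random.default_rng(0)
print(sdice_like_index(rng.normal(size=(64, 128)), rng.normal(size=(64, 128))))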
|
arxiv
|
@article{alam2024introducing,
title={Introducing SDICE: An Index for Assessing Diversity of Synthetic Medical
Datasets},
author={Mohammed Talha Alam, Raza Imam, Mohammad Areeb Qazi, Asim Ukaye and
Karthik Nandakumar},
journal={arXiv preprint arXiv:2409.19436},
year={2024},
archivePrefix={arXiv},
eprint={2409.19436},
primaryClass={cs.CV}
}
|
alam2024introducing
|
arxiv-663147
|
2409.19437
|
Strongly-Polynomial Time and Validation Analysis of Policy Gradient Methods
|
<|reference_start|>Strongly-Polynomial Time and Validation Analysis of Policy Gradient Methods: Reinforcement learning lacks a principled measure of optimality, causing research to rely on algorithm-to-algorithm or baseline comparisons with no certificate of optimality. Focusing on finite state and action Markov decision processes (MDP), we develop a simple, computable gap function that provides both upper and lower bounds on the optimality gap. Therefore, convergence of the gap function is a stronger mode of convergence than convergence of the optimality gap, and it is equivalent to a new notion we call distribution-free convergence, where convergence is independent of any problem-dependent distribution. We show the basic policy mirror descent exhibits fast distribution-free convergence for both the deterministic and stochastic settings. We leverage the distribution-free convergence to uncover a couple of new results. First, the deterministic policy mirror descent can solve unregularized MDPs in strongly-polynomial time. Second, accuracy estimates can be obtained with no additional samples while running stochastic policy mirror descent and can be used as a termination criterion, which can be verified in the validation step.<|reference_end|>
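To make the objects above concrete, a toy numpy sketch of policy mirror descent on a random finite MDP, tracking the true optimality gap max_s V*(s) - V^pi(s). The paper's computable gap function sandwiches this quantity without knowing V*; here we evaluate everything exactly for illustration, and the step size is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a next-state distribution
r = rng.random((S, A))

def evaluate(pi):                            # exact V^pi via a linear solve
    P_pi = np.einsum('sa,sat->st', pi, P)
    r_pi = (pi * r).sum(axis=1)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

V_star = np.zeros(S)
for _ in range(2000):                        # value iteration for V*
    V_star = (r + gamma * P @ V_star).max(axis=1)

pi = np.full((S, A), 1.0 / A)
for _ in range(50):
    Q = r + gamma * P @ evaluate(pi)
    pi = pi * np.exp(2.0 * Q)                # KL mirror step, step size 2 assumed
    pi /= pi.sum(axis=1, keepdims=True)
print("optimality gap:", (V_star - evaluate(pi)).max())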
|
arxiv
|
@article{ju2024strongly-polynomial,
title={Strongly-polynomial time and validation analysis of policy gradient
methods},
author={Caleb Ju, Guanghui Lan},
journal={arXiv preprint arXiv:2409.19437},
year={2024},
archivePrefix={arXiv},
eprint={2409.19437},
primaryClass={cs.LG cs.AI cs.DS math.OC}
}
|
ju2024strongly-polynomial
|
arxiv-663148
|
2409.19439
|
Contrastive ground-level image and remote sensing pre-training improves representation learning for natural world imagery
|
<|reference_start|>Contrastive ground-level image and remote sensing pre-training improves representation learning for natural world imagery: Multimodal image-text contrastive learning has shown that joint representations can be learned across modalities. Here, we show how leveraging multiple views of image data with contrastive learning can improve downstream fine-grained classification performance for species recognition, even when one view is absent. We propose ContRastive Image-remote Sensing Pre-training (CRISP), a new pre-training task for ground-level and aerial image representation learning of the natural world, and introduce Nature Multi-View (NMV), a dataset of natural world imagery including $>3$ million ground-level and aerial image pairs for over 6,000 plant taxa across the ecologically diverse state of California. The NMV dataset and accompanying material are available at hf.co/datasets/andyvhuynh/NatureMultiView.<|reference_end|>
|
arxiv
|
@article{huynh2024contrastive,
title={Contrastive ground-level image and remote sensing pre-training improves
representation learning for natural world imagery},
author={Andy V. Huynh, Lauren E. Gillespie, Jael Lopez-Saucedo, Claire Tang,
Rohan Sikand, Moisés Expósito-Alonso},
journal={arXiv preprint arXiv:2409.19439},
year={2024},
archivePrefix={arXiv},
eprint={2409.19439},
primaryClass={cs.CV}
}
|
huynh2024contrastive
|
arxiv-663149
|
2409.19442
|
Trigger-Based Fragile Model Watermarking for Image Transformation Networks
|
<|reference_start|>Trigger-Based Fragile Model Watermarking for Image Transformation Networks: In fragile watermarking, a sensitive watermark is embedded in an object in a manner such that the watermark breaks upon tampering. This fragile process can be used to ensure the integrity and source of watermarked objects. While fragile watermarking for model integrity has been studied in classification models, image transformation/generation models have yet to be explored. We introduce a novel, trigger-based fragile model watermarking system for image transformation/generation networks that takes advantage of properties inherent to image outputs. For example, watermarks can manifest as specific visual patterns, styles, or anomalies in the generated content when particular trigger inputs are used. Our approach, distinct from robust watermarking, effectively verifies the model's source and integrity across various datasets and attacks, outperforming baselines by 94%. We conduct additional experiments to analyze the security of this approach, the flexibility of the trigger and resulting watermark, and the sensitivity of the watermarking loss on performance. We also demonstrate the applicability of this approach on two different tasks (one immediate task and one downstream task). This is the first work to consider fragile model watermarking for image transformation/generation networks.<|reference_end|>
|
arxiv
|
@article{robinette2024trigger-based,
title={Trigger-Based Fragile Model Watermarking for Image Transformation
Networks},
author={Preston K. Robinette, Dung T. Nguyen, Samuel Sasaki, Taylor T. Johnson},
journal={arXiv preprint arXiv:2409.19442},
year={2024},
archivePrefix={arXiv},
eprint={2409.19442},
primaryClass={cs.CR}
}
|
robinette2024trigger-based
|
arxiv-663150
|
2409.19445
|
HTML-LSTM: Information Extraction from HTML Tables in Web Pages using Tree-Structured LSTM
|
<|reference_start|>HTML-LSTM: Information Extraction from HTML Tables in Web Pages using Tree-Structured LSTM: In this paper, we propose a novel method for extracting information from HTML tables with similar contents but different structures. We aim to integrate multiple HTML tables into a single table for retrieval of information contained in various Web pages. The method is designed by extending the tree-structured LSTM, a neural network for tree-structured data, in order to extract both the linguistic and the structural information of HTML data. We evaluate the proposed method through experiments using real data published on the WWW.<|reference_end|>
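HTML-LSTM builds on tree-structured LSTMs over the DOM. As background, a minimal child-sum Tree-LSTM cell (Tai et al., 2015) in PyTorch; the paper's specific extensions for HTML structure are not reproduced here, and the dimensions are illustrative.

import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.iou = nn.Linear(x_dim + h_dim, 3 * h_dim)  # input/output/update gates
        self.f_x = nn.Linear(x_dim, h_dim)              # forget gate, input part
        self.f_h = nn.Linear(h_dim, h_dim)              # forget gate, per-child part

    def forward(self, x, child_h, child_c):
        # x: (x_dim,); child_h, child_c: (num_children, h_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        f = torch.sigmoid(self.f_x(x) + self.f_h(child_h))  # one gate per child
        c = torch.sigmoid(i) * torch.tanh(u) + (f * child_c).sum(dim=0)
        return torch.sigmoid(o) * torch.tanh(c), c

cell = ChildSumTreeLSTMCell(16, 32)
h, c = cell(torch.randn(16), torch.randn(2, 32), torch.randn(2, 32))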
|
arxiv
|
@article{kawamura2024html-lstm:,
title={HTML-LSTM: Information Extraction from HTML Tables in Web Pages using
Tree-Structured LSTM},
author={Kazuki Kawamura and Akihiro Yamamoto},
journal={Discovery Science. DS 2021. Lecture Notes in Computer Science, vol
12986},
year={2024},
doi={10.1007/978-3-030-88942-5_3},
archivePrefix={arXiv},
eprint={2409.19445},
primaryClass={cs.IR cs.LG}
}
|
kawamura2024html-lstm:
|
arxiv-663151
|
2409.19448
|
Advanced Clustering Techniques for Speech Signal Enhancement: A Review and Metanalysis of Fuzzy C-Means, K-Means, and Kernel Fuzzy C-Means Methods
|
<|reference_start|>Advanced Clustering Techniques for Speech Signal Enhancement: A Review and Metanalysis of Fuzzy C-Means, K-Means, and Kernel Fuzzy C-Means Methods: Speech signal processing is a cornerstone of modern communication technologies, tasked with improving the clarity and comprehensibility of audio data in noisy environments. The primary challenge in this field is the effective separation and recognition of speech from background noise, crucial for applications ranging from voice-activated assistants to automated transcription services. The quality of speech recognition directly impacts user experience and accessibility in technology-driven communication. This review paper explores advanced clustering techniques, particularly focusing on the Kernel Fuzzy C-Means (KFCM) method, to address these challenges. Our findings indicate that KFCM, compared to traditional methods like K-Means (KM) and Fuzzy C-Means (FCM), provides superior performance in handling non-linear and non-stationary noise conditions in speech signals. The most notable outcome of this review is the adaptability of KFCM to various noisy environments, making it a robust choice for speech enhancement applications. Additionally, the paper identifies gaps in current methodologies, such as the need for more dynamic clustering algorithms that can adapt in real time to changing noise conditions without compromising speech recognition quality. Key contributions include a detailed comparative analysis of current clustering algorithms and suggestions for further integrating hybrid models that combine KFCM with neural networks to enhance speech recognition accuracy. Through this review, we advocate for a shift towards more sophisticated, adaptive clustering techniques that can significantly improve speech enhancement and pave the way for more resilient speech processing systems.<|reference_end|>
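To ground the comparison, a compact numpy sketch of standard Fuzzy C-Means, the base algorithm the review builds on. The kernelized variant (KFCM) replaces the squared Euclidean distance below with a kernel-induced distance, which this sketch does not implement.

import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # memberships, shape (n, c)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted cluster centers
        d2 = ((X[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12
        ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1))
        U = 1.0 / ratio.sum(axis=-1)                  # standard FCM membership update
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 4.0])
U, V = fuzzy_c_means(X)
print(V)    # two centers, near (0, 0) and (4, 4)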
|
arxiv
|
@article{abdullah2024advanced,
title={Advanced Clustering Techniques for Speech Signal Enhancement: A Review
and Metanalysis of Fuzzy C-Means, K-Means, and Kernel Fuzzy C-Means Methods},
author={Abdulhady Abas Abdullah, Aram Mahmood Ahmed, Tarik Rashid, Hadi Veisi,
Yassin Hussein Rassul, Bryar Hassan, Polla Fattah, Sabat Abdulhameed Ali,
Ahmed S. Shamsaldin},
journal={arXiv preprint arXiv:2409.19448},
year={2024},
archivePrefix={arXiv},
eprint={2409.19448},
primaryClass={cs.SD cs.AI eess.AS}
}
|
abdullah2024advanced
|
arxiv-663152
|
2409.19450
|
Secret Use of Large Language Models
|
<|reference_start|>Secret Use of Large Language Models: The advancements of Large Language Models (LLMs) have decentralized the responsibility for the transparency of AI usage. Specifically, LLM users are now encouraged or required to disclose the use of LLM-generated content for varied types of real-world tasks. However, an emerging phenomenon, users' secret use of LLMs, raises challenges in ensuring end users adhere to the transparency requirement. Our study used a mixed-methods design, with an exploratory survey (125 real-world secret use cases reported) and a controlled experiment among 300 users, to investigate the contexts and causes behind the secret use of LLMs. We found that such secretive behavior is often triggered by certain tasks, transcending demographic and personality differences among users. Task types were found to affect users' intentions to engage in secretive use, primarily through influencing perceived external judgment regarding LLM usage. Our results yield important insights for future work on designing interventions to encourage more transparent disclosure of the use of LLMs or other AI technologies.<|reference_end|>
|
arxiv
|
@article{zhang2024secret,
title={Secret Use of Large Language Model (LLM)},
author={Zhiping Zhang, Chenxinran Shen, Bingsheng Yao, Dakuo Wang and Tianshi
Li},
journal={arXiv preprint arXiv:2409.19450},
year={2024},
archivePrefix={arXiv},
eprint={2409.19450},
primaryClass={cs.HC cs.AI}
}
|
zhang2024secret
|
arxiv-663153
|
2409.19454
|
See Where You Read with Eye Gaze Tracking and Large Language Model
|
<|reference_start|>See Where You Read with Eye Gaze Tracking and Large Language Model: Losing track of reading progress during line switching can be frustrating. Eye gaze tracking technology offers a potential solution by highlighting read paragraphs, aiding users in avoiding wrong line switches. However, the gap between gaze tracking accuracy (2-3 cm) and text line spacing (3-5 mm) makes direct application impractical. Existing methods leverage the linear reading pattern but fail during jump reading. This paper presents a reading tracking and highlighting system that supports both linear and jump reading. Based on experimental insights from a study of the gaze behavior of 16 users, two gaze error models are designed to enable both jump reading detection and relocation. The system further leverages the large language model's contextual perception capability to aid reading tracking. A line-gaze alignment opportunity specific to the reading tracking domain is also exploited to enable dynamic and frequent calibration of the gaze results. Controlled experiments demonstrate reliable linear reading tracking, as well as 84% accuracy in tracking jump reading. Furthermore, real field tests with 18 volunteers demonstrated the system's effectiveness in tracking and highlighting read paragraphs, improving reading efficiency, and enhancing user experience.<|reference_end|>
|
arxiv
|
@article{yang2024see,
title={See Where You Read with Eye Gaze Tracking and Large Language Model},
author={Sikai Yang, Gang Yan, Wan Du},
journal={arXiv preprint arXiv:2409.19454},
year={2024},
archivePrefix={arXiv},
eprint={2409.19454},
primaryClass={cs.HC cs.AI cs.CV}
}
|
yang2024see
|
arxiv-663154
|
2409.19455
|
The Importance of Adaptive Decision-Making for Autonomous Long-Range Planetary Surface Mobility
|
<|reference_start|>The Importance of Adaptive Decision-Making for Autonomous Long-Range Planetary Surface Mobility: Long-distance driving is an important component of planetary surface exploration. Unforeseen events often require human operators to adjust mobility plans, but this approach does not scale and will be insufficient for future missions. Interest in self-reliant rovers is increasing, however the research community has not yet given significant attention to autonomous, adaptive decision-making. In this paper, we look back at specific planetary mobility operations where human-guided adaptive planning played an important role in mission safety and productivity. Inspired by the abilities of human experts, we identify shortcomings of existing autonomous mobility algorithms for robots operating in off-road environments like planetary surfaces. We advocate for adaptive decision-making capabilities such as unassisted learning from past experiences and more reliance on stochastic world models. The aim of this work is to highlight promising research avenues to enhance ground planning tools and, ultimately, long-range autonomy algorithms on board planetary rovers.<|reference_end|>
|
arxiv
|
@article{lamarre2024the,
title={The Importance of Adaptive Decision-Making for Autonomous Long-Range
Planetary Surface Mobility},
author={Olivier Lamarre, Jonathan Kelly},
journal={arXiv preprint arXiv:2409.19455},
year={2024},
archivePrefix={arXiv},
eprint={2409.19455},
primaryClass={cs.RO}
}
|
lamarre2024the
|
arxiv-663155
|
2409.19456
|
Jupyter Notebook Attacks Taxonomy: Ransomware, Data Exfiltration, and Security Misconfiguration
|
<|reference_start|>Jupyter Notebook Attacks Taxonomy: Ransomware, Data Exfiltration, and Security Misconfiguration: Open-science collaboration using Jupyter Notebooks may expose expensively trained AI models, high-performance computing resources, and training data to security vulnerabilities, such as unauthorized access, accidental deletion, or misuse. The ubiquitous deployments of Jupyter Notebooks (~11 million public notebooks on GitHub) have transformed collaborative scientific computing by enabling reproducible research. Jupyter is the main HPC science gateway interface between AI researchers and supercomputers at academic institutions, such as the National Center for Supercomputing Applications (NCSA), national labs, and the industry. An impactful attack targeting Jupyter could disrupt scientific missions and business operations. This paper describes the network-based attack taxonomy of Jupyter Notebooks, covering ransomware, data exfiltration, security misconfiguration, and resource abuse for cryptocurrency mining. The open nature of Jupyter (direct data access, arbitrary code execution in multiple programming language kernels) and its vast attack interface (terminal, file browser, untrusted cells) also attract attacks attempting to misuse supercomputing resources and steal state-of-the-art research artifacts. Jupyter uses encrypted datagrams of rapidly evolving WebSocket protocols that challenge even the most state-of-the-art network observability tools, such as Zeek. We envisage that even more sophisticated AI-driven attacks can be adapted to target Jupyter, where defenders have limited visibility. In addition, Jupyter's cryptographic design should be adapted to resist emerging quantum threats. On balance, this is the first paper to systematically describe the threat model against Jupyter Notebooks and lay out the design of auditing Jupyter to have better visibility against such attacks.<|reference_end|>
|
arxiv
|
@article{cao2024jupyter,
title={Jupyter Notebook Attacks Taxonomy: Ransomware, Data Exfiltration, and
Security Misconfiguration},
author={Phuong Cao},
journal={arXiv preprint arXiv:2409.19456},
year={2024},
archivePrefix={arXiv},
eprint={2409.19456},
primaryClass={cs.CR cs.NI}
}
|
cao2024jupyter
|
arxiv-663156
|
2409.19457
|
A Parameter-Efficient Tuning Framework for Language-guided Object Grounding and Robot Grasping
|
<|reference_start|>A Parameter-Efficient Tuning Framework for Language-guided Object Grounding and Robot Grasping: The language-guided robot grasping task requires a robot agent to integrate multimodal information from both visual and linguistic inputs to predict actions for target-driven grasping. While recent approaches utilizing Multimodal Large Language Models (MLLMs) have shown promising results, their extensive computation and data demands limit the feasibility of local deployment and customization. To address this, we propose a novel CLIP-based multimodal parameter-efficient tuning (PET) framework designed for three language-guided object grounding and grasping tasks: (1) Referring Expression Segmentation (RES), (2) Referring Grasp Synthesis (RGS), and (3) Referring Grasp Affordance (RGA). Our approach introduces two key innovations: a bi-directional vision-language adapter that aligns multimodal inputs for pixel-level language understanding and a depth fusion branch that incorporates geometric cues to facilitate robot grasping predictions. Experimental results demonstrate superior performance in the RES object grounding task compared with existing CLIP-based full-model tuning or PET approaches. In the RGS and RGA tasks, our model not only effectively interprets object attributes based on simple language descriptions but also shows strong potential for comprehending complex spatial reasoning scenarios, such as multiple identical objects present in the workspace.<|reference_end|>
|
arxiv
|
@article{yu2024a,
title={A Parameter-Efficient Tuning Framework for Language-guided Object
Grounding and Robot Grasping},
author={Houjian Yu, Mingen Li, Alireza Rezazadeh, Yang Yang and Changhyun Choi},
journal={arXiv preprint arXiv:2409.19457},
year={2024},
archivePrefix={arXiv},
eprint={2409.19457},
primaryClass={cs.RO}
}
|
yu2024a
|
arxiv-663157
|
2409.19458
|
Scalable Fine-tuning from Multiple Data Sources: A First-Order Approximation Approach
|
<|reference_start|>Scalable Fine-tuning from Multiple Data Sources:A First-Order Approximation Approach: We study the problem of fine-tuning a language model (LM) for a target task by optimally using the information from $n$ auxiliary tasks. This problem has broad applications in NLP, such as targeted instruction tuning and data selection in chain-of-thought fine-tuning. The key challenge of this problem is that not all auxiliary tasks are useful to improve the performance of the target task. Thus, choosing the right subset of auxiliary tasks is crucial. Conventional subset selection methods, such as forward & backward selection, are unsuitable for LM fine-tuning because they require repeated training on subsets of auxiliary tasks. This paper introduces a new algorithm to estimate model fine-tuning performances without repeated training. Our algorithm first performs multitask training using the data of all the tasks to obtain a meta initialization. Then, we approximate the model fine-tuning loss of a subset using functional values and gradients from the meta initialization. Empirically, we find that this gradient-based approximation holds with remarkable accuracy for twelve transformer-based LMs. Thus, we can now estimate fine-tuning performances on CPUs within a few seconds. We conduct extensive experiments to validate our approach, delivering a speedup of $30\times$ over conventional subset selection while incurring only $1\%$ error of the true fine-tuning performances. In downstream evaluations of instruction tuning and chain-of-thought fine-tuning, our approach improves over prior methods that utilize gradient or representation similarity for subset selection by up to $3.8\%$.<|reference_end|>
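A hedged numpy sketch of the gradient-based estimation described above: from one meta initialization, the target-task loss after fine-tuning on a subset of auxiliary tasks is predicted by a first-order Taylor expansion, so no subset is retrained. The one-step dynamics and learning rate are simplifying assumptions, not the paper's exact estimator.

import numpy as np

def estimate_target_loss(loss_tgt, grad_tgt, aux_grads, subset, lr=0.1):
    # loss_tgt, grad_tgt: target-task loss and gradient at the meta initialization.
    # aux_grads: (n_tasks, dim) per-task gradients at the same point.
    step = lr * aux_grads[list(subset)].mean(axis=0)  # hypothetical averaged GD step
    return loss_tgt - grad_tgt @ step                 # first-order prediction

rng = np.random.default_rng(0)
aux_grads = rng.normal(size=(12, 1000))               # 12 auxiliary tasks
grad_tgt = rng.normal(size=1000)
print(estimate_target_loss(2.3, grad_tgt, aux_grads, subset={0, 3, 7}))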
|
arxiv
|
@article{li2024scalable,
title={Scalable Fine-tuning from Multiple Data Sources: A First-Order
Approximation Approach},
author={Dongyue Li, Ziniu Zhang, Lu Wang, Hongyang R. Zhang},
journal={arXiv preprint arXiv:2409.19458},
year={2024},
archivePrefix={arXiv},
eprint={2409.19458},
primaryClass={cs.CL cs.LG}
}
|
li2024scalable
|
arxiv-663158
|
2409.19459
|
Language-guided Robust Navigation for Mobile Robots in Dynamically-changing Environments
|
<|reference_start|>Language-guided Robust Navigation for Mobile Robots in Dynamically-changing Environments: In this paper, we develop an embodied AI system for human-in-the-loop navigation with a wheeled mobile robot. We propose a direct yet effective method of monitoring the robot's current plan to detect changes in the environment that impact the intended trajectory of the robot significantly and then query a human for feedback. We also develop a means to parse human feedback expressed in natural language into local navigation waypoints and integrate it into a global planning system, by leveraging a map of semantic features and an aligned obstacle map. Extensive testing in simulation and physical hardware experiments with a resource-constrained wheeled robot tasked to navigate in a real-world environment validate the efficacy and robustness of our method. This work can support applications like precision agriculture and construction, where persistent monitoring of the environment provides a human with information about the environment state.<|reference_end|>
|
arxiv
|
@article{simons2024language-guided,
title={Language-guided Robust Navigation for Mobile Robots in
Dynamically-changing Environments},
author={Cody Simons, Zhichao Liu, Brandon Marcus, Amit K. Roy-Chowdhury,
Konstantinos Karydis},
journal={arXiv preprint arXiv:2409.19459},
year={2024},
archivePrefix={arXiv},
eprint={2409.19459},
primaryClass={cs.RO cs.CV}
}
|
simons2024language-guided
|
arxiv-663159
|
2409.19460
|
On the universality of neural encodings in CNNs
|
<|reference_start|>On the universality of neural encodings in CNNs: We explore the universality of neural encodings in convolutional neural networks trained on image classification tasks. We develop a procedure to directly compare the learned weights rather than their representations. It is based on a factorization of spatial and channel dimensions and measures the similarity of aligned weight covariances. We show that, for a range of layers of VGG-type networks, the learned eigenvectors appear to be universal across different natural image datasets. Our results suggest the existence of a universal neural encoding for natural images. They explain, at a more fundamental level, the success of transfer learning. Our work shows that, instead of aiming at maximizing the performance of neural networks, one can alternatively attempt to maximize the universality of the learned encoding, in order to build a principled foundation model.<|reference_end|>
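A hedged numpy sketch of the comparison above: reshape two conv layers' weights to isolate the spatial dimension, compute spatial weight covariances, and measure how well the leading eigenvectors align via a subspace overlap. The paper's full spatial/channel factorization and alignment procedure is richer than this illustration.

import numpy as np

def spatial_eigvecs(W, top=5):
    # W: (out_ch, in_ch, k, k) -> treat each k*k filter as one sample
    F_ = W.reshape(-1, W.shape[-2] * W.shape[-1])
    vals, vecs = np.linalg.eigh(np.cov(F_, rowvar=False))
    return vecs[:, np.argsort(vals)[::-1][:top]]      # leading eigenvectors

def subspace_overlap(U, V):
    s = np.linalg.svd(U.T @ V, compute_uv=False)      # cosines of principal angles
    return (s ** 2).mean()                            # 1.0 = identical spans

W1 = np.random.randn(64, 32, 3, 3)                    # stand-ins for trained weights
W2 = np.random.randn(64, 32, 3, 3)
print(subspace_overlap(spatial_eigvecs(W1), spatial_eigvecs(W2)))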
|
arxiv
|
@article{guth2024on,
title={On the universality of neural encodings in CNNs},
author={Florentin Guth and Brice Ménard},
journal={arXiv preprint arXiv:2409.19460},
year={2024},
archivePrefix={arXiv},
eprint={2409.19460},
primaryClass={cs.LG cs.CV}
}
|
guth2024on
|
arxiv-663160
|
2409.19461
|
Accelerating Malware Classification: A Vision Transformer Solution
|
<|reference_start|>Accelerating Malware Classification: A Vision Transformer Solution: The escalating frequency and scale of recent malware attacks underscore the urgent need for swift and precise malware classification in the ever-evolving cybersecurity landscape. Key challenges include accurately categorizing closely related malware families. To tackle this evolving threat landscape, this paper proposes LeViT-MC, a novel architecture that produces state-of-the-art results in malware detection and classification while being more time-efficient. LeViT-MC leverages a vision transformer-based architecture, an image-based visualization approach, and advanced transfer learning techniques. Experimental results on multi-class malware classification using the MaleVis dataset indicate LeViT-MC's significant advantage over existing models. This study underscores the critical importance of combining image-based and transfer learning techniques, with vision transformers at the forefront of the ongoing battle against evolving cyber threats.<|reference_end|>
|
arxiv
|
@article{bavishi2024accelerating,
title={Accelerating Malware Classification: A Vision Transformer Solution},
author={Shrey Bavishi, Shrey Modi},
journal={arXiv preprint arXiv:2409.19461},
year={2024},
archivePrefix={arXiv},
eprint={2409.19461},
primaryClass={cs.CR cs.CV cs.LG}
}
|
bavishi2024accelerating
|
arxiv-663161
|
2409.19464
|
Blown up by an equilateral: Poncelet triangles about the incircle and their degeneracies
|
<|reference_start|>Blown up by an equilateral: Poncelet triangles about the incircle and their degeneracies: We tour several harmonious Euclidean properties of Poncelet triangles inscribed in an ellipse and circumscribing the incircle. We also show that a number of degenerate behaviors are triggered by the presence of an equilateral triangle in the family.<|reference_end|>
|
arxiv
|
@article{helman2024blown,
title={Blown up by an equilateral: Poncelet triangles about the incircle and
their degeneracies},
author={Mark Helman, Ronaldo A. Garcia, Dan Reznik},
journal={arXiv preprint arXiv:2409.19464},
year={2024},
archivePrefix={arXiv},
eprint={2409.19464},
primaryClass={math.DS cs.CG}
}
|
helman2024blown
|
arxiv-663162
|
2409.19465
|
Construction of the Sparsest Maximally $r$-Robust Graphs
|
<|reference_start|>Construction of the Sparsest Maximally $r$-Robust Graphs: In recent years, the notion of r-robustness for the communication graph of the network has been introduced to address the challenge of achieving consensus in the presence of misbehaving agents. Higher r-robustness typically implies higher tolerance to malicious information towards achieving resilient consensus, but it also implies more edges for the communication graph. This in turn conflicts with the need to minimize communication due to limited resources in real-world applications (e.g., multi-robot networks). In this paper, our contributions are twofold. (a) We provide the necessary subgraph structures and tight lower bounds on the number of edges required for graphs with a given number of nodes to achieve maximum robustness. (b) We then use the results of (a) to introduce two classes of graphs that maintain maximum robustness with the least number of edges. Our work is validated through a series of simulations.<|reference_end|>
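To make the notion concrete, a brute-force r-robustness check in Python (networkx): a graph is r-robust if for every pair of nonempty disjoint node subsets, at least one subset contains a node with at least r neighbors outside that subset. This is exponential in the number of nodes, so it is an illustration for tiny graphs, not the construction proposed in the paper.

from itertools import combinations
import networkx as nx

def is_r_robust(G, r):
    nodes = list(G)
    subsets = [frozenset(c) for k in range(1, len(nodes))
               for c in combinations(nodes, k)]
    for S1 in subsets:
        for S2 in subsets:
            if S1 & S2:
                continue                     # subsets must be disjoint
            if not any(len(set(G[v]) - S) >= r
                       for S in (S1, S2) for v in S):
                return False
    return True

print(is_r_robust(nx.complete_graph(5), 3))  # True: K_n is ceil(n/2)-robust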
|
arxiv
|
@article{lee2024construction,
title={Construction of the Sparsest Maximally $r$-Robust Graphs},
author={Haejoon Lee and Dimitra Panagou},
journal={arXiv preprint arXiv:2409.19465},
year={2024},
archivePrefix={arXiv},
eprint={2409.19465},
primaryClass={eess.SY cs.SY}
}
|
lee2024construction
|
arxiv-663163
|
2409.19466
|
Robot Guided Evacuation with Viewpoint Constraints
|
<|reference_start|>Robot Guided Evacuation with Viewpoint Constraints: We present a viewpoint-based non-linear Model Predictive Control (MPC) for evacuation guiding robots. Specifically, the proposed MPC algorithm enables evacuation guiding robots to track and guide cooperative human targets in emergency scenarios. Our algorithm accounts for the environment layout as well as distances between the robot and human target and distance to the goal location. A key challenge for an evacuation guiding robot is the trade-off between its planned motion for leading the target toward a goal position and staying in the target's viewpoint while maintaining line-of-sight for guiding. We illustrate the effectiveness of our proposed evacuation guiding algorithm in both simulated and real-world environments with an Unmanned Aerial Vehicle (UAV) guiding a human. Our results suggest that using the contextual information from the environment for motion planning increases the visibility of the guiding UAV to the human while achieving faster total evacuation time.<|reference_end|>
|
arxiv
|
@article{chen2024robot,
title={Robot Guided Evacuation with Viewpoint Constraints},
author={Gong Chen, Malika Meghjani, Marcel Bartholomeus Prasetyo},
journal={arXiv preprint arXiv:2409.19466},
year={2024},
archivePrefix={arXiv},
eprint={2409.19466},
primaryClass={cs.RO}
}
|
chen2024robot
|
arxiv-663164
|
2409.19467
|
INSIGHTBUDDY-AI: Medication Extraction and Entity Linking using Large Language Models and Ensemble Learning
|
<|reference_start|>INSIGHTBUDDY-AI: Medication Extraction and Entity Linking using Large Language Models and Ensemble Learning: Medication extraction and mining play an important role in healthcare NLP research due to their practical applications in hospital settings, such as the mapping of extracted medications into standard clinical knowledge bases (SNOMED-CT, BNF, etc.). In this work, we investigate state-of-the-art LLMs in text mining tasks on medications and their related attributes such as dosage, route, strength, and adverse effects. In addition, we explore different ensemble learning methods (\textsc{Stack-Ensemble} and \textsc{Voting-Ensemble}) to augment the performance of individual LLMs. Our ensemble learning results demonstrate better performance than the individually fine-tuned base models BERT, RoBERTa, RoBERTa-L, BioBERT, BioClinicalBERT, BioMedRoBERTa, ClinicalBERT, and PubMedBERT across general and specific domains. Finally, we build an entity-linking function to map extracted medical terminologies into SNOMED-CT codes and British National Formulary (BNF) codes, which are further mapped to the Dictionary of Medicines and Devices (dm+d) and ICD. Our model's toolkit and desktop applications are publicly available at \url{https://github.com/HECTA-UoM/ensemble-NER}.<|reference_end|>
|
arxiv
|
@article{romero2024insightbuddy-ai:,
title={INSIGHTBUDDY-AI: Medication Extraction and Entity Linking using Large
Language Models and Ensemble Learning},
author={Pablo Romero and Lifeng Han and Goran Nenadic},
journal={arXiv preprint arXiv:2409.19467},
year={2024},
archivePrefix={arXiv},
eprint={2409.19467},
primaryClass={cs.CL cs.AI}
}
|
romero2024insightbuddy-ai:
|
arxiv-663165
|
2409.19471
|
SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models
|
<|reference_start|>SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models: Despite significant advancements in large language models (LLMs) that enhance robot agents' understanding and execution of natural language (NL) commands, ensuring the agents adhere to user-specified constraints remains challenging, particularly for complex commands and long-horizon tasks. To address this challenge, we present three key insights, equivalence voting, constrained decoding, and domain-specific fine-tuning, which significantly enhance LLM planners' capability in handling complex tasks. Equivalence voting ensures consistency by generating and sampling multiple Linear Temporal Logic (LTL) formulas from NL commands, grouping equivalent LTL formulas, and selecting the majority group of formulas as the final LTL formula. Constrained decoding then uses the generated LTL formula to enforce the autoregressive inference of plans, ensuring the generated plans conform to the LTL. Domain-specific fine-tuning customizes LLMs to produce safe and efficient plans within specific task domains. Our approach, Safe Efficient LLM Planner (SELP), combines these insights to create LLM planners to generate plans adhering to user commands with high confidence. We demonstrate the effectiveness and generalizability of SELP across different robot agents and tasks, including drone navigation and robot manipulation. For drone navigation tasks, SELP outperforms state-of-the-art planners by 10.8% in safety rate (i.e., finishing tasks conforming to NL commands) and by 19.8% in plan efficiency. For robot manipulation tasks, SELP achieves 20.4% improvement in safety rate. Our datasets for evaluating NL-to-LTL and robot task planning will be released in github.com/lt-asset/selp.<|reference_end|>
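The equivalence-voting step described above lends itself to a compact sketch. The fragment below is a hypothetical illustration, not the authors' code: the LTL equivalence test is injected as a callable (a real one could be backed by an LTL tool such as Spot), sampled formulas are grouped into equivalence classes, and a representative of the majority class wins.

```python
# Majority voting over semantic-equivalence classes of sampled LTL formulas.
from typing import Callable, List

def equivalence_vote(samples: List[str],
                     equivalent: Callable[[str, str], bool]) -> str:
    """Group sampled LTL formulas into equivalence classes and return a
    representative of the largest class (ties broken by first seen)."""
    groups: List[List[str]] = []
    for f in samples:
        for g in groups:
            if equivalent(g[0], f):
                g.append(f)
                break
        else:
            groups.append([f])
    return max(groups, key=len)[0]

# Toy usage, with string identity standing in for a real LTL equivalence check:
votes = ["G(a -> F b)", "G(a -> F b)", "F b"]
print(equivalence_vote(votes, lambda x, y: x == y))  # G(a -> F b)
```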
|
arxiv
|
@article{wu2024selp:,
title={SELP: Generating Safe and Efficient Task Plans for Robot Agents with
Large Language Models},
author={Yi Wu, Zikang Xiong, Yiran Hu, Shreyash S. Iyengar, Nan Jiang, Aniket
Bera, Lin Tan, Suresh Jagannathan},
journal={arXiv preprint arXiv:2409.19471},
year={2024},
archivePrefix={arXiv},
eprint={2409.19471},
primaryClass={cs.RO cs.AI cs.CL cs.FL}
}
|
wu2024selp:
|
arxiv-663166
|
2409.19472
|
Towards Croppable Implicit Neural Representations
|
<|reference_start|>Towards Croppable Implicit Neural Representations: Implicit Neural Representations (INRs) have piqued interest in recent years due to their ability to encode natural signals using neural networks. While INRs allow for useful applications such as interpolating new coordinates and signal compression, their black-box nature makes it difficult to modify them post-training. In this paper, we explore the idea of editable INRs, and specifically focus on the widely used cropping operation. To this end, we present Local-Global SIRENs -- a novel INR architecture that supports cropping by design. Local-Global SIRENs are based on combining local and global feature extraction for signal encoding. What makes their design unique is the ability to effortlessly remove specific portions of an encoded signal, with a proportional weight decrease. This is achieved by eliminating the corresponding weights from the network, without the need for retraining. We further show how this architecture can be used to support the straightforward extension of previously encoded signals. Beyond signal editing, we examine how the Local-Global approach can accelerate training, enhance encoding of various signals, improve downstream performance, and be applied to modern INRs such as INCODE, highlighting its potential and flexibility. Code is available at https://github.com/maorash/Local-Global-INRs.<|reference_end|>
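As a much-simplified sketch of the "cropping by design" idea (local parts only, whereas Local-Global SIRENs also mix in global features), the toy PyTorch fragment below encodes a 2-D signal with one small sine-activated MLP per spatial patch, so cropping a region amounts to deleting that patch's weights, with no retraining. All sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(30.0 * x)  # SIREN-style sine activation

def make_local_inr():
    return nn.Sequential(nn.Linear(2, 32), Sine(), nn.Linear(32, 3))

# One local INR per cell of a 4x4 grid over the image domain [0, 1]^2.
patches = {(i, j): make_local_inr() for i in range(4) for j in range(4)}

def query(xy):
    """Evaluate the encoded image at coordinates xy in [0, 1]^2, shape (N, 2)."""
    i = (xy[:, 0] * 4).clamp(max=3.999).long()
    j = (xy[:, 1] * 4).clamp(max=3.999).long()
    out = torch.zeros(xy.shape[0], 3)
    for key, net in patches.items():
        mask = (i == key[0]) & (j == key[1])
        if mask.any():
            out[mask] = net(xy[mask])
    return out

# "Crop" the right half of the image by simply deleting those patches' weights.
patches = {k: v for k, v in patches.items() if k[0] < 2}
print(query(torch.rand(8, 2)).shape)  # torch.Size([8, 3])
```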
|
arxiv
|
@article{ashkenazi2024towards,
title={Towards Croppable Implicit Neural Representations},
author={Maor Ashkenazi, Eran Treister},
journal={arXiv preprint arXiv:2409.19472},
year={2024},
archivePrefix={arXiv},
eprint={2409.19472},
primaryClass={cs.CV cs.LG}
}
|
ashkenazi2024towards
|
arxiv-663167
|
2409.19474
|
FairPIVARA: Reducing and Assessing Biases in CLIP-Based Multimodal Models
|
<|reference_start|>FairPIVARA: Reducing and Assessing Biases in CLIP-Based Multimodal Models: Despite significant advancements and pervasive use of vision-language models, a paucity of studies has addressed their ethical implications. These models typically require extensive training data, often from hastily reviewed text and image datasets, leading to highly imbalanced datasets and ethical concerns. Additionally, models initially trained in English are frequently fine-tuned for other languages, such as the CLIP model, which can be expanded with more data to enhance capabilities but can add new biases. The CAPIVARA, a CLIP-based model adapted to Portuguese, has shown strong performance in zero-shot tasks. In this paper, we evaluate four different types of discriminatory practices within visual-language models and introduce FairPIVARA, a method to reduce them by removing the most affected dimensions of feature embeddings. The application of FairPIVARA has led to a significant reduction of up to 98% in observed biases while promoting a more balanced word distribution within the model. Our model and code are available at: https://github.com/hiaac-nlp/FairPIVARA.<|reference_end|>
|
arxiv
|
@article{moreira2024fairpivara:,
title={FairPIVARA: Reducing and Assessing Biases in CLIP-Based Multimodal
Models},
author={Diego A. B. Moreira, Alef Iury Ferreira, Jhessica Silva, Gabriel
Oliveira dos Santos, Luiz Pereira, João Medrado Gondim, Gustavo Bonil,
Helena Maia, Nádia da Silva, Simone Tiemi Hashiguti, Jefersson A. dos
Santos, Helio Pedrini, Sandra Avila},
journal={arXiv preprint arXiv:2409.19474},
year={2024},
archivePrefix={arXiv},
eprint={2409.19474},
primaryClass={cs.CV cs.AI}
}
|
moreira2024fairpivara:
|
arxiv-663168
|
2409.19476
|
Overriding Safety protections of Open-source Models
|
<|reference_start|>Overriding Safety protections of Open-source Models: LLMs (Large Language Models) are now widely adopted as tools for solving problems across various domains and tasks. Because these models are susceptible to producing harmful or toxic results and to inference-time adversarial attacks, they undergo safety-alignment training and red-teaming to put safety guardrails in place. To use these models, fine-tuning is usually performed to align the model to the desired tasks; this can make the model more aligned, but it can also make it more susceptible to producing unsafe responses if it is fine-tuned with harmful data. In this paper, we study how much impact the introduction of harmful data during fine-tuning can have, and whether it can override the safety protections of these models. Conversely, we also explore whether fine-tuning a model on safety data makes it produce safer responses. We further explore whether fine-tuning the model on harmful data makes it less helpful or less trustworthy because increased model uncertainty leads to knowledge drift. Our extensive experimental results show that the safety protections of an open-source model can be overridden when it is fine-tuned with harmful data, as observed by the ASR increasing by 35% compared to the base model's ASR. Moreover, fine-tuning a model on harmful data made the resulting model highly uncertain, with large knowledge drift and reduced truthfulness in its responses. Furthermore, for the safety fine-tuned model, the ASR decreases by 51.68% compared to the base model, and the safe model also shows a minor drop in uncertainty and truthfulness compared to the base model. This paper's code is available at: https://github.com/techsachinkr/Overriding_Model_Safety_Protections<|reference_end|>
|
arxiv
|
@article{kumar2024overriding,
title={Overriding Safety protections of Open-source Models},
author={Sachin Kumar},
journal={arXiv preprint arXiv:2409.19476},
year={2024},
archivePrefix={arXiv},
eprint={2409.19476},
primaryClass={cs.CL cs.CR}
}
|
kumar2024overriding
|
arxiv-663169
|
2409.19477
|
Hedging and Approximate Truthfulness in Traditional Forecasting Competitions
|
<|reference_start|>Hedging and Approximate Truthfulness in Traditional Forecasting Competitions: In forecasting competitions, the traditional mechanism scores the predictions of each contestant against the outcome of each event, and the contestant with the highest total score wins. While it is well-known that this traditional mechanism can suffer from incentive issues, it is folklore that contestants will still be roughly truthful as the number of events grows. Yet thus far the literature lacks a formal analysis of this traditional mechanism. This paper gives the first such analysis. We first demonstrate that the ''long-run truthfulness'' folklore is false: even for arbitrary numbers of events, the best forecaster can have an incentive to hedge, reporting more moderate beliefs to increase their win probability. On the positive side, however, we show that two contestants will be approximately truthful when they have sufficient uncertainty over the relative quality of their opponent and the outcomes of the events, a case which may arise in practice.<|reference_end|>
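The hedging incentive in the first result can be reproduced with a small Monte Carlo simulation. The sketch below is an illustration, not the paper's analysis: the better-informed forecaster shades her reports toward her opponent's, giving up a little expected Brier score in exchange for a smaller variance in the score difference, which raises her probability of winning on total score. All parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, m, lam, noise = 20_000, 10, 0.3, 0.2

def win_rate(hedge: bool) -> float:
    wins = 0
    for _ in range(trials):
        p = rng.uniform(size=m)                         # true event probabilities
        y = (rng.uniform(size=m) < p).astype(float)     # realized outcomes
        q = np.clip(p + rng.normal(0, noise, m), 0, 1)  # opponent's noisy reports
        a = (1 - lam) * p + lam * q if hedge else p     # our (possibly hedged) reports
        if np.sum((a - y) ** 2) < np.sum((q - y) ** 2): # lower total Brier wins
            wins += 1
    return wins / trials

print("truthful win rate:", win_rate(False))
print("hedged   win rate:", win_rate(True))  # typically higher than truthful
```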
|
arxiv
|
@article{monroe2024hedging,
title={Hedging and Approximate Truthfulness in Traditional Forecasting
Competitions},
author={Mary Monroe, Anish Thilagar, Melody Hsu, Rafael Frongillo},
journal={arXiv preprint arXiv:2409.19477},
year={2024},
archivePrefix={arXiv},
eprint={2409.19477},
primaryClass={cs.LG cs.GT}
}
|
monroe2024hedging
|
arxiv-663170
|
2409.19478
|
RTL2M$\mu$PATH: Multi-$\mu$PATH Synthesis with Applications to Hardware Security Verification
|
<|reference_start|>RTL2M$\mu$PATH: Multi-$\mu$PATH Synthesis with Applications to Hardware Security Verification: The Check tools automate formal memory consistency model and security verification of processors by analyzing abstract models of microarchitectures, called $\mu$SPEC models. Despite the efficacy of this approach, a verification gap between $\mu$SPEC models, which must be manually written, and RTL limits the Check tools' broad adoption. Our prior work, called RTL2$\mu$SPEC, narrows this gap by automatically synthesizing formally verified $\mu$SPEC models from SystemVerilog implementations of simple processors. But, RTL2$\mu$SPEC assumes input designs where an instruction (e.g., a load) cannot exhibit more than one microarchitectural execution path ($\mu$PATH, e.g., a cache hit or miss path) -- its single-execution-path assumption. In this paper, we first propose an automated approach and tool, called RTL2M$\mu$PATH, that resolves RTL2$\mu$SPEC's single-execution-path assumption. Given a SystemVerilog processor design, instruction encodings, and modest design metadata, RTL2M$\mu$PATH finds a complete set of formally verified $\mu$PATHs for each instruction. Next, we make an important observation: an instruction that can exhibit more than one $\mu$PATH strongly indicates the presence of a microarchitectural side channel in the input design. Based on this observation, we then propose an automated approach and tool, called SynthLC, that extends RTL2M$\mu$PATH with a symbolic information flow analysis to support synthesizing a variety of formally verified leakage contracts from SystemVerilog processor designs. Leakage contracts are foundational to state-of-the-art defenses against hardware side-channel attacks. SynthLC is the first automated methodology for formally verifying hardware adherence to them.<|reference_end|>
|
arxiv
|
@article{hsiao2024rtl2m$\mu$path:,
title={RTL2M$\mu$PATH: Multi-$\mu$PATH Synthesis with Applications to Hardware
Security Verification},
author={Yao Hsiao, Nikos Nikoleris, Artem Khyzha, Dominic P. Mulligan, Gustavo
Petri, Christopher W. Fletcher, and Caroline Trippel},
journal={arXiv preprint arXiv:2409.19478},
year={2024},
archivePrefix={arXiv},
eprint={2409.19478},
primaryClass={cs.CR cs.AR}
}
|
hsiao2024rtl2m$\mu$path:
|
arxiv-663171
|
2409.19479
|
Spatial Reasoning and Planning for Deep Embodied Agents
|
<|reference_start|>Spatial Reasoning and Planning for Deep Embodied Agents: Humans can perform complex tasks with long-term objectives by planning, reasoning, and forecasting outcomes of actions. For embodied agents to achieve similar capabilities, they must gain knowledge of the environment transferable to novel scenarios with a limited budget of additional trial and error. Learning-based approaches, such as deep RL, can discover and take advantage of inherent regularities and characteristics of the application domain from data, and continuously improve their performance, albeit at the cost of large amounts of training data. This thesis explores the development of data-driven techniques for spatial reasoning and planning tasks, focusing on enhancing learning efficiency, interpretability, and transferability across novel scenarios. Four key contributions are made. 1) CALVIN, a differential planner that learns interpretable models of the world for long-term planning. It successfully navigated partially observable 3D environments, such as mazes and indoor rooms, by learning the rewards and state transitions from expert demonstrations. 2) SOAP, an RL algorithm that discovers options in an unsupervised manner for long-horizon tasks. Options segment a task into subtasks and enable consistent execution of each subtask. SOAP showed robust performance on history-conditional corridor tasks as well as classical benchmarks such as Atari. 3) LangProp, a code optimisation framework using LLMs to solve embodied agent problems that require reasoning by treating code as learnable policies. The framework successfully generated interpretable code with comparable or superior performance to human-written experts in the CARLA autonomous driving benchmark. 4) Voggite, an embodied agent with a vision-to-action transformer backend that solves complex tasks in Minecraft. It achieved third place in the MineRL BASALT Competition by identifying action triggers to segment tasks into multiple stages.<|reference_end|>
|
arxiv
|
@article{ishida2024spatial,
title={Spatial Reasoning and Planning for Deep Embodied Agents},
author={Shu Ishida},
journal={arXiv preprint arXiv:2409.19479},
year={2024},
doi={10.5287/ora-0nexmdyo0},
archivePrefix={arXiv},
eprint={2409.19479},
primaryClass={cs.LG cs.AI}
}
|
ishida2024spatial
|
arxiv-663172
|
2409.19481
|
The Efficient Variable Time-stepping DLN Algorithms for the Allen-Cahn Model
|
<|reference_start|>The Efficient Variable Time-stepping DLN Algorithms for the Allen-Cahn Model: We consider a family of variable time-stepping Dahlquist-Liniger-Nevanlinna (DLN) schemes, which are unconditionally nonlinearly stable and second-order accurate, for the Allen-Cahn equation. Finite element methods are used for the spatial discretization. For the non-linear term, we combine the DLN scheme with two efficient temporal algorithms: a partially implicit modified algorithm and a scalar auxiliary variable algorithm. For both approaches, we prove the unconditional, long-term stability of the model energy under arbitrary time-step sequences. Moreover, we provide rigorous error analysis for the partially implicit modified algorithm with variable time-stepping. Efficient time-adaptive algorithms based on these schemes are also proposed. Several one- and two-dimensional numerical tests are presented to verify the properties of the proposed time-adaptive DLN methods.<|reference_end|>
|
arxiv
|
@article{chen2024the,
title={The Efficient Variable Time-stepping DLN Algorithms for the Allen-Cahn
Model},
author={YiMing Chen, Dianlun Luo, Wenlong Pei, and Yulong Xing},
journal={arXiv preprint arXiv:2409.19481},
year={2024},
archivePrefix={arXiv},
eprint={2409.19481},
primaryClass={math.NA cs.NA}
}
|
chen2024the
|
arxiv-663173
|
2409.19483
|
MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation
|
<|reference_start|>MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation: Segmentation of anatomical structures and pathological regions in medical images is essential for modern clinical diagnosis, disease research, and treatment planning. While significant advancements have been made in deep learning-based segmentation techniques, many of these methods still suffer from limitations in data efficiency, generalizability, and interactivity. As a result, developing precise segmentation methods that require fewer labeled datasets remains a critical challenge in medical image analysis. Recently, the introduction of foundation models like CLIP and Segment-Anything-Model (SAM), with robust cross-domain representations, has paved the way for interactive and universal image segmentation. However, further exploration of these models for data-efficient segmentation in medical imaging is still needed and highly relevant. In this paper, we introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans using text prompts, in both zero-shot and weakly supervised settings. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss, and leveraging the Multi-modal Information Bottleneck (M2IB) to create visual prompts for generating segmentation masks from SAM in the zero-shot setting. We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further. Extensive testing across four diverse segmentation tasks and medical imaging modalities (breast tumor ultrasound, brain tumor MRI, lung X-ray, and lung CT) demonstrates the high accuracy of our proposed framework. Our code is available at https://github.com/HealthX-Lab/MedCLIP-SAMv2.<|reference_end|>
|
arxiv
|
@article{koleilat2024medclip-samv2:,
title={MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation},
author={Taha Koleilat, Hojat Asgariandehkordi, Hassan Rivaz, Yiming Xiao},
journal={arXiv preprint arXiv:2409.19483},
year={2024},
archivePrefix={arXiv},
eprint={2409.19483},
primaryClass={cs.CV cs.CL}
}
|
koleilat2024medclip-samv2:
|
arxiv-663174
|
2409.19487
|
HealthQ: Unveiling Questioning Capabilities of LLM Chains in Healthcare Conversations
|
<|reference_start|>HealthQ: Unveiling Questioning Capabilities of LLM Chains in Healthcare Conversations: In digital healthcare, large language models (LLMs) have primarily been utilized to enhance question-answering capabilities and improve patient interactions. However, effective patient care necessitates LLM chains that can actively gather information by posing relevant questions. This paper presents HealthQ, a novel framework designed to evaluate the questioning capabilities of LLM healthcare chains. We implemented several LLM chains, including Retrieval-Augmented Generation (RAG), Chain of Thought (CoT), and reflective chains, and introduced an LLM judge to assess the relevance and informativeness of the generated questions. To validate HealthQ, we employed traditional Natural Language Processing (NLP) metrics such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Named Entity Recognition (NER)-based set comparison, and constructed two custom datasets from public medical note datasets, ChatDoctor and MTS-Dialog. Our contributions are threefold: we provide the first comprehensive study on the questioning capabilities of LLMs in healthcare conversations, develop a novel dataset generation pipeline, and propose a detailed evaluation methodology.<|reference_end|>
|
arxiv
|
@article{wang2024healthq:,
title={HealthQ: Unveiling Questioning Capabilities of LLM Chains in Healthcare
Conversations},
author={Ziyu Wang, Hao Li, Di Huang, Amir M. Rahmani},
journal={arXiv preprint arXiv:2409.19487},
year={2024},
archivePrefix={arXiv},
eprint={2409.19487},
primaryClass={cs.CL cs.LG}
}
|
wang2024healthq:
|
arxiv-663175
|
2409.19488
|
A House United Within Itself: SLO-Awareness for On-Premises Containerized ML Inference Clusters via Faro
|
<|reference_start|>A House United Within Itself: SLO-Awareness for On-Premises Containerized ML Inference Clusters via Faro: This paper tackles the challenge of running multiple ML inference jobs (models) under time-varying workloads, on a constrained on-premises production cluster. Our system Faro takes in latency Service Level Objectives (SLOs) for each job, auto-distills them into utility functions, "sloppifies" these utility functions to make them amenable to mathematical optimization, automatically predicts workload via probabilistic prediction, and dynamically makes implicit cross-job resource allocations, in order to satisfy cluster-wide objectives, e.g., total utility, fairness, and other hybrid variants. A major challenge Faro tackles is that using precise utilities and high-fidelity predictors, can be too slow (and in a sense too precise!) for the fast adaptation we require. Faro's solution is to "sloppify" (relax) its multiple design components to achieve fast adaptation without overly degrading solution quality. Faro is implemented in a stack consisting of Ray Serve running atop a Kubernetes cluster. Trace-driven cluster deployments show that Faro achieves 2.3$\times$-23$\times$ lower SLO violations compared to state-of-the-art systems.<|reference_end|>
|
arxiv
|
@article{jeon2024a,
title={A House United Within Itself: SLO-Awareness for On-Premises
Containerized ML Inference Clusters via Faro},
author={Beomyeol Jeon, Chen Wang, Diana Arroyo, Alaa Youssef, Indranil Gupta},
journal={arXiv preprint arXiv:2409.19488},
year={2024},
doi={10.1145/3689031.3696071},
archivePrefix={arXiv},
eprint={2409.19488},
primaryClass={cs.DC}
}
|
jeon2024a
|
arxiv-663176
|
2409.19490
|
KineDepth: Utilizing Robot Kinematics for Online Metric Depth Estimation
|
<|reference_start|>KineDepth: Utilizing Robot Kinematics for Online Metric Depth Estimation: Depth perception is essential for a robot's spatial and geometric understanding of its environment, with many tasks traditionally relying on hardware-based depth sensors like RGB-D or stereo cameras. However, these sensors face practical limitations, including issues with transparent and reflective objects, high costs, calibration complexity, spatial and energy constraints, and increased failure rates in compound systems. While monocular depth estimation methods offer a cost-effective and simpler alternative, their adoption in robotics is limited due to their output of relative rather than metric depth, which is crucial for robotics applications. In this paper, we propose a method that utilizes a single calibrated camera, enabling the robot to act as a ``measuring stick" to convert relative depth estimates into metric depth in real-time as tasks are performed. Our approach employs an LSTM-based metric depth regressor, trained online and refined through probabilistic filtering, to accurately restore the metric depth across the monocular depth map, particularly in areas proximal to the robot's motion. Experiments with real robots demonstrate that our method significantly outperforms current state-of-the-art monocular metric depth estimation techniques, achieving a 22.1% reduction in depth error and a 52% increase in success rate for a downstream task.<|reference_end|>
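The paper's relative-to-metric conversion uses an LSTM-based regressor refined by probabilistic filtering; as a plainly simpler stand-in for the same step, the sketch below fits a global scale and shift aligning a relative depth map to a few metric anchors (for instance, points whose depth is known from the robot's kinematics). Names and numbers are illustrative.

```python
import numpy as np

def fit_scale_shift(d_rel, d_metric):
    """Least-squares (s, t) minimizing ||s * d_rel + t - d_metric||."""
    A = np.stack([d_rel, np.ones_like(d_rel)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, d_metric, rcond=None)
    return s, t

# Toy check: relative depth is metric depth under an unknown affine distortion.
rng = np.random.default_rng(1)
metric = rng.uniform(0.5, 3.0, size=20)   # anchor depths in meters
rel = (metric - 0.7) / 2.5                # unknown scale/shift distortion
s, t = fit_scale_shift(rel, metric)
print(np.allclose(s * rel + t, metric))   # True: anchors recovered
```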
|
arxiv
|
@article{atar2024kinedepth:,
title={KineDepth: Utilizing Robot Kinematics for Online Metric Depth Estimation},
author={Soofiyan Atar, Yuheng Zhi, Florian Richter, and Michael Yip},
journal={arXiv preprint arXiv:2409.19490},
year={2024},
archivePrefix={arXiv},
eprint={2409.19490},
primaryClass={cs.RO cs.CV}
}
|
atar2024kinedepth:
|
arxiv-663177
|
2409.19492
|
MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models
|
<|reference_start|>MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models: The remarkable capabilities of large language models (LLMs) in language understanding and generation have not rendered them immune to hallucinations. LLMs can still generate plausible-sounding but factually incorrect or fabricated information. As LLM-empowered chatbots become popular, laypeople may frequently ask health-related queries and risk falling victim to these LLM hallucinations, resulting in various societal and healthcare implications. In this work, we conduct a pioneering study of hallucinations in LLM-generated responses to real-world healthcare queries from patients. We propose MedHalu, a carefully crafted first-of-its-kind medical hallucination dataset with a diverse range of health-related topics and the corresponding hallucinated responses from LLMs with labeled hallucination types and hallucinated text spans. We also introduce the MedHaluDetect framework to evaluate the capabilities of various LLMs in detecting hallucinations. We further employ three groups of evaluators -- medical experts, LLMs, and laypeople -- to study who is more vulnerable to these medical hallucinations. We find that LLMs are much worse than the experts. They also perform no better than laypeople, and in a few cases even worse, at detecting hallucinations. To fill this gap, we propose an expert-in-the-loop approach to improve hallucination detection through LLMs by infusing expert reasoning. We observe significant performance gains for all the LLMs, with an average macro-F1 improvement of 6.3 percentage points for GPT-4.<|reference_end|>
|
arxiv
|
@article{agarwal2024medhalu:,
title={MedHalu: Hallucinations in Responses to Healthcare Queries by Large
Language Models},
author={Vibhor Agarwal, Yiqiao Jin, Mohit Chandra, Munmun De Choudhury, Srijan
Kumar, Nishanth Sastry},
journal={arXiv preprint arXiv:2409.19492},
year={2024},
archivePrefix={arXiv},
eprint={2409.19492},
primaryClass={cs.CL cs.AI}
}
|
agarwal2024medhalu:
|
arxiv-663178
|
2409.19494
|
OptiGrasp: Optimized Grasp Pose Detection Using RGB Images for Warehouse Picking Robots
|
<|reference_start|>OptiGrasp: Optimized Grasp Pose Detection Using RGB Images for Warehouse Picking Robots: In warehouse environments, robots require robust picking capabilities to manage a wide variety of objects. Effective deployment demands minimal hardware, strong generalization to new products, and resilience in diverse settings. Current methods often rely on depth sensors for structural information, which suffer from high costs, complex setups, and technical limitations. Inspired by recent advancements in computer vision, we propose an innovative approach that leverages foundation models to enhance suction grasping using only RGB images. Trained solely on a synthetic dataset, our method generalizes its grasp prediction capabilities to real-world robots and a diverse range of novel objects not included in the training set. Our network achieves an 82.3\% success rate in real-world applications. The project website with code and data will be available at http://optigrasp.github.io.<|reference_end|>
|
arxiv
|
@article{atar2024optigrasp:,
title={OptiGrasp: Optimized Grasp Pose Detection Using RGB Images for Warehouse
Picking Robots},
author={Soofiyan Atar, Yi Li, Markus Grotz, Michael Wolf, Dieter Fox, Joshua
Smith},
journal={arXiv preprint arXiv:2409.19494},
year={2024},
archivePrefix={arXiv},
eprint={2409.19494},
primaryClass={cs.RO cs.CV}
}
|
atar2024optigrasp:
|
arxiv-663179
|
2409.19499
|
Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface
|
<|reference_start|>Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface: Collecting real-world manipulation trajectory data involving robotic arms is essential for developing general-purpose action policies in robotic manipulation, yet such data remains scarce. Existing methods face limitations such as high costs, labor intensity, hardware dependencies, and complex setup requirements involving SLAM algorithms. In this work, we introduce Fast-UMI, an interface-mediated manipulation system comprising two key components: a handheld device operated by humans for data collection and a robot-mounted device used during policy inference. Our approach employs a decoupled design compatible with a wide range of grippers while maintaining consistent observation perspectives, allowing models trained on handheld-collected data to be directly applied to real robots. By directly obtaining the end-effector pose using existing commercial hardware products, we eliminate the need for complex SLAM deployment and calibration, streamlining data processing. Fast-UMI provides supporting software tools for efficient robot learning data collection and conversion, facilitating rapid, plug-and-play functionality. This system offers an efficient and user-friendly tool for robotic learning data acquisition.<|reference_end|>
|
arxiv
|
@article{wu2024fast-umi:,
title={Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation
Interface},
author={Ziniu Wu, Tianyu Wang, Zhaxizhuoma, Chuyue Guan, Zhongjie Jia, Shuai
Liang, Haoming Song, Delin Qu, Dong Wang, Zhigang Wang, Nieqing Cao, Yan
Ding, Bin Zhao, Xuelong Li},
journal={arXiv preprint arXiv:2409.19499},
year={2024},
archivePrefix={arXiv},
eprint={2409.19499},
primaryClass={cs.RO}
}
|
wu2024fast-umi:
|
arxiv-663180
|
2409.19501
|
Learning Frame-Wise Emotion Intensity for Audio-Driven Talking-Head Generation
|
<|reference_start|>Learning Frame-Wise Emotion Intensity for Audio-Driven Talking-Head Generation: Human emotional expression is inherently dynamic, complex, and fluid, characterized by smooth transitions in intensity throughout verbal communication. However, the modeling of such intensity fluctuations has been largely overlooked by previous audio-driven talking-head generation methods, which often results in static emotional outputs. In this paper, we explore how emotion intensity fluctuates during speech, proposing a method for capturing and generating these subtle shifts for talking-head generation. Specifically, we develop a talking-head framework that is capable of generating a variety of emotions with precise control over intensity levels. This is achieved by learning a continuous emotion latent space, where emotion types are encoded within latent orientations and emotion intensity is reflected in latent norms. In addition, to capture the dynamic intensity fluctuations, we adopt an audio-to-intensity predictor by considering the speaking tone that reflects the intensity. The training signals for this predictor are obtained through our emotion-agnostic intensity pseudo-labeling method without the need of frame-wise intensity labeling. Extensive experiments and analyses validate the effectiveness of our proposed method in accurately capturing and reproducing emotion intensity fluctuations in talking-head generation, thereby significantly enhancing the expressiveness and realism of the generated outputs.<|reference_end|>
|
arxiv
|
@article{xu2024learning,
title={Learning Frame-Wise Emotion Intensity for Audio-Driven Talking-Head
Generation},
author={Jingyi Xu and Hieu Le and Zhixin Shu and Yang Wang and Yi-Hsuan Tsai
and Dimitris Samaras},
journal={arXiv preprint arXiv:2409.19501},
year={2024},
archivePrefix={arXiv},
eprint={2409.19501},
primaryClass={cs.SD cs.AI eess.AS}
}
|
xu2024learning
|
arxiv-663181
|
2409.19502
|
Numerical approximation of the insitu combustion model using the nonlinear mixed complementarity method
|
<|reference_start|>Numerical approximation of the insitu combustion model using the nonlinear mixed complementarity method: In this work, we study a numerical method for approximating the exact solution of an in-situ combustion model using the nonlinear mixed complementarity method, a variation of Newton's method for solving nonlinear systems based on an implicit finite-difference scheme and a nonlinear mixed complementarity algorithm, FDA-MNCP. The method has the advantage of providing global convergence, in contrast to the finite-difference method and Newton's method, which only provide local convergence. The theory is applied to the in-situ combustion model, which can be rewritten in mixed complementarity form, and we also compare the results with the FDA-NCP method.<|reference_end|>
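For readers unfamiliar with the machinery, the sketch below shows a generic semismooth Newton iteration for a plain nonlinear complementarity problem via the Fischer-Burmeister reformulation, with a finite-difference Jacobian in the spirit of the FDA-* family. It is a textbook stand-in, not the paper's FDA-MNCP algorithm, and the example problem is hypothetical.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = 0 iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def ncp_newton(F, x0, tol=1e-8, max_iter=50, h=1e-7):
    """Solve x >= 0, F(x) >= 0, x . F(x) = 0 via Newton on Phi(x) = fb(x, F(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        phi = fb(x, F(x))
        if np.linalg.norm(phi) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):          # forward-difference Jacobian, column j
            e = np.zeros(x.size); e[j] = h
            J[:, j] = (fb(x + e, F(x + e)) - phi) / h
        x = x - np.linalg.solve(J, phi)
    return x

# Linear complementarity example F(x) = Mx + q with interior solution x = (1, 1).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])
print(ncp_newton(lambda x: M @ x + q, [2.0, 2.0]))  # ~[1. 1.]
```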
|
arxiv
|
@article{sangay2024numerical,
title={Numerical approximation of the insitu combustion model using the
nonlinear mixed complementarity method},
author={Julio Cesar Agustin Sangay, Alexis Rodriguez Carranza, George J.
Bautista, Juan Carlos Ponte Bejarano, Jose Luis Ponte Bejarano, and Eddy
Cristiam Miranda Ramos},
journal={arXiv preprint arXiv:2409.19502},
year={2024},
archivePrefix={arXiv},
eprint={2409.19502},
primaryClass={math.NA cs.NA}
}
|
sangay2024numerical
|
arxiv-663182
|
2409.19505
|
The Nature of NLP: Analyzing Contributions in NLP Papers
|
<|reference_start|>The Nature of NLP: Analyzing Contributions in NLP Papers: Natural Language Processing (NLP) is a dynamic, interdisciplinary field that integrates intellectual traditions from computer science, linguistics, social science, and more. Despite its established presence, the definition of what constitutes NLP research remains debated. In this work, we quantitatively investigate what constitutes NLP by examining research papers. For this purpose, we propose a taxonomy and introduce NLPContributions, a dataset of nearly $2k$ research paper abstracts, expertly annotated to identify scientific contributions and classify their types according to this taxonomy. We also propose a novel task to automatically identify these elements, for which we train a strong baseline on our dataset. We present experimental results from this task and apply our model to $\sim$$29k$ NLP research papers to analyze their contributions, aiding in the understanding of the nature of NLP research. Our findings reveal a rising involvement of machine learning in NLP since the early nineties, alongside a declining focus on adding knowledge about language or people; post-2020, however, there has been a resurgence of focus on language and people. We hope this work will spark discussions on our community norms and inspire efforts to consciously shape the future.<|reference_end|>
|
arxiv
|
@article{pramanick2024the,
title={The Nature of NLP: Analyzing Contributions in NLP Papers},
author={Aniket Pramanick, Yufang Hou, Saif M. Mohammad, Iryna Gurevych},
journal={arXiv preprint arXiv:2409.19505},
year={2024},
archivePrefix={arXiv},
eprint={2409.19505},
primaryClass={cs.CL}
}
|
pramanick2024the
|
arxiv-663183
|
2409.19506
|
IWN: Image Watermarking Based on Idempotency
|
<|reference_start|>IWN: Image Watermarking Based on Idempotency: In the expanding field of digital media, maintaining the strength and integrity of watermarking technology is becoming increasingly challenging. This paper, inspired by the Idempotent Generative Network (IGN), explores the prospects of introducing idempotency into image watermark processing and proposes an innovative neural network model - the Idempotent Watermarking Network (IWN). The proposed model, which focuses on enhancing the recovery quality of color image watermarks, leverages idempotency to ensure superior image reversibility. This feature ensures that, even if color image watermarks are attacked or damaged, they can be effectively projected and mapped back to their original state. As a result, the extracted watermarks are of markedly higher quality. The IWN model achieves a balance between embedding capacity and robustness, alleviating to some extent the inherent contradiction between these two factors in traditional watermarking techniques and steganography methods.<|reference_end|>
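The idempotency constraint at the heart of IWN reduces to a simple training penalty. The PyTorch fragment below is an illustrative sketch with a toy network, not the IWN architecture: f is penalized unless applying it twice matches applying it once, i.e., f(f(x)) = f(x).

```python
import torch
import torch.nn as nn

# Toy image-to-image network standing in for the watermarking model.
f = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))

x = torch.rand(4, 3, 32, 32)            # a batch of color images
fx = f(x)
loss = torch.mean((f(fx) - fx) ** 2)    # zero iff f(f(x)) = f(x) on this batch
loss.backward()                          # gradients push f toward idempotency
```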
|
arxiv
|
@article{deng2024iwn:,
title={IWN: Image Watermarking Based on Idempotency},
author={Kaixin Deng},
journal={arXiv preprint arXiv:2409.19506},
year={2024},
archivePrefix={arXiv},
eprint={2409.19506},
primaryClass={cs.MM cs.CV}
}
|
deng2024iwn:
|
arxiv-663184
|
2409.19507
|
A Critical Look at Meta-evaluating Summarisation Evaluation Metrics
|
<|reference_start|>A Critical Look at Meta-evaluating Summarisation Evaluation Metrics: Effective summarisation evaluation metrics enable researchers and practitioners to compare different summarisation systems efficiently. Estimating the effectiveness of an automatic evaluation metric, termed meta-evaluation, is a critically important research question. In this position paper, we review recent meta-evaluation practices for summarisation evaluation metrics and find that (1) evaluation metrics are primarily meta-evaluated on datasets consisting of examples from news summarisation datasets, and (2) there has been a noticeable shift in research focus towards evaluating the faithfulness of generated summaries. We argue that the time is ripe to build more diverse benchmarks that enable the development of more robust evaluation metrics and analyze the generalization ability of existing evaluation metrics. In addition, we call for research focusing on user-centric quality dimensions that consider the generated summary's communicative goal and the role of summarisation in the workflow.<|reference_end|>
|
arxiv
|
@article{dai2024a,
title={A Critical Look at Meta-evaluating Summarisation Evaluation Metrics},
author={Xiang Dai and Sarvnaz Karimi and Biaoyan Fang},
journal={arXiv preprint arXiv:2409.19507},
year={2024},
archivePrefix={arXiv},
eprint={2409.19507},
primaryClass={cs.CL}
}
|
dai2024a
|
arxiv-663185
|
2409.19508
|
Transforming Scholarly Landscapes: Influence of Large Language Models on Academic Fields beyond Computer Science
|
<|reference_start|>Transforming Scholarly Landscapes: Influence of Large Language Models on Academic Fields beyond Computer Science: Large Language Models (LLMs) have ushered in a transformative era in Natural Language Processing (NLP), reshaping research and extending NLP's influence to other fields of study. However, there is little to no work examining the degree to which LLMs influence other research fields. This work empirically and systematically examines the influence and use of LLMs in fields beyond NLP. We curate $106$ LLMs and analyze $\sim$$148k$ papers citing LLMs to quantify their influence and reveal trends in their usage patterns. Our analysis reveals not only the increasing prevalence of LLMs in non-CS fields but also the disparities in their usage, with some fields utilizing them more frequently than others since 2018, notably Linguistics and Engineering together accounting for $\sim$$45\%$ of LLM citations. Our findings further indicate that most of these fields predominantly employ task-agnostic LLMs, proficient in zero or few-shot learning without requiring further fine-tuning, to address their domain-specific problems. This study sheds light on the cross-disciplinary impact of NLP through LLMs, providing a better understanding of the opportunities and challenges.<|reference_end|>
|
arxiv
|
@article{pramanick2024transforming,
title={Transforming Scholarly Landscapes: Influence of Large Language Models on
Academic Fields beyond Computer Science},
author={Aniket Pramanick, Yufang Hou, Saif M. Mohammad, Iryna Gurevych},
journal={arXiv preprint arXiv:2409.19508},
year={2024},
archivePrefix={arXiv},
eprint={2409.19508},
primaryClass={cs.CL}
}
|
pramanick2024transforming
|
arxiv-663186
|
2409.19509
|
Heterogeneity-Aware Resource Allocation and Topology Design for Hierarchical Federated Edge Learning
|
<|reference_start|>Heterogeneity-Aware Resource Allocation and Topology Design for Hierarchical Federated Edge Learning: Federated Learning (FL) provides a privacy-preserving framework for training machine learning models on mobile edge devices. Traditional FL algorithms, e.g., FedAvg, impose a heavy communication workload on these devices. To mitigate this issue, Hierarchical Federated Edge Learning (HFEL) has been proposed, leveraging edge servers as intermediaries for model aggregation. Despite its effectiveness, HFEL encounters challenges such as a slow convergence rate and high resource consumption, particularly in the presence of system and data heterogeneity. However, existing works are mainly focused on improving training efficiency for traditional FL, leaving the efficiency of HFEL largely unexplored. In this paper, we consider a two-tier HFEL system, where edge devices are connected to edge servers and edge servers are interconnected through peer-to-peer (P2P) edge backhauls. Our goal is to enhance the training efficiency of the HFEL system through strategic resource allocation and topology design. Specifically, we formulate an optimization problem to minimize the total training latency by allocating the computation and communication resources, as well as adjusting the P2P connections. To ensure convergence under dynamic topologies, we analyze the convergence error bound and introduce a model consensus constraint into the optimization problem. The proposed problem is then decomposed into several subproblems, enabling us to solve them alternately online. Our method facilitates the efficient implementation of large-scale FL at edge networks under data and system heterogeneity. Comprehensive experimental evaluation on benchmark datasets validates the effectiveness of the proposed method, demonstrating significant reductions in training latency while maintaining model accuracy compared to various baselines.<|reference_end|>
|
arxiv
|
@article{gao2024heterogeneity-aware,
title={Heterogeneity-Aware Resource Allocation and Topology Design for
Hierarchical Federated Edge Learning},
author={Zhidong Gao, Yu Zhang, Yanmin Gong, Yuanxiong Guo},
journal={arXiv preprint arXiv:2409.19509},
year={2024},
archivePrefix={arXiv},
eprint={2409.19509},
primaryClass={cs.LG cs.AI cs.DC}
}
|
gao2024heterogeneity-aware
|
arxiv-663187
|
2409.19510
|
CoT-ST: Enhancing LLM-based Speech Translation with Multimodal Chain-of-Thought
|
<|reference_start|>CoT-ST: Enhancing LLM-based Speech Translation with Multimodal Chain-of-Thought: Speech Language Models (SLMs) have demonstrated impressive performance on speech translation tasks. However, existing research primarily focuses on direct instruction fine-tuning and often overlooks the inherent reasoning capabilities of SLMs. In this paper, we introduce a three-stage training framework designed to activate the chain-of-thought (CoT) capabilities of SLMs. We propose CoT-ST, a speech translation model that utilizes multimodal CoT to decompose speech translation into sequential steps of speech recognition and translation. We validated the effectiveness of our method on two datasets: the CoVoST-2 dataset and MuST-C dataset. The experimental results demonstrate that CoT-ST outperforms previous state-of-the-art methods, achieving higher BLEU scores (CoVoST-2 en-ja: 30.5->30.8, en-zh: 45.2->47.7, MuST-C en-zh: 19.6->21.2). This work is open sourced at https://github.com/X-LANCE/SLAM-LLM/tree/main/examples/st_covost2 .<|reference_end|>
|
arxiv
|
@article{du2024cot-st:,
title={CoT-ST: Enhancing LLM-based Speech Translation with Multimodal
Chain-of-Thought},
author={Yexing Du, Ziyang Ma, Yifan Yang, Keqi Deng, Xie Chen, Bo Yang, Yang
Xiang, Ming Liu, Bing Qin},
journal={arXiv preprint arXiv:2409.19510},
year={2024},
archivePrefix={arXiv},
eprint={2409.19510},
primaryClass={cs.CL}
}
|
du2024cot-st:
|
arxiv-663188
|
2409.19513
|
One Node Per User: Node-Level Federated Learning for Graph Neural Networks
|
<|reference_start|>One Node Per User: Node-Level Federated Learning for Graph Neural Networks: Training Graph Neural Networks (GNNs) often necessitates gathering raw user data on a central server, which raises significant privacy concerns. Federated learning emerges as a solution, enabling collaborative model training without users directly sharing their raw data. However, integrating federated learning with GNNs presents unique challenges, especially when a client represents a graph node and holds merely a single feature vector. In this paper, we propose a novel framework for node-level federated graph learning. Specifically, we decouple the message-passing and feature vector transformation processes of the first GNN layer, allowing them to be executed separately on the user devices and the cloud server. Moreover, we introduce a graph Laplacian term based on the feature vector's latent representation to regulate the user-side model updates. The experimental results on multiple datasets show that our approach achieves better performance compared with the baselines.<|reference_end|>
|
arxiv
|
@article{gao2024one,
title={One Node Per User: Node-Level Federated Learning for Graph Neural
Networks},
author={Zhidong Gao, Yuanxiong Guo, Yanmin Gong},
journal={arXiv preprint arXiv:2409.19513},
year={2024},
archivePrefix={arXiv},
eprint={2409.19513},
primaryClass={cs.LG cs.AI}
}
|
gao2024one
|
arxiv-663189
|
2409.19518
|
KODA: A Data-Driven Recursive Model for Time Series Forecasting and Data Assimilation using Koopman Operators
|
<|reference_start|>KODA: A Data-Driven Recursive Model for Time Series Forecasting and Data Assimilation using Koopman Operators: Approaches based on Koopman operators have shown great promise in forecasting time series data generated by complex nonlinear dynamical systems (NLDS). Although such approaches are able to capture the latent state representation of an NLDS, they still face difficulty in long-term forecasting when applied to real-world data. Specifically, many real-world NLDS exhibit time-varying behavior, leading to nonstationarity that is hard to capture with such models. Furthermore, they lack a systematic data-driven approach to perform data assimilation, that is, exploiting noisy measurements on the fly in the forecasting task. To alleviate the above issues, we propose a Koopman operator-based approach (named KODA - Koopman Operator with Data Assimilation) that integrates forecasting and data assimilation in NLDS. In particular, we use a Fourier-domain filter to disentangle the data into a physical component, whose dynamics can be accurately represented by a Koopman operator, and residual dynamics that represent the local or time-varying behavior and are captured by a flexible and learnable recursive model. We carefully design an architecture and training criterion that ensure this decomposition leads to stable and long-term forecasts. Moreover, we introduce a course-correction strategy to perform data assimilation with new measurements at inference time. The proposed approach is completely data-driven and can be learned end-to-end. Through extensive experimental comparisons, we show that KODA outperforms existing state-of-the-art methods on multiple time series benchmarks such as electricity, temperature, weather, Lorenz 63, and the Duffing oscillator, demonstrating its superior performance and efficacy across the three tasks of (a) forecasting, (b) data assimilation, and (c) state prediction.<|reference_end|>
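For background on the Koopman component, the sketch below implements dynamic mode decomposition (DMD), the classical data-driven finite-dimensional Koopman approximation that approaches like KODA build on; it is not KODA itself, and the toy system is illustrative.

```python
# Minimal DMD: given snapshots x_0..x_T, fit the least-squares linear map K
# with x_{t+1} ~ K x_t, then roll K forward to forecast beyond the data.
import numpy as np

def fit_koopman(X):
    """X: (d, T+1) snapshot matrix. Returns K = Y X^+ (least squares)."""
    Xp, Yp = X[:, :-1], X[:, 1:]
    return Yp @ np.linalg.pinv(Xp)

def forecast(K, x0, steps):
    xs, x = [], np.asarray(x0, float)
    for _ in range(steps):
        x = K @ x
        xs.append(x)
    return np.stack(xs, axis=1)

# Toy linear system: a slowly decaying 2-D rotation, recovered exactly by DMD.
theta = 0.1
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 51)); X[:, 0] = [1.0, 0.0]
for t in range(50):
    X[:, t + 1] = A @ X[:, t]
K = fit_koopman(X)
print(np.allclose(K, A))           # True: exact recovery for linear dynamics
print(forecast(K, X[:, -1], 3).T)  # three-step forecast beyond the data
```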
|
arxiv
|
@article{singh2024koda:,
title={KODA: A Data-Driven Recursive Model for Time Series Forecasting and Data
Assimilation using Koopman Operators},
author={Ashutosh Singh, Ashish Singh, Tales Imbiriba, Deniz Erdogmus, Ricardo
Borsoi},
journal={arXiv preprint arXiv:2409.19518},
year={2024},
archivePrefix={arXiv},
eprint={2409.19518},
primaryClass={cs.LG cs.AI}
}
|
singh2024koda:
|
arxiv-663190
|
2409.19521
|
GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending Against Prompt Injection Attacks
|
<|reference_start|>GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending Against Prompt Injection Attacks: Large Language Models (LLMs) like GPT-4, LLaMA, and Qwen have demonstrated remarkable success across a wide range of applications. However, these models remain inherently vulnerable to prompt injection attacks, which can bypass existing safety mechanisms, highlighting the urgent need for more robust attack detection methods and comprehensive evaluation benchmarks. To address these challenges, we introduce GenTel-Safe, a unified framework that includes a novel prompt injection attack detection method, GenTel-Shield, along with a comprehensive evaluation benchmark, GenTel-Bench, which comprises 84,812 prompt injection attacks, spanning 3 major categories and 28 security scenarios. To prove the effectiveness of GenTel-Shield, we evaluate it together with vanilla safety guardrails against the GenTel-Bench dataset. Empirically, GenTel-Shield can achieve state-of-the-art attack detection success rates, which reveals the critical weakness of existing safeguarding techniques against harmful prompts. For reproducibility, we have made the code and benchmarking dataset available on the project page at https://gentellab.github.io/gentel-safe.github.io/.<|reference_end|>
|
arxiv
|
@article{li2024gentel-safe:,
title={GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending
Against Prompt Injection Attacks},
author={Rongchang Li and Minjie Chen and Chang Hu and Han Chen and Wenpeng
Xing and Meng Han},
journal={arXiv preprint arXiv:2409.19521},
year={2024},
archivePrefix={arXiv},
eprint={2409.19521},
primaryClass={cs.CR cs.LG}
}
|
li2024gentel-safe:
|
arxiv-663191
|
2409.19523
|
LANDeRMT: Detecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation
|
<|reference_start|>LANDeRMT: Detecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation: Recent advancements in large language models (LLMs) have shown promising results in multilingual translation even with limited bilingual supervision. The major challenges are catastrophic forgetting and parameter interference for finetuning LLMs when provided parallel training data. To address these challenges, we propose LANDeRMT, a \textbf{L}anguage-\textbf{A}ware \textbf{N}euron \textbf{De}tecting and \textbf{R}outing framework that selectively finetunes LLMs to \textbf{M}achine \textbf{T}ranslation with diverse translation training data. In LANDeRMT, we evaluate the awareness of neurons to MT tasks and categorize them into language-general and language-specific neurons. This categorization enables selective parameter updates during finetuning, mitigating parameter interference and catastrophic forgetting issues. For the detected neurons, we further propose a conditional awareness-based routing mechanism to dynamically adjust language-general and language-specific capacity within LLMs, guided by translation signals. Experimental results demonstrate that the proposed LANDeRMT is very effective in learning translation knowledge, significantly improving translation quality over various strong baselines for multiple language pairs.<|reference_end|>
|
arxiv
|
@article{zhu2024landermt:,
title={LANDeRMT: Detecting and Routing Language-Aware Neurons for Selectively
Finetuning LLMs to Machine Translation},
author={Shaolin Zhu, Leiyu Pan, Bo Li, Deyi Xiong},
journal={arXiv preprint arXiv:2409.19523},
year={2024},
archivePrefix={arXiv},
eprint={2409.19523},
primaryClass={cs.CL}
}
|
zhu2024landermt:
|
arxiv-663192
|
2409.19526
|
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
|
<|reference_start|>Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats: Multimodal contrastive learning uses various data modalities to create high-quality features, but its reliance on extensive data sources on the Internet makes it vulnerable to backdoor attacks. These attacks insert malicious behaviors during training, which are activated by specific triggers during inference, posing significant security risks. Despite existing countermeasures through fine-tuning that reduce the malicious impacts of such attacks, these defenses frequently necessitate extensive training time and degrade clean accuracy. In this study, we propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning. This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities, known as Unlearn Backdoor Threats (UBT). We specifically use overfit training to improve backdoor shortcuts and accurately detect suspicious samples in the potential poisoning data set. Then, we select fewer unlearned samples from suspicious samples for rapid forgetting in order to eliminate the backdoor effect and thus improve backdoor defense efficiency. In the backdoor unlearning process, we present a novel token-based portion unlearning training regime. This technique focuses on the model's compromised elements, dissociating backdoor correlations while maintaining the model's overall integrity. Extensive experimental results show that our method effectively defends against various backdoor attack methods in the CLIP model. Compared to SoTA backdoor defense methods, UBT achieves the lowest attack success rate while maintaining a high clean accuracy of the model (attack success rate decreases by 19% compared to SOTA, while clean accuracy increases by 2.57%).<|reference_end|>
|
arxiv
|
@article{liu2024efficient,
title={Efficient Backdoor Defense in Multimodal Contrastive Learning: A
Token-Level Unlearning Method for Mitigating Threats},
author={Kuanrong Liu, Siyuan Liang, Jiawei Liang, Pengwen Dai, Xiaochun Cao},
journal={arXiv preprint arXiv:2409.19526},
year={2024},
archivePrefix={arXiv},
eprint={2409.19526},
primaryClass={cs.CR cs.AI cs.CV cs.LG}
}
|
liu2024efficient
|
arxiv-663193
|
2409.19527
|
BuildingView: Constructing Urban Building Exteriors Databases with Street View Imagery and Multimodal Large Language Models
|
<|reference_start|>BuildingView: Constructing Urban Building Exteriors Databases with Street View Imagery and Multimodal Large Language Models: Urban Building Exteriors are increasingly important in urban analytics, driven by advancements in Street View Imagery and its integration with urban research. Multimodal Large Language Models (LLMs) offer powerful tools for urban annotation, enabling deeper insights into urban environments. However, challenges remain in creating accurate and detailed urban building exterior databases, identifying critical indicators for energy efficiency, environmental sustainability, and human-centric design, and systematically organizing these indicators. To address these challenges, we propose BuildingView, a novel approach that integrates high-resolution visual data from Google Street View with spatial information from OpenStreetMap via the Overpass API. This research improves the accuracy of urban building exterior data, identifies key sustainability and design indicators, and develops a framework for their extraction and categorization. Our methodology includes a systematic literature review, building and Street View sampling, and annotation using the ChatGPT-4O API. The resulting database, validated with data from New York City, Amsterdam, and Singapore, provides a comprehensive tool for urban studies, supporting informed decision-making in urban planning, architectural design, and environmental policy. The code for BuildingView is available at https://github.com/Jasper0122/BuildingView.<|reference_end|>
|
arxiv
|
@article{li2024buildingview:,
title={BuildingView: Constructing Urban Building Exteriors Databases with
Street View Imagery and Multimodal Large Language Mode},
author={Zongrong Li and Yunlei Su and Chenyuan Zhu and Wufan Zhao},
journal={arXiv preprint arXiv:2409.19527},
year={2024},
archivePrefix={arXiv},
eprint={2409.19527},
primaryClass={cs.AI cs.CV cs.CY}
}
|
li2024buildingview:
|
arxiv-663194
|
2409.19528
|
FoAM: Foresight-Augmented Multi-Task Imitation Policy for Robotic Manipulation
|
<|reference_start|>FoAM: Foresight-Augmented Multi-Task Imitation Policy for Robotic Manipulation: Multi-task imitation learning (MTIL) has shown significant potential in robotic manipulation by enabling agents to perform various tasks using a unified policy. This simplifies policy deployment and enhances the agent's adaptability across different contexts. However, key challenges remain, such as maintaining action reliability (e.g., avoiding abnormal action sequences that deviate from nominal task trajectories), distinguishing between similar tasks, and generalizing to unseen scenarios. To address these challenges, we introduce the Foresight-Augmented Manipulation Policy (FoAM), an innovative MTIL framework. FoAM not only learns to mimic expert actions but also predicts the visual outcomes of those actions to enhance decision-making. Additionally, it integrates multi-modal goal inputs, such as visual and language prompts, overcoming the limitations of single-conditioned policies. We evaluated FoAM across over 100 tasks in both simulation and real-world settings, demonstrating that it significantly improves IL policy performance, outperforming current state-of-the-art IL baselines by up to 41% in success rate. Furthermore, we released a simulation benchmark for robotic manipulation, featuring 10 task suites and over 80 challenging tasks designed for multi-task policy training and evaluation. See the project homepage https://projFoAM.github.io/ for details.<|reference_end|>
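A minimal sketch of what a foresight-augmented imitation objective could look like, assuming a behavior-cloning head plus an auxiliary head that predicts the visual outcome of the action. All module shapes, feature encoders, and the loss weight beta are invented for illustration, not taken from the paper.

import torch
import torch.nn as nn

class ForesightPolicySketch(nn.Module):
    def __init__(self, obs_dim=512, goal_dim=512, act_dim=7):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim + goal_dim, 256), nn.ReLU())
        self.action_head = nn.Linear(256, act_dim)       # behavior-cloning head
        self.foresight_head = nn.Linear(256, obs_dim)    # predicts next visual features

    def forward(self, obs_feat, goal_feat):
        h = self.trunk(torch.cat([obs_feat, goal_feat], dim=-1))
        return self.action_head(h), self.foresight_head(h)

def foam_style_loss(policy, obs_feat, goal_feat, expert_action, next_obs_feat, beta=0.5):
    # Imitate the expert action while also predicting the action's visual outcome.
    pred_action, pred_next = policy(obs_feat, goal_feat)
    bc_term = nn.functional.mse_loss(pred_action, expert_action)
    foresight_term = nn.functional.mse_loss(pred_next, next_obs_feat)
    return bc_term + beta * foresight_term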
|
arxiv
|
@article{liu2024foam:,
title={FoAM: Foresight-Augmented Multi-Task Imitation Policy for Robotic
Manipulation},
author={Litao Liu and Wentao Wang and Yifan Han and Zhuoli Xie and Pengfei
Yi and Junyan Li and Yi Qin and Wenzhao Lian},
journal={arXiv preprint arXiv:2409.19528},
year={2024},
archivePrefix={arXiv},
eprint={2409.19528},
primaryClass={cs.RO}
}
|
liu2024foam:
|
arxiv-663195
|
2409.19531
|
Understanding Clinical Decision-Making in Traditional East Asian Medicine through Dimensionality Reduction: An Empirical Investigation
|
<|reference_start|>Understanding Clinical Decision-Making in Traditional East Asian Medicine through Dimensionality Reduction: An Empirical Investigation: This study examines the clinical decision-making processes in Traditional East Asian Medicine (TEAM) by reinterpreting pattern identification (PI) through the lens of dimensionality reduction. Focusing on the Eight Principle Pattern Identification (EPPI) system and utilizing empirical data from the Shang-Han-Lun, we explore the necessity and significance of prioritizing the Exterior-Interior pattern in diagnosis and treatment selection. We test three hypotheses: whether the Exterior-Interior pattern contains the most information about patient symptoms, represents the most abstract and generalizable symptom information, and facilitates the selection of appropriate herbal prescriptions. Employing quantitative measures such as the abstraction index, cross-conditional generalization performance, and decision tree regression, we demonstrate that the Exterior-Interior pattern represents the most abstract and generalizable symptom information, contributing to the efficient mapping between symptom and herbal prescription spaces. This research provides an objective framework for understanding the cognitive processes underlying TEAM, bridging traditional medical practices with modern computational approaches. The findings offer insights into the development of AI-driven diagnostic tools in TEAM and conventional medicine, with the potential to advance clinical practice, education, and research.<|reference_end|>
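A hypothetical sketch of the decision-tree analysis mentioned above: regress a prescription choice on binary pattern labels and read off feature importances. The data are synthetic, and the encoding of patterns and prescriptions is an assumption made purely for illustration.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 300
# Four binary pattern axes: Exterior-Interior, Cold-Heat, Deficiency-Excess, Yin-Yang
patterns = rng.integers(0, 2, size=(n, 4)).astype(float)
# Synthetic prescription score dominated by the first (Exterior-Interior) axis
prescription = 2.0 * patterns[:, 0] + 0.3 * patterns[:, 1] + rng.normal(0.0, 0.1, n)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(patterns, prescription)
print(tree.feature_importances_)  # a dominant first axis would mirror hypothesis 3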
|
arxiv
|
@article{bae2024understanding,
title={Understanding Clinical Decision-Making in Traditional East Asian
Medicine through Dimensionality Reduction: An Empirical Investigation},
author={Hyojin Bae and Bongsu Kang and Chang-Eop Kim},
journal={arXiv preprint arXiv:2409.19531},
year={2024},
archivePrefix={arXiv},
eprint={2409.19531},
primaryClass={cs.LG cs.AI}
}
|
bae2024understanding
|
arxiv-663196
|
2409.19532
|
Video DataFlywheel: Resolving the Impossible Data Trinity in Video-Language Understanding
|
<|reference_start|>Video DataFlywheel: Resolving the Impossible Data Trinity in Video-Language Understanding: Recently, video-language understanding has achieved great success through large-scale pre-training. However, data scarcity remains a prevailing challenge. This study quantitatively reveals an "impossible trinity" among data quantity, diversity, and quality in pre-training datasets. Recent efforts seek to refine large-scale, diverse ASR datasets compromised by low quality through synthetic annotations. These methods successfully leverage useful information in multimodal video content (frames, tags, ASR transcripts, etc.) to refine the original annotations. Nevertheless, they struggle to mitigate noise within synthetic annotations and lack scalability as the dataset size expands. To address these issues, we introduce the Video DataFlywheel framework, which iteratively refines video annotations with improved noise control methods. For iterative refinement, we first leverage a video-language model to generate synthetic annotations, resulting in a refined dataset. Then, we pre-train on it and fine-tune on human refinement examples for a stronger model. These processes are repeated for continuous improvement. For noise control, we present AdaTaiLr, a novel noise control method that requires weaker assumptions on noise distribution, thereby proving more effective in large datasets with theoretical guarantees. The combination of iterative refinement and AdaTaiLr can achieve better scalability in video-language understanding. Extensive experiments show that our framework outperforms existing data refinement baselines, delivering a 3% performance boost and improving dataset quality with minimal diversity loss. Furthermore, our refined dataset facilitates significant improvements in various video-language understanding tasks, including video question answering and text-video retrieval.<|reference_end|>
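A schematic Python sketch of the flywheel loop as this abstract describes it. The callables stand in for the video-language model's annotation, pre-training, and fine-tuning stages; AdaTaiLr's noise control is omitted entirely, so this is a structural outline only.

from typing import Callable, List, Tuple

def data_flywheel(
    videos: List[str],
    annotate: Callable[[str], str],                  # model produces a synthetic caption
    pretrain: Callable[[List[Tuple[str, str]]], None],
    finetune: Callable[[List[Tuple[str, str]]], None],
    human_examples: List[Tuple[str, str]],
    rounds: int = 3,
) -> List[Tuple[str, str]]:
    dataset = [(v, annotate(v)) for v in videos]     # initial synthetic annotations
    for _ in range(rounds):
        pretrain(dataset)                            # pre-train on the refined set
        finetune(human_examples)                     # align with human refinements
        dataset = [(v, annotate(v)) for v in videos]  # re-annotate with the stronger model
    return dataset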
|
arxiv
|
@article{wang2024video,
title={Video DataFlywheel: Resolving the Impossible Data Trinity in
Video-Language Understanding},
author={Xiao Wang and Jianlong Wu and Zijia Lin and Fuzheng Zhang and Di
Zhang and Liqiang Nie},
journal={arXiv preprint arXiv:2409.19532},
year={2024},
archivePrefix={arXiv},
eprint={2409.19532},
primaryClass={cs.CV cs.CL cs.LG cs.MM}
}
|
wang2024video
|
arxiv-663197
|
2409.19533
|
Mixed Chain-of-Psychotherapies for Emotional Support Chatbot
|
<|reference_start|>Mixed Chain-of-Psychotherapies for Emotional Support Chatbot: In the realm of mental health support chatbots, it is vital to show empathy and encourage self-exploration to provide tailored solutions. However, current approaches tend to provide general insights or solutions without fully understanding the help-seeker's situation. Therefore, we propose PsyMix, a chatbot that analyzes the seeker's state from the perspective of a psychotherapy approach (Chain-of-Psychotherapies, CoP) before generating a response, and learns to incorporate the strengths of various psychotherapies by fine-tuning on a mixture of CoPs. Through comprehensive evaluation, we found that PsyMix outperforms the ChatGPT baseline and demonstrates a level of empathy in its responses comparable to that of human counselors.<|reference_end|>
|
arxiv
|
@article{chen2024mixed,
title={Mixed Chain-of-Psychotherapies for Emotional Support Chatbot},
author={Siyuan Chen and Cong Ming and Zhiling Zhang and Yanyi Chen and
Kenny Q. Zhu and Mengyue Wu},
journal={arXiv preprint arXiv:2409.19533},
year={2024},
archivePrefix={arXiv},
eprint={2409.19533},
primaryClass={cs.CL}
}
|
chen2024mixed
|
arxiv-663198
|
2409.19534
|
An evolutionary approach for discovering non-Gaussian stochastic dynamical systems based on nonlocal Kramers-Moyal formulas
|
<|reference_start|>An evolutionary approach for discovering non-Gaussian stochastic dynamical systems based on nonlocal Kramers-Moyal formulas: Discovering explicit governing equations of stochastic dynamical systems with both (Gaussian) Brownian noise and (non-Gaussian) L\'evy noise from data is challenging due to possible intricate functional forms and the inherent complexity of L\'evy motion. This research develops an evolutionary symbol sparse regression (ESSR) approach to extract non-Gaussian stochastic dynamical systems from sample path data, based on nonlocal Kramers-Moyal formulas, genetic programming, and sparse regression. More specifically, genetic programming generates a diverse array of candidate functions, sparse regression learns the coefficients associated with these candidates, and the nonlocal Kramers-Moyal formulas serve as the foundation for constructing the fitness measure in genetic programming and the loss function in sparse regression. The efficacy and capabilities of this approach are showcased through its application to several illustrative models. This approach stands out as a potent instrument for deciphering non-Gaussian stochastic dynamics from available datasets, indicating a wide range of applications across different fields.<|reference_end|>
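A simplified, Gaussian-only sketch of the sparse-regression step: the drift is estimated from sample-path increments via the first Kramers-Moyal moment and fitted against a fixed polynomial library with sequentially thresholded least squares. The GP-generated candidate functions and the nonlocal (Levy) formulas of the actual ESSR method are beyond this illustration.

import numpy as np

def stlsq(theta, y, threshold=0.1, iters=10):
    """Sequentially thresholded least squares (SINDy-style sparse regression)."""
    xi = np.linalg.lstsq(theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(theta[:, ~small], y, rcond=None)[0]
    return xi

# Euler-Maruyama sample path of dX = (x - x^3) dt + sigma dW
rng = np.random.default_rng(0)
dt, n, sigma = 1e-3, 200_000, 0.3
x = np.empty(n)
x[0] = 0.1
for k in range(n - 1):
    x[k + 1] = x[k] + (x[k] - x[k]**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

drift = (x[1:] - x[:-1]) / dt                             # first Kramers-Moyal moment, pointwise
library = np.column_stack([x[:-1]**p for p in range(4)])  # candidates: 1, x, x^2, x^3
print(stlsq(library, drift))                              # expect roughly [0, 1, 0, -1]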
|
arxiv
|
@article{li2024an,
title={An evolutionary approach for discovering non-Gaussian stochastic
dynamical systems based on nonlocal Kramers-Moyal formulas},
author={Yang Li and Shengyuan Xu and Jinqiao Duan},
journal={arXiv preprint arXiv:2409.19534},
year={2024},
archivePrefix={arXiv},
eprint={2409.19534},
primaryClass={stat.ML cs.LG cs.NE math.DS}
}
|
li2024an
|
arxiv-663199
|
2409.19536
|
Joint Trajectory Replanning for Mars Ascent Vehicle under Propulsion System Faults: A Suboptimal Learning-Based Warm-Start Approach
|
<|reference_start|>Joint Trajectory Replanning for Mars Ascent Vehicle under Propulsion System Faults: A Suboptimal Learning-Based Warm-Start Approach: During Mars ascent vehicle (MAV) launch missions, when a thrust-drop propulsion system fault occurs, general trajectory replanning methods that rely on step-by-step judgments may fail to make timely decisions, potentially leading to mission failure. This paper proposes a suboptimal joint trajectory replanning (SJTR) method, which formulates the joint optimization problem of target orbit and flight trajectory after a fault within a convex optimization framework. By incorporating penalty coefficients for terminal constraints, the optimization solution adheres to the orbit redecision principle, thereby avoiding complex decision-making processes and resulting in a concise and rapid solution to the replanning problem. A learning-based warm-start scheme is proposed in conjunction with the designed SJTR method. Offline, a deep neural network (DNN) is trained using a dataset generated by the SJTR method. Online, the DNN provides initial guesses for the time optimization variables based on the current fault situation, enhancing the solving efficiency and reliability of the algorithm. Numerical simulations of the MAV flight scenario under thrust-drop faults are performed, and Monte Carlo experiments and case studies across all orbit types demonstrate the effectiveness of the proposed method.<|reference_end|>
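A minimal sketch of the learning-based warm start, assuming the fault state is summarized as a fixed-length vector and the SJTR solver accepts initial time-variable guesses. The network size, feature choices, and the solve_sjtr hook are assumptions, not the authors' design.

import torch
import torch.nn as nn

class WarmStartNet(nn.Module):
    """Maps a summarized fault state to initial guesses for the time variables."""
    def __init__(self, state_dim=8, n_time_vars=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_time_vars),
        )

    def forward(self, fault_state):
        return self.net(fault_state)

def train_offline(model, states, labels, epochs=200, lr=1e-3):
    # Offline phase: regress SJTR-optimal time variables on fault conditions.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(states), labels)
        loss.backward()
        opt.step()
    return model

# Online phase (schematic): the prediction seeds the convex replanning problem.
# t_guess = model(current_fault_state)   # initial time-variable guess
# plan = solve_sjtr(t_guess, ...)        # hypothetical convex solver call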
|
arxiv
|
@article{li2024joint,
title={Joint Trajectory Replanning for Mars Ascent Vehicle under Propulsion
System Faults: A Suboptimal Learning-Based Warm-Start Approach},
author={Kun Li and Guangtao Ran and Yanning Guo and Ju H. Park and Yao Zhang},
journal={arXiv preprint arXiv:2409.19536},
year={2024},
archivePrefix={arXiv},
eprint={2409.19536},
primaryClass={eess.SY cs.SY}
}
|
li2024joint
|
arxiv-663200
|
2409.19540
|
LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models
|
<|reference_start|>LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models: The widespread adoption of large-scale pre-training techniques has significantly advanced the development of medical foundation models, enabling them to serve as versatile tools across a broad range of medical tasks. However, despite their strong generalization capabilities, medical foundation models pre-trained on large-scale datasets tend to suffer from domain gaps between heterogeneous data, leading to suboptimal performance on specific tasks compared to specialist models, as evidenced by previous studies. In this paper, we explore a new perspective called "Knowledge Decomposition" to improve the performance on specific medical tasks, which deconstructs the foundation model into multiple lightweight expert models, each dedicated to a particular anatomical region, with the aim of enhancing specialization and simultaneously reducing resource consumption. To accomplish the above objective, we propose a novel framework named Low-Rank Knowledge Decomposition (LoRKD), which explicitly separates gradients from different tasks by incorporating low-rank expert modules and efficient knowledge separation convolution. The low-rank expert modules resolve gradient conflicts between heterogeneous data from different anatomical regions, providing strong specialization at lower costs. The efficient knowledge separation convolution significantly improves algorithm efficiency by achieving knowledge separation within a single forward propagation. Extensive experimental results on segmentation and classification tasks demonstrate that our decomposed models not only achieve state-of-the-art performance but also exhibit superior transferability on downstream tasks, even surpassing the original foundation models in task-specific evaluations. The code is available here.<|reference_end|>
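An illustrative sketch of the low-rank expert idea at the level this abstract describes: a LoRA-style adapter per anatomical region added to a shared layer so that region-specific gradients stay separated. The rank, the routing by region name, and the knowledge-separation convolution itself are simplified assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class LowRankExpert(nn.Module):
    def __init__(self, dim, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # project to a small rank
        self.up = nn.Linear(rank, dim, bias=False)    # project back
        nn.init.zeros_(self.up.weight)                # expert starts as a no-op residual

    def forward(self, x):
        return self.up(self.down(x))

class SharedLayerWithExperts(nn.Module):
    def __init__(self, dim=256, regions=("brain", "chest", "abdomen")):
        super().__init__()
        self.shared = nn.Linear(dim, dim)             # stands in for a foundation-model layer
        self.experts = nn.ModuleDict({r: LowRankExpert(dim) for r in regions})

    def forward(self, x, region):
        # Each region's gradients flow only through its own low-rank expert,
        # keeping task-specific updates separated from the shared weights.
        return self.shared(x) + self.experts[region](x)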
|
arxiv
|
@article{li2024lorkd:,
title={LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models},
author={Haolin Li and Yuhang Zhou and Ziheng Zhao and Siyuan Du and
Jiangchao Yao and Weidi Xie and Ya Zhang and Yanfeng Wang},
journal={arXiv preprint arXiv:2409.19540},
year={2024},
archivePrefix={arXiv},
eprint={2409.19540},
primaryClass={cs.CV}
}
|
li2024lorkd:
|