corpus_id: string (7-12 chars)
paper_id: string (9-16 chars)
title: string (1-261 chars)
abstract: string (70-4.02k chars)
source: string (1 class)
bibtex: string (208-20.9k chars)
citation_key: string (6-100 chars)
arxiv-664901
2410.02200
Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts
<|reference_start|>Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts: Prompt-based techniques, such as prompt-tuning and prefix-tuning, have gained prominence for their efficiency in fine-tuning large pre-trained models. Despite their widespread adoption, the theoretical foundations of these methods remain limited. For instance, in prefix-tuning, we observe that a key factor in achieving performance parity with full fine-tuning lies in the reparameterization strategy. However, the theoretical principles underpinning the effectiveness of this approach have yet to be thoroughly examined. Our study demonstrates that reparameterization is not merely an engineering trick but is grounded in deep theoretical foundations. Specifically, we show that the reparameterization strategy implicitly encodes a shared structure between prefix key and value vectors. Building on recent insights into the connection between prefix-tuning and mixture of experts models, we further illustrate that this shared structure significantly improves sample efficiency in parameter estimation compared to non-shared alternatives. The effectiveness of prefix-tuning across diverse tasks is empirically confirmed to be enhanced by the shared structure, through extensive experiments in both visual and language domains. Additionally, we uncover similar structural benefits in prompt-tuning, offering new perspectives on its success. Our findings provide theoretical and empirical contributions, advancing the understanding of prompt-based methods and their underlying mechanisms.<|reference_end|>
arxiv
@article{le2024revisiting, title={Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts}, author={Minh Le, Chau Nguyen, Huy Nguyen, Quyen Tran, Trung Le, Nhat Ho}, journal={arXiv preprint arXiv:2410.02200}, year={2024}, archivePrefix={arXiv}, eprint={2410.02200}, primaryClass={cs.LG} }
le2024revisiting
arxiv-664902
2410.02201
Remember and Recall: Associative-Memory-based Trajectory Prediction
<|reference_start|>Remember and Recall: Associative-Memory-based Trajectory Prediction: Trajectory prediction is a pivotal component of autonomous driving systems, enabling the application of accumulated movement experience to current scenarios. Although most existing methods concentrate on learning continuous representations to gain valuable experience, they often suffer from computational inefficiencies and struggle with unfamiliar situations. To address this issue, we propose the Fragmented-Memory-based Trajectory Prediction (FMTP) model, inspired by the remarkable learning capabilities of humans, particularly their ability to leverage accumulated experience and recall relevant memories in unfamiliar situations. The FMTP model employs discrete representations to enhance computational efficiency by reducing information redundancy while maintaining the flexibility to utilize past experiences. Specifically, we design a learnable memory array by consolidating continuous trajectory representations from the training set using defined quantization operations during the training phase. This approach further eliminates redundant information while preserving essential features in discrete form. Additionally, we develop an advanced reasoning engine based on language models to deeply learn the associative rules among these discrete representations. Our method has been evaluated on various public datasets, including ETH-UCY, inD, SDD, nuScenes, Waymo, and VTL-TP. The extensive experimental results demonstrate that our approach achieves strong performance and extracts more valuable experience from past trajectories to inform the current state.<|reference_end|>
arxiv
@article{guo2024remember, title={Remember and Recall: Associative-Memory-based Trajectory Prediction}, author={Hang Guo, Yuzhen Zhang, Tianci Gao, Junning Su, Pei Lv, Mingliang Xu}, journal={arXiv preprint arXiv:2410.02201}, year={2024}, archivePrefix={arXiv}, eprint={2410.02201}, primaryClass={cs.CV} }
guo2024remember
arxiv-664903
2410.02202
Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference
<|reference_start|>Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference: Generating commonsense assertions within a given story context remains a difficult task for modern language models. Previous research has addressed this problem by aligning commonsense inferences with stories and training language generation models accordingly. One of the challenges is determining which topic or entity in the story should be the focus of an inferred assertion. Prior approaches lack the ability to control specific aspects of the generated assertions. In this work, we introduce "hinting," a data augmentation technique that enhances contextualized commonsense inference. "Hinting" employs a prefix prompting strategy using both hard and soft prompts to guide the inference process. To demonstrate its effectiveness, we apply "hinting" to two contextual commonsense inference datasets: ParaCOMET and GLUCOSE, evaluating its impact on both general and context-specific inference. Furthermore, we evaluate "hinting" by incorporating synonyms and antonyms into the hints. Our results show that "hinting" does not compromise the performance of contextual commonsense inference while offering improved controllability.<|reference_end|>
arxiv
@article{colon-hernandez2024can, title={Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference}, author={Pedro Colon-Hernandez, Nanxi Liu, Chelsea Joe, Peter Chin, Claire Yin, Henry Lieberman, Yida Xin, Cynthia Breazeal}, journal={arXiv preprint arXiv:2410.02202}, year={2024}, archivePrefix={arXiv}, eprint={2410.02202}, primaryClass={cs.CL cs.AI} }
colon-hernandez2024can
arxiv-664904
2410.02203
GraphIC: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning
<|reference_start|>GraphIC: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning: In-context learning (ICL) enables large language models (LLMs) to generalize to new tasks by incorporating a few in-context examples (ICEs) directly in the input, without updating parameters. However, the effectiveness of ICL heavily relies on the selection of ICEs, and conventional text-based embedding methods are often inadequate for tasks that require multi-step reasoning, such as mathematical and logical problem solving. This is due to the bias introduced by shallow semantic similarities that fail to capture the deeper reasoning structures required for these tasks. We present GraphIC, a novel approach that leverages graph-based representations of reasoning processes, coupled with Bayesian Networks (BNs) to select ICEs. Graph structures inherently filter out shallow semantics while preserving the core reasoning structure. Importantly, BNs capture the dependency of a node's attributes on its parent nodes, closely mirroring the hierarchical nature of human cognition, where each thought is shaped by preceding ones. This makes BNs particularly well-suited for multi-step reasoning tasks, aligning the process more closely with human-like reasoning. Extensive experiments across three types of reasoning tasks (mathematical reasoning, code generation, and logical reasoning) demonstrate that GraphIC outperforms both training-free and training-based models in selecting ICEs, excelling in terms of both effectiveness and efficiency. We show that GraphIC enhances ICL's performance and interpretability, significantly advancing ICE selection for multi-step reasoning tasks.<|reference_end|>
arxiv
@article{fu2024graphic:, title={GraphIC: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning}, author={Jiale Fu, Yaqing Wang, Simeng Han, Jiaming Fan, Chen Si, Xu Yang}, journal={arXiv preprint arXiv:2410.02203}, year={2024}, archivePrefix={arXiv}, eprint={2410.02203}, primaryClass={cs.AI} }
fu2024graphic:
arxiv-664905
2410.02204
An Efficient Scaled spectral preconditioner for sequences of symmetric positive definite linear systems
<|reference_start|>An Efficient Scaled spectral preconditioner for sequences of symmetric positive definite linear systems: We explore a scaled spectral preconditioner for the efficient solution of sequences of symmetric and positive-definite linear systems. We design the scaled preconditioner not only as an approximation of the inverse of the linear system but also with consideration of its use within the conjugate gradient (CG) method. We propose three different strategies for selecting a scaling parameter, which aims to position the eigenvalues of the preconditioned matrix in a way that reduces the energy norm of the error, the quantity that CG monotonically decreases at each iteration. Our focus is on accelerating convergence especially in the early iterations, which is particularly important when CG is truncated due to computational cost constraints. Numerical experiments in data assimilation confirm that the scaled spectral preconditioner can significantly improve early CG convergence with negligible computational cost.<|reference_end|>
arxiv
@article{diouane2024an, title={An Efficient Scaled spectral preconditioner for sequences of symmetric positive definite linear systems}, author={Youssef Diouane and Selime G\"urol and Oussama Mouhtal and Dominique Orban}, journal={arXiv preprint arXiv:2410.02204}, year={2024}, doi={10.13140/RG.2.2.28678.38725}, number={G-2024-66}, archivePrefix={arXiv}, eprint={2410.02204}, primaryClass={math.NA cs.NA math.OC} }
diouane2024an
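The abstract above concerns a spectral preconditioner, scaled by a parameter, for use within CG. As a rough illustration only (not the authors' construction and not their scaling strategies), the Python sketch below applies one common form of scaled spectral preconditioner, H = mu*(I - V V^T) + V diag(1/lambda) V^T, inside SciPy's CG; the synthetic test matrix, the number of retained eigenpairs, and the fixed scaling mu = 1 are assumptions made for this example.

# Illustrative sketch only: a scaled spectral preconditioner used with CG.
# The test matrix, number of retained eigenpairs, and mu = 1 are assumptions,
# not the construction or scaling strategies of the paper above.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
spectrum = np.linspace(1e-3, 1.0, n)            # ill-conditioned SPD spectrum
A = Q @ np.diag(spectrum) @ Q.T
b = rng.standard_normal(n)

# Retain the k smallest eigenpairs of A; in a sequence of systems these would
# typically be approximate (e.g., Ritz) pairs recycled from earlier solves.
k = 10
lam, V = np.linalg.eigh(A)
lam, V = lam[:k], V[:, :k]

mu = 1.0  # scaling parameter; choosing it well is the subject of the paper

def apply_H(r):
    """Apply H r = mu * (r - V V^T r) + V diag(1/lam) V^T r."""
    c = V.T @ r
    return mu * (r - V @ c) + V @ (c / lam)

M = LinearOperator((n, n), matvec=apply_H)
x_prec, _ = cg(A, b, M=M, maxiter=30)
x_plain, _ = cg(A, b, maxiter=30)
print("residual with spectral preconditioner:", np.linalg.norm(A @ x_prec - b))
print("residual without preconditioning     :", np.linalg.norm(A @ x_plain - b))

With the ten smallest eigendirections handled exactly, truncated CG should reach a much smaller residual within the same 30 iterations, which is the early-iteration effect the abstract targets.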
arxiv-664906
2410.02205
Aligning with Logic: Measuring, Evaluating and Improving Logical Consistency in Large Language Models
<|reference_start|>Aligning with Logic: Measuring, Evaluating and Improving Logical Consistency in Large Language Models: Recent research in Large Language Models (LLMs) has shown promising progress related to LLM alignment with human preferences. LLM-empowered decision-making systems are expected to be predictable, reliable and trustworthy, which implies being free from paradoxes or contradictions that could undermine their credibility and validity. However, LLMs still exhibit inconsistent and biased behaviour when making decisions or judgements. In this work, we focus on studying logical consistency of LLMs as a prerequisite for more reliable and trustworthy systems. Logical consistency ensures that decisions are based on a stable and coherent understanding of the problem, reducing the risk of erratic or contradictory outputs. We first propose a universal framework to quantify the logical consistency via three fundamental proxies: transitivity, commutativity and negation invariance. We then evaluate logical consistency, using the defined measures, of a wide range of LLMs, demonstrating that it can serve as a strong proxy for overall robustness. Additionally, we introduce a data refinement and augmentation technique that enhances the logical consistency of LLMs without sacrificing alignment to human preferences. It augments noisy and sparse pairwise-comparison annotations by estimating partially or totally ordered preference rankings using rank aggregation methods. Finally, we show that logical consistency impacts the performance of LLM-based logic-dependent algorithms, where LLMs serve as logical operators.<|reference_end|>
arxiv
@article{liu2024aligning, title={Aligning with Logic: Measuring, Evaluating and Improving Logical Consistency in Large Language Models}, author={Yinhong Liu, Zhijiang Guo, Tianya Liang, Ehsan Shareghi, Ivan Vuli\'c, Nigel Collier}, journal={arXiv preprint arXiv:2410.02205}, year={2024}, archivePrefix={arXiv}, eprint={2410.02205}, primaryClass={cs.CL cs.AI cs.LO} }
liu2024aligning
arxiv-664907
2410.02207
Adapting Segment Anything Model to Melanoma Segmentation in Microscopy Slide Images
<|reference_start|>Adapting Segment Anything Model to Melanoma Segmentation in Microscopy Slide Images: Melanoma segmentation in Whole Slide Images (WSIs) is useful for prognosis and the measurement of crucial prognostic factors such as Breslow depth and primary invasive tumor size. In this paper, we present a novel approach that uses the Segment Anything Model (SAM) for automatic melanoma segmentation in microscopy slide images. Our method employs an initial semantic segmentation model to generate preliminary segmentation masks that are then used to prompt SAM. We design a dynamic prompting strategy that uses a combination of centroid and grid prompts to achieve optimal coverage of the super high-resolution slide images while maintaining the quality of generated prompts. To optimize for invasive melanoma segmentation, we further refine the prompt generation process by implementing in-situ melanoma detection and low-confidence region filtering. We select Segformer as the initial segmentation model and EfficientSAM as the segment anything model for parameter-efficient fine-tuning. Our experimental results demonstrate that this approach not only surpasses other state-of-the-art melanoma segmentation methods but also significantly outperforms the baseline Segformer by 9.1% in terms of IoU.<|reference_end|>
arxiv
@article{liu2024adapting, title={Adapting Segment Anything Model to Melanoma Segmentation in Microscopy Slide Images}, author={Qingyuan Liu and Avideh Zakhor}, journal={arXiv preprint arXiv:2410.02207}, year={2024}, archivePrefix={arXiv}, eprint={2410.02207}, primaryClass={cs.CV cs.AI cs.LG} }
liu2024adapting
arxiv-664908
2410.02208
Fast nonparametric feature selection with error control using integrated path stability selection
<|reference_start|>Fast nonparametric feature selection with error control using integrated path stability selection: Feature selection can greatly improve performance and interpretability in machine learning problems. However, existing nonparametric feature selection methods either lack theoretical error control or fail to accurately control errors in practice. Many methods are also slow, especially in high dimensions. In this paper, we introduce a general feature selection method that applies integrated path stability selection to thresholding to control false positives and the false discovery rate. The method also estimates q-values, which are better suited to high-dimensional data than p-values. We focus on two special cases of the general method based on gradient boosting (IPSSGB) and random forests (IPSSRF). Extensive simulations with RNA sequencing data show that IPSSGB and IPSSRF have better error control, detect more true positives, and are faster than existing methods. We also use both methods to detect microRNAs and genes related to ovarian cancer, finding that they make better predictions with fewer features than other methods.<|reference_end|>
arxiv
@article{melikechi2024fast, title={Fast nonparametric feature selection with error control using integrated path stability selection}, author={Omar Melikechi, David B. Dunson, and Jeffrey W. Miller}, journal={arXiv preprint arXiv:2410.02208}, year={2024}, archivePrefix={arXiv}, eprint={2410.02208}, primaryClass={stat.ML cs.LG stat.AP stat.ME} }
melikechi2024fast
arxiv-664909
2410.02209
Some three-weight linear codes and their complete weight enumerators and weight hierarchies
<|reference_start|>Some three-weight linear codes and their complete weight enumerators and weight hierarchies: Linear codes with a few weights can be applied to secret sharing, authentication codes, association schemes and strongly regular graphs. For an odd prime power $q$, we construct a class of three-weight $\mathbb{F}_q$-linear codes from quadratic functions via a bivariate construction and then determine the complete weight enumerators and weight hierarchies of these linear codes. This paper generalizes some results in Li et al. (2022) and Hu et al. (2024).<|reference_end|>
arxiv
@article{li2024some, title={Some three-weight linear codes and their complete weight enumerators and weight hierarchies}, author={Xiumei Li, Zongxi Chen and Fei Li}, journal={arXiv preprint arXiv:2410.02209}, year={2024}, archivePrefix={arXiv}, eprint={2410.02209}, primaryClass={cs.IT math.IT math.NT} }
li2024some
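Two notions in the abstract above have standard textbook definitions, recalled here for reference (these are not results of the cited paper):

% Standard definitions, not specific to the cited paper.
% Complete weight enumerator of a linear code C of length n over
% F_q = {w_0 = 0, w_1, ..., w_{q-1}}:
\[
  \mathrm{CWE}_C(z_0,\dots,z_{q-1})
  = \sum_{\mathbf{c}\in C}\ \prod_{j=0}^{q-1} z_j^{\,t_j(\mathbf{c})},
  \qquad t_j(\mathbf{c}) = \#\{\, i : c_i = w_j \,\},
\]
% i.e., t_j(c) counts the coordinates of the codeword c equal to w_j.
% Weight hierarchy: the r-th generalized Hamming weight of a k-dimensional code C is
\[
  d_r(C) = \min\{\, |\mathrm{Supp}(D)| : D \ \text{is an $r$-dimensional subcode of } C \,\},
  \qquad 1 \le r \le k,
\]
% where Supp(D) is the set of coordinate positions that are nonzero in at least
% one codeword of D; the weight hierarchy of C is the sequence (d_1(C), ..., d_k(C)).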
arxiv-664910
2410.02210
Calibrate to Discriminate: Improve In-Context Learning with Label-Free Comparative Inference
<|reference_start|>Calibrate to Discriminate: Improve In-Context Learning with Label-Free Comparative Inference: While in-context learning with large language models (LLMs) has shown impressive performance, we have discovered a unique miscalibration behavior where both correct and incorrect predictions are assigned the same level of confidence. We refer to this phenomenon as indiscriminate miscalibration. We found that traditional calibration metrics, such as the expected calibration error (ECE), are unable to capture this behavior effectively. To address this issue, we propose new metrics to measure the severity of indiscriminate miscalibration. Additionally, we develop a novel in-context comparative inference method to alleviate miscalibration and improve classification performance. Through extensive experiments on five datasets, we demonstrate that our proposed method can achieve more accurate and calibrated predictions compared to regular zero-shot and few-shot prompting.<|reference_end|>
arxiv
@article{cheng2024calibrate, title={Calibrate to Discriminate: Improve In-Context Learning with Label-Free Comparative Inference}, author={Wei Cheng, Tianlu Wang, Yanmin Ji, Fan Yang, Keren Tan, Yiyu Zheng}, journal={arXiv preprint arXiv:2410.02210}, year={2024}, archivePrefix={arXiv}, eprint={2410.02210}, primaryClass={cs.CL cs.LG} }
cheng2024calibrate
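For reference, the standard binned expected calibration error that the abstract above argues is insufficient can be computed as in the sketch below; this is the baseline metric only, the paper's proposed metrics for indiscriminate miscalibration are not reproduced, and the bin count and toy numbers are assumptions for the example.

# Minimal sketch of the standard binned expected calibration error (ECE).
# This is the baseline metric the abstract argues is insufficient; the paper's
# own metrics for "indiscriminate miscalibration" are not shown here.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE = sum_b (n_b / N) * |accuracy_b - mean confidence_b|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n_total = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.sum() / n_total * abs(acc - conf)
    return ece

# Toy illustration (made-up numbers): every prediction gets confidence 0.8,
# whether it is correct or not, and overall accuracy is also 80%.
conf = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
corr = np.array([1, 1, 1, 1, 0])
print(expected_calibration_error(conf, corr))

The toy case prints 0.0: average confidence matches accuracy, so ECE looks perfect even though correct and incorrect predictions share the same confidence, which is exactly the failure mode the abstract calls indiscriminate miscalibration.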
arxiv-664911
2410.02212
Hard Negative Sample Mining for Whole Slide Image Classification
<|reference_start|>Hard Negative Sample Mining for Whole Slide Image Classification: Weakly supervised whole slide image (WSI) classification is challenging due to the lack of patch-level labels and high computational costs. State-of-the-art methods use self-supervised patch-wise feature representations for multiple instance learning (MIL). Recently, methods have been proposed to fine-tune the feature representation on the downstream task using pseudo labeling, but mostly focusing on selecting high-quality positive patches. In this paper, we propose to mine hard negative samples during fine-tuning. This allows us to obtain better feature representations and reduce the training cost. Furthermore, we propose a novel patch-wise ranking loss in MIL to better exploit these hard negative samples. Experiments on two public datasets demonstrate the efficacy of these proposed ideas. Our codes are available at https://github.com/winston52/HNM-WSI<|reference_end|>
arxiv
@article{huang2024hard, title={Hard Negative Sample Mining for Whole Slide Image Classification}, author={Wentao Huang, Xiaoling Hu, Shahira Abousamra, Prateek Prasanna, Chao Chen}, journal={arXiv preprint arXiv:2410.02212}, year={2024}, archivePrefix={arXiv}, eprint={2410.02212}, primaryClass={cs.CV} }
huang2024hard
arxiv-664912
2410.02217
Stochastic Sampling from Deterministic Flow Models
<|reference_start|>Stochastic Sampling from Deterministic Flow Models: Deterministic flow models, such as rectified flows, offer a general framework for learning a deterministic transport map between two distributions, realized as the vector field for an ordinary differential equation (ODE). However, they are sensitive to model estimation and discretization errors and do not permit different samples conditioned on an intermediate state, limiting their application. We present a general method to turn the underlying ODE of such flow models into a family of stochastic differential equations (SDEs) that have the same marginal distributions. This method permits us to derive families of \emph{stochastic samplers}, for fixed (e.g., previously trained) \emph{deterministic} flow models, that continuously span the spectrum of deterministic and stochastic sampling, given access to the flow field and the score function. Our method provides additional degrees of freedom that help alleviate the issues with the deterministic samplers and empirically outperforms them. We empirically demonstrate advantages of our method on a toy Gaussian setup and on the large scale ImageNet generation task. Further, our family of stochastic samplers provide an additional knob for controlling the diversity of generation, which we qualitatively demonstrate in our experiments.<|reference_end|>
arxiv
@article{singh2024stochastic, title={Stochastic Sampling from Deterministic Flow Models}, author={Saurabh Singh, Ian Fischer}, journal={arXiv preprint arXiv:2410.02217}, year={2024}, archivePrefix={arXiv}, eprint={2410.02217}, primaryClass={cs.LG cs.CV stat.ML} }
singh2024stochastic
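The core idea in the abstract above admits a compact illustration: if v(x, t) is the velocity field of the deterministic flow and s(x, t) = grad log p_t(x) is the score, then the SDE dx = [v(x, t) + eps(t) s(x, t)] dt + sqrt(2 eps(t)) dW has the same marginals p_t for any eps(t) >= 0, with eps = 0 recovering the ODE. The sketch below checks this on a 1-D Gaussian path where both quantities are available in closed form; the particular path, the constant eps, and the Euler-Maruyama discretization are assumptions for the example, not the paper's setup.

# Illustrative sketch (not the paper's algorithm): turning a deterministic
# probability-flow ODE into an SDE with the same marginals,
#   dx = [v(x,t) + eps * score(x,t)] dt + sqrt(2*eps) dW.
# A 1-D Gaussian path is used so v and the score are known in closed form.
import numpy as np

m1, s1 = 2.0, 0.5          # target marginal N(m1, s1^2); source is N(0, 1)

def mean(t):
    return t * m1

def var(t):
    return (1 - t) ** 2 + (t * s1) ** 2

def velocity(x, t):
    # v(x,t) = m'(t) + (sigma'(t)/sigma(t)) * (x - m(t)) for a Gaussian path
    dvar = -2 * (1 - t) + 2 * t * s1 ** 2
    return m1 + 0.5 * dvar / var(t) * (x - mean(t))

def score(x, t):
    return -(x - mean(t)) / var(t)   # grad log of N(mean(t), var(t))

def sample(n=100_000, steps=500, eps=0.5, seed=0):
    """Euler-Maruyama sampler; eps = 0 recovers the deterministic ODE."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)       # draw from the source marginal N(0, 1)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        drift = velocity(x, t) + eps * score(x, t)
        x = x + drift * dt + np.sqrt(2 * eps * dt) * rng.standard_normal(n)
    return x

for eps in (0.0, 0.5):
    x = sample(eps=eps)
    print(f"eps={eps}: mean={x.mean():.3f} (target {m1}), std={x.std():.3f} (target {s1})")

Both samplers should land near the target mean and standard deviation up to discretization error, which is the marginal-preservation property the abstract relies on.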
arxiv-664913
2410.02219
Multi-modal clothing recommendation model based on large model and VAE enhancement
<|reference_start|>Multi-modal clothing recommendation model based on large model and VAE enhancement: Accurately recommending products has long been a subject requiring in-depth research. This study proposes a multimodal paradigm for clothing recommendations. Specifically, it designs a multimodal analysis method that integrates clothing description texts and images, utilizing a pre-trained large language model to deeply explore the hidden meanings of users and products. Additionally, a variational autoencoder (VAE) is employed to learn the relationship between user information and products to address the cold start problem in recommendation systems. This study also validates the significant performance advantages of this method over various recommendation system methods through extensive ablation experiments, providing crucial practical guidance for the comprehensive optimization of recommendation systems.<|reference_end|>
arxiv
@article{huang2024multi-modal, title={Multi-modal clothing recommendation model based on large model and VAE enhancement}, author={Bingjie Huang, Qingyi Lu, Shuaishuai Huang, Xue-she Wang, Haowei Yang}, journal={arXiv preprint arXiv:2410.02219}, year={2024}, archivePrefix={arXiv}, eprint={2410.02219}, primaryClass={cs.IR cs.AI} }
huang2024multi-modal
arxiv-664914
2410.02220
Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation
<|reference_start|>Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation: Large language models (LLMs) are extensively adapted for downstream applications through a process known as "customization," with fine-tuning being a common method for integrating domain-specific expertise. However, recent studies have revealed a vulnerability whereby tuning LLMs on malicious samples can compromise their robustness and amplify harmful content, an attack known as "jailbreaking." To mitigate such attacks, we propose an effective defensive framework utilizing data curation to revise commonsense texts and enhance their safety implications from the perspective of LLMs. The curated texts can mitigate jailbreaking attacks at every stage of the customization process: before customization to immunize LLMs against future jailbreak attempts, during customization to neutralize jailbreaking risks, or after customization to restore the compromised models. Since the curated data strengthens LLMs through the standard fine-tuning workflow, we do not introduce additional modules during LLM inference, thereby preserving the original customization process. Experimental results demonstrate a substantial reduction in jailbreaking effects, with up to a 100% success rate in generating responsible responses. Notably, our method is effective even with commonsense texts, which are often more readily available than safety-relevant data. With the every-stage defensive framework and supporting experimental performance, this work represents a significant advancement in mitigating jailbreaking risks and ensuring the secure customization of LLMs.<|reference_end|>
arxiv
@article{liu2024buckle, title={Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation}, author={Xiaoqun Liu, Jiacheng Liang, Luoxi Tang, Chenyu You, Muchao Ye, Zhaohan Xi}, journal={arXiv preprint arXiv:2410.02220}, year={2024}, archivePrefix={arXiv}, eprint={2410.02220}, primaryClass={cs.CR cs.AI} }
liu2024buckle
arxiv-664915
2410.02221
Capturing complex hand movements and object interactions using machine learning-powered stretchable smart textile gloves
<|reference_start|>Capturing complex hand movements and object interactions using machine learning-powered stretchable smart textile gloves: Accurate real-time tracking of dexterous hand movements and interactions has numerous applications in human-computer interaction, metaverse, robotics, and tele-health. Capturing realistic hand movements is challenging because of the large number of articulations and degrees of freedom. Here, we report accurate and dynamic tracking of articulated hand and finger movements using stretchable, washable smart gloves with embedded helical sensor yarns and inertial measurement units. The sensor yarns have a high dynamic range, responding to strains from as low as 0.005% to as high as 155%, and show stability during extensive use and washing cycles. We use multi-stage machine learning to report average joint angle estimation root mean square errors of 1.21 and 1.45 degrees for intra- and inter-subject cross-validation, respectively, matching the accuracy of costly motion capture cameras without occlusion or field-of-view limitations. We report a data augmentation technique that enhances robustness to noise and variations of sensors. We demonstrate accurate tracking of dexterous hand movements during object interactions, opening new avenues of applications including accurate typing on a mock paper keyboard, recognition of complex dynamic and static gestures adapted from American Sign Language and object identification.<|reference_end|>
arxiv
@article{tashakori2024capturing, title={Capturing complex hand movements and object interactions using machine learning-powered stretchable smart textile gloves}, author={Arvin Tashakori, Zenan Jiang, Amir Servati, Saeid Soltanian, Harishkumar Narayana, Katherine Le, Caroline Nakayama, Chieh-ling Yang, Z. Jane Wang, Janice J. Eng, Peyman Servati}, journal={Nature Machine Intelligence 6 (2024) 106-118}, year={2024}, doi={10.1038/s42256-023-00780-9}, archivePrefix={arXiv}, eprint={2410.02221}, primaryClass={cs.HC cs.CV cs.LG cs.RO eess.SP} }
tashakori2024capturing
arxiv-664916
2410.02223
EmbedLLM: Learning Compact Representations of Large Language Models
<|reference_start|>EmbedLLM: Learning Compact Representations of Large Language Models: With hundreds of thousands of language models available on Huggingface today, efficiently evaluating and utilizing these models across various downstream tasks has become increasingly critical. Many existing methods repeatedly learn task-specific representations of Large Language Models (LLMs), which leads to inefficiencies in both time and computational resources. To address this, we propose EmbedLLM, a framework designed to learn compact vector representations of LLMs that facilitate downstream applications involving many models, such as model routing. We introduce an encoder-decoder approach for learning such embeddings, along with a systematic framework to evaluate their effectiveness. Empirical results show that EmbedLLM outperforms prior methods in model routing in both accuracy and latency. Additionally, we demonstrate that our method can forecast a model's performance on multiple benchmarks, without incurring additional inference cost. Extensive probing experiments validate that the learned embeddings capture key model characteristics, e.g., whether the model is specialized for coding tasks, even without being explicitly trained on them. We open source our dataset, code and embedder to facilitate further research and application.<|reference_end|>
arxiv
@article{zhuang2024embedllm:, title={EmbedLLM: Learning Compact Representations of Large Language Models}, author={Richard Zhuang, Tianhao Wu, Zhaojin Wen, Andrew Li, Jiantao Jiao, Kannan Ramchandran}, journal={arXiv preprint arXiv:2410.02223}, year={2024}, archivePrefix={arXiv}, eprint={2410.02223}, primaryClass={cs.CL cs.AI cs.LG} }
zhuang2024embedllm:
arxiv-664917
2410.02224
Efficient Semantic Segmentation via Lightweight Multiple-Information Interaction Network
<|reference_start|>Efficient Semantic Segmentation via Lightweight Multiple-Information Interaction Network: Recently, the integration of the local modeling capabilities of Convolutional Neural Networks (CNNs) with the global dependency strengths of Transformers has created a sensation in the semantic segmentation community. However, substantial computational workloads and high hardware memory demands remain major obstacles to their further application in real-time scenarios. In this work, we propose a lightweight multiple-information interaction network for real-time semantic segmentation, called LMIINet, which effectively combines CNNs and Transformers while reducing redundant computations and memory footprint. It features Lightweight Feature Interaction Bottleneck (LFIB) modules comprising efficient convolutions that enhance context integration. Additionally, improvements are made to the Flatten Transformer by enhancing local and global feature interaction to capture detailed semantic information. The incorporation of a combination coefficient learning scheme in both LFIB and Transformer blocks facilitates improved feature interaction. Extensive experiments demonstrate that LMIINet excels in balancing accuracy and efficiency. With only 0.72M parameters and 11.74G FLOPs, LMIINet achieves 72.0% mIoU at 100 FPS on the Cityscapes test set and 69.94% mIoU at 160 FPS on the CamVid test dataset using a single RTX2080Ti GPU.<|reference_end|>
arxiv
@article{qiu2024efficient, title={Efficient Semantic Segmentation via Lightweight Multiple-Information Interaction Network}, author={Yangyang Qiu, Guoan Xu, Guangwei Gao, Zhenhua Guo, Yi Yu, and Chia-Wen Lin}, journal={arXiv preprint arXiv:2410.02224}, year={2024}, archivePrefix={arXiv}, eprint={2410.02224}, primaryClass={cs.CV} }
qiu2024efficient
arxiv-664918
2410.02226
Doubly Optimal Policy Evaluation for Reinforcement Learning
<|reference_start|>Doubly Optimal Policy Evaluation for Reinforcement Learning: Policy evaluation estimates the performance of a policy by (1) collecting data from the environment and (2) processing raw data into a meaningful estimate. Due to the sequential nature of reinforcement learning, any improper data-collecting policy or data-processing method substantially deteriorates the variance of evaluation results over long time steps. Thus, policy evaluation often suffers from large variance and requires massive data to achieve the desired accuracy. In this work, we design an optimal combination of data-collecting policy and data-processing baseline. Theoretically, we prove our doubly optimal policy evaluation method is unbiased and guaranteed to have lower variance than previously best-performing methods. Empirically, compared with previous works, we show our method reduces variance substantially and achieves superior empirical performance.<|reference_end|>
arxiv
@article{liu2024doubly, title={Doubly Optimal Policy Evaluation for Reinforcement Learning}, author={Shuze Liu, Claire Chen, Shangtong Zhang}, journal={arXiv preprint arXiv:2410.02226}, year={2024}, archivePrefix={arXiv}, eprint={2410.02226}, primaryClass={cs.LG} }
liu2024doubly
arxiv-664919
2410.02228
The Role of piracy in quantum proofs
<|reference_start|>The Role of piracy in quantum proofs: A well-known feature of quantum information is that it cannot, in general, be cloned. Recently, a number of quantum-enabled information-processing tasks have demonstrated various forms of uncloneability; among these forms, piracy is an adversarial model that gives maximal power to the adversary, in controlling both a cloning-type attack, as well as the evaluation/verification stage. Here, we initiate the study of anti-piracy proof systems, which are proof systems that inherently prevent piracy attacks. We define anti-piracy proof systems, demonstrate such a proof system for an oracle problem, and also describe a candidate anti-piracy proof system for NP. We also study quantum proof systems that are cloneable and settle the famous QMA vs. QMA(2) debate in this setting. Lastly, we discuss how one can approach the QMA vs. QCMA question, by studying its cloneable variants.<|reference_end|>
arxiv
@article{broadbent2024the, title={The Role of piracy in quantum proofs}, author={Anne Broadbent and Alex B. Grilo and Supartha Podder and Jamie Sikora}, journal={arXiv preprint arXiv:2410.02228}, year={2024}, archivePrefix={arXiv}, eprint={2410.02228}, primaryClass={quant-ph cs.CC cs.CR} }
broadbent2024the
arxiv-664920
2410.02229
CodePMP: Scalable Preference Model Pretraining for Large Language Model Reasoning
<|reference_start|>CodePMP: Scalable Preference Model Pretraining for Large Language Model Reasoning: Large language models (LLMs) have made significant progress in natural language understanding and generation, driven by scalable pretraining and advanced finetuning. However, enhancing reasoning abilities in LLMs, particularly via reinforcement learning from human feedback (RLHF), remains challenging due to the scarcity of high-quality preference data, which is labor-intensive to annotate and crucial for reward model (RM) finetuning. To alleviate this issue, we introduce CodePMP, a scalable preference model pretraining (PMP) pipeline that utilizes a large corpus of synthesized code-preference pairs from publicly available high-quality source code. CodePMP improves RM finetuning efficiency by pretraining preference models on large-scale synthesized code-preference pairs. We evaluate CodePMP on mathematical reasoning tasks (GSM8K, MATH) and logical reasoning tasks (ReClor, LogiQA2.0), consistently showing significant improvements in reasoning performance of LLMs and highlighting the importance of scalable preference model pretraining for efficient reward modeling.<|reference_end|>
arxiv
@article{yu2024codepmp:, title={CodePMP: Scalable Preference Model Pretraining for Large Language Model Reasoning}, author={Huimu Yu, Xing Wu, Weidong Yin, Debing Zhang, Songlin Hu}, journal={arXiv preprint arXiv:2410.02229}, year={2024}, archivePrefix={arXiv}, eprint={2410.02229}, primaryClass={cs.AI cs.CL} }
yu2024codepmp:
arxiv-664921
2410.02230
Mitigating Downstream Model Risks via Model Provenance
<|reference_start|>Mitigating Downstream Model Risks via Model Provenance: Research and industry are rapidly advancing the innovation and adoption of foundation model-based systems, yet the tools for managing these models have not kept pace. Understanding the provenance and lineage of models is critical for researchers, industry, regulators, and public trust. While model cards and system cards were designed to provide transparency, they fall short in key areas: tracing model genealogy, enabling machine readability, offering reliable centralized management systems, and fostering consistent creation incentives. This challenge mirrors issues in software supply chain security, but AI/ML remains at an earlier stage of maturity. Addressing these gaps requires industry-standard tooling that can be adopted by foundation model publishers, open-source model innovators, and major distribution platforms. We propose a machine-readable model specification format to simplify the creation of model records, thereby reducing error-prone human effort, notably when a new model inherits most of its design from a foundation model. Our solution explicitly traces relationships between upstream and downstream models, enhancing transparency and traceability across the model lifecycle. To facilitate adoption, we introduce the unified model record (UMR) repository, a semantically versioned system that automates the publication of model records to multiple formats (PDF, HTML, LaTeX) and provides a hosted web interface (https://modelrecord.com/). This proof of concept aims to set a new standard for managing foundation models, bridging the gap between innovation and responsible model management.<|reference_end|>
arxiv
@article{wang2024mitigating, title={Mitigating Downstream Model Risks via Model Provenance}, author={Keyu Wang, Abdullah Norozi Iranzad, Scott Schaffter, Doina Precup, Jonathan Lebensold}, journal={arXiv preprint arXiv:2410.02230}, year={2024}, archivePrefix={arXiv}, eprint={2410.02230}, primaryClass={cs.LG cs.CR} }
wang2024mitigating
arxiv-664922
2410.02231
SEAL: SEmantic-Augmented Imitation Learning via Language Model
<|reference_start|>SEAL: SEmantic-Augmented Imitation Learning via Language Model: Hierarchical Imitation Learning (HIL) is a promising approach for tackling long-horizon decision-making tasks, but it remains challenging due to the lack of detailed supervisory labels for sub-goal learning and its reliance on hundreds to thousands of expert demonstrations. In this work, we introduce SEAL, a novel framework that leverages the powerful semantic and world knowledge of Large Language Models (LLMs) both to specify the sub-goal space and to pre-label states with semantically meaningful sub-goal representations without prior knowledge of task hierarchies. SEAL employs a dual-encoder structure, combining supervised LLM-guided sub-goal learning with unsupervised Vector Quantization (VQ) for more robust sub-goal representations. Additionally, SEAL incorporates a transition-augmented low-level planner for improved adaptation to sub-goal transitions. Our experiments demonstrate that SEAL outperforms state-of-the-art HIL methods and LLM-based planning approaches, particularly in settings with small expert datasets and complex long-horizon tasks.<|reference_end|>
arxiv
@article{gu2024seal:, title={SEAL: SEmantic-Augmented Imitation Learning via Language Model}, author={Chengyang Gu, Yuxin Pan, Haotian Bai, Hui Xiong, Yize Chen}, journal={arXiv preprint arXiv:2410.02231}, year={2024}, archivePrefix={arXiv}, eprint={2410.02231}, primaryClass={cs.AI cs.LG cs.SY eess.SY} }
gu2024seal:
arxiv-664923
2410.02232
The Long Way to Deforestation (Technical Report): A Type Inference and Elaboration Technique for Removing Intermediate Data Structures
<|reference_start|>The Long Way to Deforestation (Technical Report): A Type Inference and Elaboration Technique for Removing Intermediate Data Structures: Deforestation is a compiler optimization that removes intermediate data structure allocations from functional programs to improve their efficiency. This is an old idea, but previous approaches have proved limited or impractical: they either only worked on compositions of predefined combinators (shortcut fusion), or involved the aggressive unfolding of recursive definitions until a depth limit was reached or a reoccurring pattern was found to tie the recursive knot, resulting in impractical algorithmic complexity and large amounts of code duplication. We present Lumberhack, a general-purpose deforestation approach for purely functional call-by-value programs. Lumberhack uses subtype inference to reason about data structure production and consumption and uses an elaboration pass to fuse the corresponding recursive definitions. It fuses large classes of mutually recursive definitions while avoiding much of the unproductive (and sometimes counter-productive) code duplication inherent in previous approaches. We prove the soundness of Lumberhack using logical relations and experimentally demonstrate significant speedups in the standard nofib benchmark suite.<|reference_end|>
arxiv
@article{chen2024the, title={The Long Way to Deforestation (Technical Report): A Type Inference and Elaboration Technique for Removing Intermediate Data Structures}, author={Yijia Chen and Lionel Parreaux}, journal={Proceedings of the ACM on Programming Languages, Volume 8, Issue ICFP (August 2024)}, year={2024}, doi={10.1145/3674634}, archivePrefix={arXiv}, eprint={2410.02232}, primaryClass={cs.PL} }
chen2024the
arxiv-664924
2410.02234
GORAM: Graph-oriented ORAM for Efficient Ego-centric Queries on Federated Graphs
<|reference_start|>GORAM: Graph-oriented ORAM for Efficient Ego-centric Queries on Federated Graphs: Ego-centric queries, focusing on a target vertex and its direct neighbors, are essential for various applications. Enabling such queries on graphs owned by mutually distrustful data providers, without breaching privacy, holds promise for more comprehensive results. In this paper, we propose GORAM, a graph-oriented data structure that enables efficient ego-centric queries on federated graphs with strong privacy guarantees. GORAM is built upon secure multi-party computation (MPC) and ensures that no single party can learn any sensitive information about the graph data or the querying keys during the process. However, achieving practical performance while guaranteeing privacy presents a challenge. To overcome this, GORAM is designed to partition the federated graph and construct an Oblivious RAM (ORAM)-inspired index atop these partitions. This design enables each ego-centric query to process only a single partition, which can be accessed quickly and securely. To evaluate the performance of GORAM, we developed a prototype querying engine on a real-world MPC framework. We conduct a comprehensive evaluation with five commonly used queries on both synthetic and real-world graphs. Our evaluation shows that all benchmark queries can be completed in just 58.1 milliseconds to 35.7 seconds, even on graphs with up to 41.6 million vertices and 1.4 billion edges. To the best of our knowledge, this represents the first instance of processing billion-scale graphs with practical performance on MPC.<|reference_end|>
arxiv
@article{fan2024goram:, title={GORAM: Graph-oriented ORAM for Efficient Ego-centric Queries on Federated Graphs}, author={Xiaoyu Fan, Kun Chen, Jiping Yu, Xiaowei Zhu, Yunyi Chen, Huanchen Zhang and Wei Xu}, journal={arXiv preprint arXiv:2410.02234}, year={2024}, archivePrefix={arXiv}, eprint={2410.02234}, primaryClass={cs.DB cs.DS} }
fan2024goram:
arxiv-664925
2410.02236
C-MORL: Multi-Objective Reinforcement Learning through Efficient Discovery of Pareto Front
<|reference_start|>C-MORL: Multi-Objective Reinforcement Learning through Efficient Discovery of Pareto Front: Multi-objective reinforcement learning (MORL) excels at handling rapidly changing preferences in tasks that involve multiple criteria, even for unseen preferences. However, previously dominant MORL methods typically generate a fixed policy set or preference-conditioned policy through multiple training iterations exclusively for sampled preference vectors, and cannot ensure the efficient discovery of the Pareto front. Furthermore, integrating preferences into the input of policy or value functions presents scalability challenges, in particular as the dimensions of the state and preference spaces grow, which can complicate the learning process and hinder the algorithm's performance on more complex tasks. To address these issues, we propose a two-stage Pareto front discovery algorithm called Constrained MORL (C-MORL), which serves as a seamless bridge between constrained policy optimization and MORL. Concretely, a set of policies is trained in parallel in the initialization stage, with each optimized towards its individual preference over the multiple objectives. Then, to fill the remaining vacancies in the Pareto front, the constrained optimization steps are employed to maximize one objective while constraining the other objectives to exceed a predefined threshold. Empirically, compared to recent advancements in MORL methods, our algorithm achieves more consistent and superior performances in terms of hypervolume, expected utility, and sparsity on both discrete and continuous control tasks, especially with numerous objectives (up to nine objectives in our experiments).<|reference_end|>
arxiv
@article{liu2024c-morl:, title={C-MORL: Multi-Objective Reinforcement Learning through Efficient Discovery of Pareto Front}, author={Ruohong Liu, Yuxin Pan, Linjie Xu, Lei Song, Pengcheng You, Yize Chen, Jiang Bian}, journal={arXiv preprint arXiv:2410.02236}, year={2024}, archivePrefix={arXiv}, eprint={2410.02236}, primaryClass={cs.LG cs.SY eess.SY} }
liu2024c-morl:
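The constrained optimization step described in the abstract above (maximize one objective while constraining the others to exceed predefined thresholds) matches a standard epsilon-constraint scalarization; a generic statement, which may differ from the paper's exact formulation, is:

% Generic epsilon-constraint scalarization (not necessarily the paper's exact form).
\[
  \max_{\pi}\; J_i(\pi)
  \quad\text{subject to}\quad
  J_j(\pi) \ge \epsilon_j \quad \text{for all } j \ne i,
\]
% where J_1, ..., J_m are the expected returns of the m objectives and the
% thresholds epsilon_j are the predefined values mentioned in the abstract;
% sweeping the choice of i and the thresholds fills in points of the Pareto front.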
arxiv-664926
2410.02237
Key-Grid: Unsupervised 3D Keypoints Detection using Grid Heatmap Features
<|reference_start|>Key-Grid: Unsupervised 3D Keypoints Detection using Grid Heatmap Features: Detecting 3D keypoints with semantic consistency is widely used in many scenarios such as pose estimation, shape registration and robotics. Currently, most unsupervised 3D keypoint detection methods focus on rigid-body objects. However, when faced with deformable objects, the keypoints they identify do not preserve semantic consistency well. In this paper, we introduce Key-Grid, an innovative unsupervised keypoint detector for both rigid-body and deformable objects, built on an autoencoder framework. The encoder predicts keypoints and the decoder utilizes the generated keypoints to reconstruct the objects. Unlike previous work, we leverage the identified keypoint information to form a 3D grid feature heatmap called grid heatmap, which is used in the decoder section. Grid heatmap is a novel concept that represents the latent variables for grid points sampled uniformly in the 3D cubic space, where these variables are the shortest distance between the grid points and the skeleton connected by keypoint pairs. Meanwhile, we incorporate the information from each layer of the encoder into the decoder section. We conduct an extensive evaluation of Key-Grid on a list of benchmark datasets. Key-Grid achieves state-of-the-art performance in the semantic consistency and position accuracy of keypoints. Moreover, we demonstrate the robustness of Key-Grid to noise and downsampling. In addition, we achieve SE(3) invariance of keypoints by generalizing Key-Grid to an SE(3)-invariant backbone.<|reference_end|>
arxiv
@article{hou2024key-grid:, title={Key-Grid: Unsupervised 3D Keypoints Detection using Grid Heatmap Features}, author={Chengkai Hou and Zhengrong Xue and Bingyang Zhou and Jinghan Ke and Lin Shao and Huazhe Xu}, journal={arXiv preprint arXiv:2410.02237}, year={2024}, archivePrefix={arXiv}, eprint={2410.02237}, primaryClass={cs.CV} }
hou2024key-grid:
arxiv-664927
2410.02239
A Pilot Study of Applying Sequence-to-Sequence Voice Conversion to Evaluate the Intelligibility of L2 Speech Using a Native Speaker's Shadowings
<|reference_start|>A Pilot Study of Applying Sequence-to-Sequence Voice Conversion to Evaluate the Intelligibility of L2 Speech Using a Native Speaker's Shadowings: Utterances by L2 speakers can be unintelligible due to mispronunciation and improper prosody. In computer-aided language learning systems, textual feedback is often provided using a speech recognition engine. However, an ideal form of feedback for L2 speakers should be so fine-grained that it enables them to detect and diagnose unintelligible parts of L2 speakers' utterances. Inspired by language teachers who correct students' pronunciation through a voice-to-voice process, this pilot study utilizes a unique semi-parallel dataset composed of non-native speakers' (L2) reading aloud, shadowing of native speakers (L1) and their script-shadowing utterances. We explore the technical possibility of replicating the process of an L1 speaker's shadowing L2 speech using Voice Conversion techniques, to create a virtual shadower system. Experimental results demonstrate the feasibility of the VC system in simulating L1's shadowing behavior. The output of the virtual shadower system shows a reasonable similarity to the real L1 shadowing utterances in both linguistic and acoustic aspects.<|reference_end|>
arxiv
@article{geng2024a, title={A Pilot Study of Applying Sequence-to-Sequence Voice Conversion to Evaluate the Intelligibility of L2 Speech Using a Native Speaker's Shadowings}, author={Haopeng Geng, Daisuke Saito, Nobuaki Minematsu}, journal={arXiv preprint arXiv:2410.02239}, year={2024}, archivePrefix={arXiv}, eprint={2410.02239}, primaryClass={cs.SD cs.CL eess.AS} }
geng2024a
arxiv-664928
2410.02240
SCA: Highly Efficient Semantic-Consistent Unrestricted Adversarial Attack
<|reference_start|>SCA: Highly Efficient Semantic-Consistent Unrestricted Adversarial Attack: Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g., color or texture) to create adversarial examples that are both effective and photorealistic. Recent works have utilized the diffusion inversion process to map images into a latent space, where high-level semantics are manipulated by introducing perturbations. However, they often result in substantial semantic distortions in the denoised output and suffer from low efficiency. In this study, we propose a novel framework called Semantic-Consistent Unrestricted Adversarial Attacks (SCA), which employs an inversion method to extract edit-friendly noise maps and utilizes a Multimodal Large Language Model (MLLM) to provide semantic guidance throughout the process. Under the condition of rich semantic information provided by MLLM, we perform the DDPM denoising process of each step using a series of edit-friendly noise maps, and leverage DPM Solver++ to accelerate this process, enabling efficient sampling with semantic consistency. Compared to existing methods, our framework enables the efficient generation of adversarial examples that exhibit minimal discernible semantic changes. Consequently, we for the first time introduce Semantic-Consistent Adversarial Examples (SCAE). Extensive experiments and visualizations have demonstrated the high efficiency of SCA, particularly in being on average 12 times faster than the state-of-the-art attacks. Our code can be found at https://github.com/Pan-Zihao/SCA.<|reference_end|>
arxiv
@article{pan2024sca:, title={SCA: Highly Efficient Semantic-Consistent Unrestricted Adversarial Attack}, author={Zihao Pan, Weibin Wu, Yuhang Cao, and Zibin Zheng}, journal={arXiv preprint arXiv:2410.02240}, year={2024}, archivePrefix={arXiv}, eprint={2410.02240}, primaryClass={cs.CV cs.AI} }
pan2024sca:
arxiv-664929
2410.02241
MIGA: Mixture-of-Experts with Group Aggregation for Stock Market Prediction
<|reference_start|>MIGA: Mixture-of-Experts with Group Aggregation for Stock Market Prediction: Stock market prediction has remained an extremely challenging problem for many decades owing to its inherently high volatility and low information-to-noise ratio. Existing solutions based on machine learning or deep learning demonstrate superior performance by employing a single model trained on the entire stock dataset to generate predictions across all types of stocks. However, due to the significant variations in stock styles and market trends, a single end-to-end model struggles to fully capture the differences in these stylized stock features, leading to relatively inaccurate predictions for all types of stocks. In this paper, we present MIGA, a novel Mixture-of-Experts with Group Aggregation framework designed to generate specialized predictions for stocks with different styles by dynamically switching between distinct style experts. To promote collaboration among different experts in MIGA, we propose a novel inner group attention architecture, enabling experts within the same group to share information and thereby enhancing the overall performance of all experts. As a result, MIGA significantly outperforms other end-to-end models on three Chinese Stock Index benchmarks including CSI300, CSI500, and CSI1000. Notably, MIGA-Conv reaches a 24% excess annual return on the CSI300 benchmark, surpassing the previous state-of-the-art model by 8% absolute. Furthermore, we conduct a comprehensive analysis of mixture of experts for stock market prediction, providing valuable insights for future research.<|reference_end|>
arxiv
@article{yu2024miga:, title={MIGA: Mixture-of-Experts with Group Aggregation for Stock Market Prediction}, author={Zhaojian Yu, Yinghao Wu, Genesis Wang, Heming Weng}, journal={arXiv preprint arXiv:2410.02241}, year={2024}, archivePrefix={arXiv}, eprint={2410.02241}, primaryClass={cs.CE} }
yu2024miga:
arxiv-664930
2410.02242
Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis
<|reference_start|>Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis: As a neural network's depth increases, it can achieve strong generalization performance. Training, however, becomes challenging due to gradient issues. Theoretical research and various methods have been introduced to address these issues. However, weight initialization methods that can be effectively applied to tanh neural networks of varying sizes remain underexplored. This paper presents a novel weight initialization method for Feedforward Neural Networks with the tanh activation function. Based on an analysis of the fixed points of the function $\tanh(ax)$, our proposed method aims to determine values of $a$ that prevent the saturation of activations. A series of experiments on various classification datasets demonstrate that the proposed method is more robust to network size variations than the existing method. Furthermore, when applied to Physics-Informed Neural Networks, the method exhibits faster convergence and robustness to variations in network size compared to Xavier initialization on Partial Differential Equation problems.<|reference_end|>
arxiv
@article{lee2024robust, title={Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis}, author={Hyunwoo Lee, Hayoung Choi, Hyunju Kim}, journal={arXiv preprint arXiv:2410.02242}, year={2024}, archivePrefix={arXiv}, eprint={2410.02242}, primaryClass={cs.LG cs.AI} }
lee2024robust
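The abstract above bases its initialization on the fixed points of tanh(ax). The sketch below only illustrates that fixed-point behaviour (it is not the paper's initialization rule): for a <= 1 the iteration x <- tanh(ax) collapses toward the unique fixed point 0, while for a > 1 it settles at a nonzero fixed point, the regime associated with saturating activations.

# Illustration of the fixed points of x -> tanh(a*x); not the paper's
# initialization rule, only the behaviour its analysis reasons about.
import numpy as np

def fixed_point(a, x0=0.75, iters=10_000):
    """Iterate x <- tanh(a*x) toward a fixed point."""
    x = x0
    for _ in range(iters):
        x = np.tanh(a * x)
    return x

for a in (0.5, 1.0, 1.5, 2.0, 4.0):
    x_star = fixed_point(a)
    slope = a * (1 - x_star ** 2)      # d/dx tanh(a*x) evaluated at x*
    print(f"a={a:>3}: fixed point ~ {x_star: .4f}, slope at fixed point ~ {slope: .4f}")
# For a <= 1 the only fixed point is 0 (the a = 1 case approaches it slowly);
# for a > 1 a pair of nonzero attracting fixed points +/- x* appears.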
arxiv-664931
2410.02243
Approximate Degrees of Multisymmetric Properties with Application to Quantum Claw Detection
<|reference_start|>Approximate Degrees of Multisymmetric Properties with Application to Quantum Claw Detection: The claw problem is central to both theoretical computer science and cryptography. The optimal quantum query complexity of the problem is known to be $\Omega\left(\sqrt{G}+(FG)^{1/3} \right)$ for input functions $f\colon [F]\to Z$ and $g\colon [G]\to Z$. However, the lower bound was proved when the range $Z$ is sufficiently large (i.e., $|{Z}|=\Omega(FG)$). The current paper proves that the lower bound holds for every smaller range $Z$ with $|{Z}|\ge F+G$. This implies that $\Omega\left(\sqrt{G}+(FG)^{1/3} \right)$ is tight for every such range. In addition, the lower bound $\Omega\left(\sqrt{G}+F^{1/3}G^{1/6}M^{1/6}\right)$ is provided for even smaller ranges $Z=[M]$ with every $M\in [2,F+G]$ by a reduction from the claw problem with $|{Z}|= F+G$. The proof technique is general enough to apply to any $k$-symmetric property (e.g., the $k$-claw problem), i.e., the Boolean function $\Phi$ on the set of $k$ functions with different-size domains and a common range such that $\Phi$ is invariant under the permutations over each domain and the permutations over the range. More concretely, it generalizes Ambainis's argument [Theory of Computing, 1(1):37-46] to the multiple-function case by using the notion of multisymmetric polynomials.<|reference_end|>
arxiv
@article{tani2024approximate, title={Approximate Degrees of Multisymmetric Properties with Application to Quantum Claw Detection}, author={Seiichiro Tani}, journal={arXiv preprint arXiv:2410.02243}, year={2024}, archivePrefix={arXiv}, eprint={2410.02243}, primaryClass={quant-ph cs.CC} }
tani2024approximate
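For context, the claw problem referenced in the abstract above has the following standard statement (a textbook definition, not a contribution of the cited paper):

% The claw problem: given oracle access to f : [F] -> Z and g : [G] -> Z,
% decide whether a claw exists, i.e.,
\[
  \exists\,(x, y) \in [F] \times [G] \ \text{such that}\ f(x) = g(y).
\]
% The bound $\Omega\!\left(\sqrt{G} + (FG)^{1/3}\right)$ quoted in the abstract
% counts quantum queries to f and g needed to decide this property.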
arxiv-664932
2410.02244
Visual Prompting in LLMs for Enhancing Emotion Recognition
<|reference_start|>Visual Prompting in LLMs for Enhancing Emotion Recognition: Vision Large Language Models (VLLMs) are transforming the intersection of computer vision and natural language processing. Nonetheless, the potential of using visual prompts for emotion recognition in these models remains largely unexplored and untapped. Traditional methods in VLLMs struggle with spatial localization and often discard valuable global context. To address this problem, we propose a Set-of-Vision prompting (SoV) approach that enhances zero-shot emotion recognition by using spatial information, such as bounding boxes and facial landmarks, to mark targets precisely. SoV improves accuracy in face count and emotion categorization while preserving the enriched image context. Through a battery of experimentation and analysis of recent commercial or open-source VLLMs, we evaluate the SoV model's ability to comprehend facial expressions in natural environments. Our findings demonstrate the effectiveness of integrating spatial visual prompts into VLLMs for improving emotion recognition performance.<|reference_end|>
arxiv
@article{zhang2024visual, title={Visual Prompting in LLMs for Enhancing Emotion Recognition}, author={Qixuan Zhang, Zhifeng Wang, Dylan Zhang, Wenjia Niu, Sabrina Caldwell, Tom Gedeon, Yang Liu, Zhenyue Qin}, journal={arXiv preprint arXiv:2410.02244}, year={2024}, archivePrefix={arXiv}, eprint={2410.02244}, primaryClass={cs.CV} }
zhang2024visual
arxiv-664933
2410.02246
PFGuard: A Generative Framework with Privacy and Fairness Safeguards
<|reference_start|>PFGuard: A Generative Framework with Privacy and Fairness Safeguards: Generative models must ensure both privacy and fairness for Trustworthy AI. While these goals have been pursued separately, recent studies propose to combine existing privacy and fairness techniques to achieve both goals. However, naively combining these techniques can be insufficient due to privacy-fairness conflicts, where a sample in a minority group may be amplified for fairness, only to be suppressed for privacy. We demonstrate how these conflicts lead to adverse effects, such as privacy violations and unexpected fairness-utility tradeoffs. To mitigate these risks, we propose PFGuard, a generative framework with privacy and fairness safeguards, which simultaneously addresses privacy, fairness, and utility. By using an ensemble of multiple teacher models, PFGuard balances privacy-fairness conflicts between fair and private training stages and achieves high utility based on ensemble learning. Extensive experiments show that PFGuard successfully generates synthetic data for high-dimensional datasets while providing both fairness convergence and strict DP guarantees - the first of its kind to our knowledge.<|reference_end|>
arxiv
@article{kim2024pfguard:, title={PFGuard: A Generative Framework with Privacy and Fairness Safeguards}, author={Soyeon Kim, Yuji Roh, Geon Heo, Steven Euijong Whang}, journal={arXiv preprint arXiv:2410.02246}, year={2024}, archivePrefix={arXiv}, eprint={2410.02246}, primaryClass={cs.LG cs.AI} }
kim2024pfguard:
arxiv-664934
2410.02247
Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization
<|reference_start|>Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization: Large Language Models (LLMs), built on Transformer architectures, exhibit remarkable generalization across a wide range of tasks. However, fine-tuning these models for specific tasks remains resource-intensive due to their extensive parameterization. In this paper, we investigate two remarkable phenomena observed during the fine-tuning of LLMs, particularly focusing on the attention mechanism: (1) Different Impact, optimizing the $\mathbf{W}_v$ matrix significantly improves performance over optimizing the $\mathbf{W}_k$ matrix. Fine-tuning only the $\mathbf{W}_q$ and $\mathbf{W}_v$ matrices is computationally efficient, delivering results that are comparable to, or even better than, fine-tuning all three matrices $\mathbf{W}_q$, $\mathbf{W}_k$, and $\mathbf{W}_v$. (2) Efficient Convergence, employing distinct learning rates for these matrices is crucial for optimal performance, with a higher learning rate for the $\mathbf{W}_v$ matrix expediting convergence. However, theoretical analyses of these phenomena are still relatively limited. We present a theoretical analysis of these phenomena from two perspectives: (i) Generalization, where we demonstrate that fine-tuning only $\mathbf{W}_q$ and $\mathbf{W}_v$ improves generalization bounds and enhances memory efficiency, and (ii) Optimization, where we emphasize that the feature learning of the attention mechanism is efficient, particularly when using distinct learning rates for the matrices, which leads to more effective fine-tuning. Building on these insights, we propose a new strategy that improves fine-tuning efficiency in terms of both storage and time. Experimental results on benchmark datasets validate the effectiveness of this approach, supporting our theoretical findings. Our analysis lays the theoretical groundwork for configuring and improving lightweight algorithms in LLM fine-tuning.<|reference_end|>
arxiv
@article{yao2024theoretical, title={Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization}, author={Xinhao Yao, Hongjin Qian, Xiaolin Hu, Gengze Xu, Yong Liu}, journal={arXiv preprint arXiv:2410.02247}, year={2024}, archivePrefix={arXiv}, eprint={2410.02247}, primaryClass={cs.LG} }
yao2024theoretical
arxiv-664935
2410.02249
Spiking Neural Network as Adaptive Event Stream Slicer
<|reference_start|>Spiking Neural Network as Adaptive Event Stream Slicer: Event-based cameras are attracting significant interest as they provide rich edge information, high dynamic range, and high temporal resolution. Many state-of-the-art event-based algorithms rely on splitting the events into fixed groups, resulting in the omission of crucial temporal information, particularly when dealing with diverse motion scenarios (e.g., high/low speed). In this work, we propose SpikeSlicer, a newly designed plug-and-play event processing method capable of splitting event streams adaptively. SpikeSlicer utilizes a lightweight (0.41M) and low-energy spiking neural network (SNN) to trigger event slicing. To guide the SNN to fire spikes at optimal time steps, we propose the Spiking Position-aware Loss (SPA-Loss) to modulate the neuron's state. Additionally, we develop a Feedback-Update training strategy that refines the slicing decisions using feedback from the downstream artificial neural network (ANN). Extensive experiments demonstrate that our method yields significant performance improvements in event-based object tracking and recognition. Notably, SpikeSlicer provides a brand-new SNN-ANN cooperation paradigm, where the SNN acts as an efficient, low-energy data processor to assist the ANN in improving downstream performance, injecting new perspectives and potential avenues of exploration.<|reference_end|>
arxiv
@article{cao2024spiking, title={Spiking Neural Network as Adaptive Event Stream Slicer}, author={Jiahang Cao, Mingyuan Sun, Ziqing Wang, Hao Cheng, Qiang Zhang, Shibo Zhou, Renjing Xu}, journal={arXiv preprint arXiv:2410.02249}, year={2024}, archivePrefix={arXiv}, eprint={2410.02249}, primaryClass={cs.CV cs.NE} }
cao2024spiking
arxiv-664936
2410.02250
Probabilistic road classification in historical maps using synthetic data and deep learning
<|reference_start|>Probabilistic road classification in historical maps using synthetic data and deep learning: Historical maps are invaluable for analyzing long-term changes in transportation and spatial development, offering a rich source of data for evolutionary studies. However, digitizing and classifying road networks from these maps is often expensive and time-consuming, limiting their widespread use. Recent advancements in deep learning have made automatic road extraction from historical maps feasible, yet these methods typically require large amounts of labeled training data. To address this challenge, we introduce a novel framework that integrates deep learning with geoinformation, computer-based painting, and image processing methodologies. This framework enables the extraction and classification of roads from historical maps using only road geometries without needing road class labels for training. The process begins with training of a binary segmentation model to extract road geometries, followed by morphological operations, skeletonization, vectorization, and filtering algorithms. Synthetic training data is then generated by a painting function that artificially re-paints road segments using predefined symbology for road classes. Using this synthetic data, a deep ensemble is trained to generate pixel-wise probabilities for road classes to mitigate distribution shift. These predictions are then discretized along the extracted road geometries. Subsequently, further processing is employed to classify entire roads, enabling the identification of potential changes in road classes and resulting in a labeled road class dataset. Our method achieved completeness and correctness scores of over 94% and 92%, respectively, for road class 2, the most prevalent class in the two Siegfried Map sheets from Switzerland used for testing. This research offers a powerful tool for urban planning and transportation decision-making by efficiently extracting and classifying roads from historical maps.<|reference_end|>
arxiv
@article{mühlematter2024probabilistic, title={Probabilistic road classification in historical maps using synthetic data and deep learning}, author={Dominik J. Mühlematter, Sebastian Schweizer, Chenjing Jiao, Xue Xia, Magnus Heitzler, Lorenz Hurni}, journal={arXiv preprint arXiv:2410.02250}, year={2024}, archivePrefix={arXiv}, eprint={2410.02250}, primaryClass={cs.CV cs.LG} }
mühlematter2024probabilistic
arxiv-664937
2410.02253
End-to-end Driving in High-Interaction Traffic Scenarios with Reinforcement Learning
<|reference_start|>End-to-end Driving in High-Interaction Traffic Scenarios with Reinforcement Learning: Dynamic and interactive traffic scenarios pose significant challenges for autonomous driving systems. Reinforcement learning (RL) offers a promising approach by enabling the exploration of driving policies beyond the constraints of pre-collected datasets and predefined conditions, particularly in complex environments. However, a critical challenge lies in effectively extracting spatial and temporal features from sequences of high-dimensional, multi-modal observations while minimizing the accumulation of errors over time. Additionally, efficiently guiding large-scale RL models to converge on optimal driving policies without frequent failures during the training process remains tricky. We propose an end-to-end model-based RL algorithm named Ramble to address these issues. Ramble processes multi-view RGB images and LiDAR point clouds into low-dimensional latent features to capture the context of traffic scenarios at each time step. A transformer-based architecture is then employed to model temporal dependencies and predict future states. By learning a dynamics model of the environment, Ramble can foresee upcoming traffic events and make more informed, strategic decisions. Our implementation demonstrates that prior experience in feature extraction and decision-making plays a pivotal role in accelerating the convergence of RL models toward optimal driving policies. Ramble achieves state-of-the-art performance regarding route completion rate and driving score on the CARLA Leaderboard 2.0, showcasing its effectiveness in managing complex and dynamic traffic situations.<|reference_end|>
arxiv
@article{li2024end-to-end, title={End-to-end Driving in High-Interaction Traffic Scenarios with Reinforcement Learning}, author={Yueyuan Li, Mingyang Jiang, Songan Zhang, Wei Yuan, Chunxiang Wang, and Ming Yang}, journal={arXiv preprint arXiv:2410.02253}, year={2024}, archivePrefix={arXiv}, eprint={2410.02253}, primaryClass={cs.AI cs.LG cs.RO} }
li2024end-to-end
arxiv-664938
2410.02254
MTDNS: Moving Target Defense for Resilient DNS Infrastructure
<|reference_start|>MTDNS: Moving Target Defense for Resilient DNS Infrastructure: One of the most critical components of the Internet that an attacker could exploit is the DNS (Domain Name System) protocol and infrastructure. Researchers have been constantly developing methods to detect and defend against attacks on DNS, specifically DNS flooding attacks. However, most solutions discard packets for defensive approaches, which can cause legitimate packets to be dropped, making them highly dependent on detection strategies. In this paper, we propose MTDNS, a resilient MTD-based approach that employs Moving Target Defense techniques through Software Defined Networking (SDN) switches to redirect traffic to alternate DNS servers that are dynamically created and run under the Network Function Virtualization (NFV) framework. The proposed approach is implemented in a testbed environment by running our DNS servers as separate Virtual Network Functions, NFV Manager, SDN switches, and an SDN Controller. The experimental results show that the MTDNS approach achieves a much higher success rate in resolving DNS queries and significantly reduces average latency even if there is a DNS flooding attack.<|reference_end|>
arxiv
@article{aydeger2024mtdns:, title={MTDNS: Moving Target Defense for Resilient DNS Infrastructure}, author={Abdullah Aydeger, Pei Zhou, Sanzida Hoque, Marco Carvalho, Engin Zeydan}, journal={arXiv preprint arXiv:2410.02254}, year={2024}, archivePrefix={arXiv}, eprint={2410.02254}, primaryClass={cs.NI cs.CR cs.DC cs.ET} }
aydeger2024mtdns:
arxiv-664939
2410.02258
Physics-Constrained Taylor Neural Networks for Learning and Control of Dynamical Systems
<|reference_start|>Physics-Constrained Taylor Neural Networks for Learning and Control of Dynamical Systems: Data-driven approaches are increasingly popular for identifying dynamical systems due to improved accuracy and availability of sensor data. However, relying solely on data for identification does not guarantee that the identified systems will maintain their physical properties or that the predicted models will generalize well. In this paper, we propose a novel method for system identification by integrating a neural network as the first-order derivative of a Taylor series expansion instead of learning a dynamical function directly. This approach, called Monotonic Taylor Neural Networks (MTNN), aims to ensure monotonic properties of dynamical systems by constraining the output of the neural network model to be either always non-positive or always non-negative. These conditions are constructed in two ways: by designing a new neural network architecture or by regularizing the loss function for training. The proposed method demonstrates better performance compared to methods without constraints on the monotonic properties of the systems when tested with experimental data from two real-world systems, including HVAC and TCLab. Furthermore, MTNN shows good performance in an actual control application when using a model predictive controller for a nonlinear MIMO system, illustrating the practical applications of this method.<|reference_end|>
arxiv
@article{nguyen2024physics-constrained, title={Physics-Constrained Taylor Neural Networks for Learning and Control of Dynamical Systems}, author={Nam T. Nguyen and Juan C. Tique}, journal={arXiv preprint arXiv:2410.02258}, year={2024}, archivePrefix={arXiv}, eprint={2410.02258}, primaryClass={eess.SY cs.SY} }
nguyen2024physics-constrained
arxiv-664940
2410.02259
Alignment of Cybersecurity Incident Prioritisation with Incident Response Management Maturity Capabilities
<|reference_start|>Alignment of Cybersecurity Incident Prioritisation with Incident Response Management Maturity Capabilities: The increasing frequency and sophistication of cybersecurity incidents pose significant challenges to organisations, highlighting the critical need for robust incident response capabilities. This paper explores a possible utilisation of IR CMMs assessments to systematically prioritise incidents based on their impact, severity, and the incident response capabilities of an organisation in specific areas associated with human and organisational factors. The findings reveal common weaknesses in incident response, such as inadequate training and poor communication, and highlight best practices, including regular training programs, clear communication protocols, and well-documented response procedures. The analysis also emphasises the importance of organisational culture in enhancing incident response capabilities. By addressing the gap in understanding how the output of IRM assessments can be immediately utilised to prioritise high-risk incidents, this paper contributes valuable insights to academia and practice, offering a structured approach to enhancing organisational resilience against cybersecurity threats.<|reference_end|>
arxiv
@article{gulay2024alignment, title={Alignment of Cybersecurity Incident Prioritisation with Incident Response Management Maturity Capabilities}, author={Abdulaziz Gulay, Leandros Maglaras}, journal={arXiv preprint arXiv:2410.02259}, year={2024}, archivePrefix={arXiv}, eprint={2410.02259}, primaryClass={cs.CR} }
gulay2024alignment
arxiv-664941
2410.02260
FedScalar: A Communication efficient Federated Learning
<|reference_start|>FedScalar: A Communication efficient Federated Learning: Federated learning (FL) has gained considerable popularity for distributed machine learning due to its ability to preserve the privacy of participating agents by eliminating the need for data aggregation. Nevertheless, communication costs between agents and the central server in FL are substantial in large-scale problems and remain a limiting factor for this algorithm. This paper introduces an innovative algorithm, called \emph{FedScalar}, within the federated learning framework aimed at improving communication efficiency. Unlike traditional FL methods that require agents to send high-dimensional vectors to the server, \emph{FedScalar} enables agents to communicate updates using a single scalar. Each agent encodes its updated model parameters into a scalar through the inner product between its local update difference and a random vector, which is then transmitted to the server. The server decodes this information by projecting the averaged scalar values onto the random vector. Our method thereby significantly reduces communication overhead. Technically, we demonstrate that the proposed algorithm achieves a convergence rate of $O(1/\sqrt{K})$ to a stationary point for smooth, non-convex loss functions. Additionally, our analysis shows that altering the underlying distribution of the random vector generated by the server can reduce the variance during the aggregation step of the algorithm. Finally, we validate the performance and communication efficiency of our algorithm with numerical simulations.<|reference_end|>
arxiv
@article{rostami2024fedscalar:, title={FedScalar: A Communication efficient Federated Learning}, author={M. Rostami, S. S. Kia}, journal={arXiv preprint arXiv:2410.02260}, year={2024}, archivePrefix={arXiv}, eprint={2410.02260}, primaryClass={cs.LG} }
rostami2024fedscalar:
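The rostami2024fedscalar: entry above describes agents encoding their update difference as a single scalar via an inner product with a random vector, with the server decoding by projecting the averaged scalar back onto that vector. The sketch below illustrates one such round under assumptions of my own (a standard Gaussian direction and normalization by the direction's squared norm); the paper's exact distribution and scaling may differ.

```python
import numpy as np

def fedscalar_round(global_w, local_ws, rng):
    """One illustrative aggregation round: each agent sends a single scalar.

    global_w : current server parameters, shape (d,)
    local_ws : list of locally updated parameter vectors, each shape (d,)
    rng      : numpy Generator shared (conceptually) by server and agents
    """
    d = global_w.shape[0]
    # Server broadcasts a shared random direction (Gaussian is an assumption).
    v = rng.standard_normal(d)
    # Each agent encodes its local update difference as one scalar.
    scalars = [float(np.dot(w_i - global_w, v)) for w_i in local_ws]
    # Server decodes by projecting the averaged scalar back onto the direction.
    s_mean = float(np.mean(scalars))
    return global_w + s_mean * v / float(np.dot(v, v))

# Toy usage: 3 agents, 5-dimensional model.
rng = np.random.default_rng(0)
w = np.zeros(5)
local_updates = [w + rng.normal(scale=0.1, size=5) for _ in range(3)]
print(fedscalar_round(w, local_updates, rng))
```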
arxiv-664942
2410.02264
Can Capacitive Touch Images Enhance Mobile Keyboard Decoding?
<|reference_start|>Can Capacitive Touch Images Enhance Mobile Keyboard Decoding?: Capacitive touch sensors capture the two-dimensional spatial profile (referred to as a touch heatmap) of a finger's contact with a mobile touchscreen. However, the research and design of touchscreen mobile keyboards -- one of the most speed and accuracy demanding touch interfaces -- has focused on the location of the touch centroid derived from the touch image heatmap as the input, discarding the rest of the raw spatial signals. In this paper, we investigate whether touch heatmaps can be leveraged to further improve the tap decoding accuracy for mobile touchscreen keyboards. Specifically, we developed and evaluated machine-learning models that interpret user taps by using the centroids and/or the heatmaps as their input and studied the contribution of the heatmaps to model performance. The results show that adding the heatmap into the input feature set led to 21.4% relative reduction of character error rates on average, compared to using the centroid alone. Furthermore, we conducted a live user study with the centroid-based and heatmap-based decoders built into Pixel 6 Pro devices and observed lower error rate, faster typing speed, and higher self-reported satisfaction score based on the heatmap-based decoder than the centroid-based decoder. These findings underline the promise of utilizing touch heatmaps for improving typing experience in mobile keyboards.<|reference_end|>
arxiv
@article{lertvittayakumjorn2024can, title={Can Capacitive Touch Images Enhance Mobile Keyboard Decoding?}, author={Piyawat Lertvittayakumjorn, Shanqing Cai, Billy Dou, Cedric Ho, Shumin Zhai}, journal={arXiv preprint arXiv:2410.02264}, year={2024}, doi={10.1145/3654777.3676420}, archivePrefix={arXiv}, eprint={2410.02264}, primaryClass={cs.HC cs.LG} }
lertvittayakumjorn2024can
arxiv-664943
2410.02267
Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification
<|reference_start|>Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification: Meta-learning has been widely used in recent years in areas such as few-shot learning and reinforcement learning. However, the questions of why and when it is better than other algorithms in few-shot classification remain to be explored. In this paper, we perform pre-experiments by adjusting the proportion of label noise and the degree of task heterogeneity in the dataset. We use the metric of Singular Vector Canonical Correlation Analysis to quantify the representation stability of the neural network and thus to compare the behavior of meta-learning and classical learning algorithms. We find that, benefiting from the bi-level optimization strategy, the meta-learning algorithm is more robust to label noise and heterogeneous tasks. Based on the above conclusion, we argue for a promising future for meta-learning in the unsupervised area, and thus propose DHM-UHT, a dynamic head meta-learning algorithm with unsupervised heterogeneous task construction. The core idea of DHM-UHT is to use DBSCAN and dynamic head to achieve heterogeneous task construction and meta-learn the whole process of unsupervised heterogeneous task construction. On several unsupervised zero-shot and few-shot datasets, DHM-UHT obtains state-of-the-art performance. The code is released at https://github.com/tuantuange/DHM-UHT.<|reference_end|>
arxiv
@article{guan2024unsupervised, title={Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification}, author={Yunchuan Guan, Yu Liu, Ketong Liu, Ke Zhou, Zhiqi Shen}, journal={arXiv preprint arXiv:2410.02267}, year={2024}, archivePrefix={arXiv}, eprint={2410.02267}, primaryClass={cs.LG} }
guan2024unsupervised
arxiv-664944
2410.02268
Structural-Entropy-Based Sample Selection for Efficient and Effective Learning
<|reference_start|>Structural-Entropy-Based Sample Selection for Efficient and Effective Learning: Sample selection improves the efficiency and effectiveness of machine learning models by providing informative and representative samples. Typically, samples can be modeled as a sample graph, where nodes are samples and edges represent their similarities. Most existing methods are based on local information, such as the training difficulty of samples, thereby overlooking global information, such as connectivity patterns. This oversight can result in suboptimal selection because global information is crucial for ensuring that the selected samples well represent the structural properties of the graph. To address this issue, we employ structural entropy to quantify global information and losslessly decompose it from the whole graph to individual nodes using the Shapley value. Based on the decomposition, we present $\textbf{S}$tructural-$\textbf{E}$ntropy-based sample $\textbf{S}$election ($\textbf{SES}$), a method that integrates both global and local information to select informative and representative samples. SES begins by constructing a $k$NN-graph among samples based on their similarities. It then measures sample importance by combining structural entropy (global metric) with training difficulty (local metric). Finally, SES applies importance-biased blue noise sampling to select a set of diverse and representative samples. Comprehensive experiments on three learning scenarios -- supervised learning, active learning, and continual learning -- clearly demonstrate the effectiveness of our method.<|reference_end|>
arxiv
@article{xie2024structural-entropy-based, title={Structural-Entropy-Based Sample Selection for Efficient and Effective Learning}, author={Tianchi Xie, Jiangning Zhu, Guozu Ma, Minzhi Lin, Wei Chen, Weikai Yang, Shixia Liu}, journal={arXiv preprint arXiv:2410.02268}, year={2024}, archivePrefix={arXiv}, eprint={2410.02268}, primaryClass={cs.LG cs.AI cs.CL cs.CV} }
xie2024structural-entropy-based
arxiv-664945
2410.02269
Best-of-Both-Worlds Policy Optimization for CMDPs with Bandit Feedback
<|reference_start|>Best-of-Both-Worlds Policy Optimization for CMDPs with Bandit Feedback: We study online learning in constrained Markov decision processes (CMDPs) in which rewards and constraints may be either stochastic or adversarial. In such settings, Stradi et al. (2024) proposed the first best-of-both-worlds algorithm able to seamlessly handle stochastic and adversarial constraints, achieving optimal regret and constraint violation bounds in both cases. This algorithm suffers from two major drawbacks. First, it only works under full feedback, which severely limits its applicability in practice. Moreover, it relies on optimizing over the space of occupancy measures, which requires solving convex optimization problems, a highly inefficient task. In this paper, we provide the first best-of-both-worlds algorithm for CMDPs with bandit feedback. Specifically, when the constraints are stochastic, the algorithm achieves $\widetilde{\mathcal{O}}(\sqrt{T})$ regret and constraint violation, while, when they are adversarial, it attains $\widetilde{\mathcal{O}}(\sqrt{T})$ constraint violation and a tight fraction of the optimal reward. Moreover, our algorithm is based on a policy optimization approach, which is much more efficient than occupancy-measure-based methods.<|reference_end|>
arxiv
@article{stradi2024best-of-both-worlds, title={Best-of-Both-Worlds Policy Optimization for CMDPs with Bandit Feedback}, author={Francesco Emanuele Stradi, Anna Lunghi, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti}, journal={arXiv preprint arXiv:2410.02269}, year={2024}, archivePrefix={arXiv}, eprint={2410.02269}, primaryClass={cs.LG} }
stradi2024best-of-both-worlds
arxiv-664946
2410.02271
CoLLAP: Contrastive Long-form Language-Audio Pretraining with Musical Temporal Structure Augmentation
<|reference_start|>CoLLAP: Contrastive Long-form Language-Audio Pretraining with Musical Temporal Structure Augmentation: Modeling temporal characteristics plays a significant role in the representation learning of audio waveform. We propose Contrastive Long-form Language-Audio Pretraining (\textbf{CoLLAP}) to significantly extend the perception window for both the input audio (up to 5 minutes) and the language descriptions (exceeding 250 words), while enabling contrastive learning across modalities and temporal dynamics. Leveraging recent Music-LLMs to generate long-form music captions for full-length songs, augmented with musical temporal structures, we collect 51.3K audio-text pairs derived from the large-scale AudioSet training dataset, where the average audio length reaches 288 seconds. We propose a novel contrastive learning architecture that fuses language representations with structured audio representations by segmenting each song into clips and extracting their embeddings. With an attention mechanism, we capture multimodal temporal correlations, allowing the model to automatically weigh and enhance the final fusion score for improved contrastive alignment. Finally, we develop two variants of the CoLLAP model with different types of backbone language models. Through comprehensive experiments on multiple long-form music-text retrieval datasets, we demonstrate consistent performance improvement in retrieval accuracy compared with baselines. We also show the pretrained CoLLAP models can be transferred to various music information retrieval tasks, with heterogeneous long-form multimodal contexts.<|reference_end|>
arxiv
@article{wu2024collap:, title={CoLLAP: Contrastive Long-form Language-Audio Pretraining with Musical Temporal Structure Augmentation}, author={Junda Wu, Warren Li, Zachary Novack, Amit Namburi, Carol Chen, Julian McAuley}, journal={arXiv preprint arXiv:2410.02271}, year={2024}, archivePrefix={arXiv}, eprint={2410.02271}, primaryClass={cs.SD cs.AI eess.AS} }
wu2024collap:
arxiv-664947
2410.02272
Optimal $H_\infty$ control based on stable manifold of discounted Hamilton-Jacobi-Isaacs equation
<|reference_start|>Optimal $H_\infty$ control based on stable manifold of discounted Hamilton-Jacobi-Isaacs equation: The optimal \(H_{\infty}\) control problem over an infinite time horizon, which incorporates a performance function with a discount factor \(e^{-\alpha t}\) (\(\alpha > 0\)), is important in various fields. Solving this optimal \(H_{\infty}\) control problem is equivalent to addressing a discounted Hamilton-Jacobi-Isaacs (HJI) partial differential equation. In this paper, we first provide a precise estimate for the discount factor \(\alpha\) that ensures the existence of a nonnegative stabilizing solution to the HJI equation. This stabilizing solution corresponds to the stable manifold of the characteristic system of the HJI equation, which is a contact Hamiltonian system due to the presence of the discount factor. Secondly, we demonstrate that approximating the optimal controller in a natural manner results in a closed-loop system with a finite \(L_2\)-gain that is nearly less than the gain of the original system. Thirdly, based on the theoretical results obtained, we propose a deep learning algorithm to approximate the optimal controller using the stable manifold of the contact Hamiltonian system associated with the HJI equation. Finally, we apply our method to the \(H_{\infty}\) control of the Allen-Cahn equation to illustrate its effectiveness.<|reference_end|>
arxiv
@article{chen2024optimal, title={Optimal $H_{\infty}$ control based on stable manifold of discounted Hamilton-Jacobi-Isaacs equation}, author={Guoyuan Chen, Yi Wang and Qinglong Zhou}, journal={arXiv preprint arXiv:2410.02272}, year={2024}, archivePrefix={arXiv}, eprint={2410.02272}, primaryClass={math.OC cs.SY eess.SY} }
chen2024optimal
arxiv-664948
2410.02273
Perfect Counterfactuals in Imperfect Worlds: Modelling Noisy Implementation of Actions in Sequential Algorithmic Recourse
<|reference_start|>Perfect Counterfactuals in Imperfect Worlds: Modelling Noisy Implementation of Actions in Sequential Algorithmic Recourse: Algorithmic recourse provides actions to individuals who have been adversely affected by automated decision-making and helps them achieve a desired outcome. Knowing the recourse, however, does not guarantee that users would implement it perfectly, either due to environmental variability or personal choices. Recourse generation should thus anticipate its sub-optimal or noisy implementation. While several approaches have constructed recourse that accounts for robustness to small perturbations (i.e., noisy recourse implementation), they assume an entire recourse to be implemented in a single step and thus apply one-off uniform noise to it. Such an assumption is unrealistic since recourse often includes multiple sequential steps, which become harder to implement and are subject to more noise. In this work, we consider recourse under plausible noise that adapts to the local data geometry and accumulates at every step of the way. We frame this problem as a Markov Decision Process and demonstrate that the distribution of our plausible noise satisfies the Markov property. We then propose the RObust SEquential (ROSE) recourse generator to output a sequence of steps that will lead to the desired outcome even under imperfect implementation. Given our plausible modelling of sub-optimal human actions and greater recourse robustness to accumulated uncertainty, ROSE can grant users higher chances of success under low recourse costs. Empirical evaluation shows our algorithm manages the inherent trade-off between recourse robustness and costs more effectively while ensuring its low sparsity and fast computation.<|reference_end|>
arxiv
@article{xuan2024perfect, title={Perfect Counterfactuals in Imperfect Worlds: Modelling Noisy Implementation of Actions in Sequential Algorithmic Recourse}, author={Yueqing Xuan, Kacper Sokol, Mark Sanderson, Jeffrey Chan}, journal={arXiv preprint arXiv:2410.02273}, year={2024}, archivePrefix={arXiv}, eprint={2410.02273}, primaryClass={cs.LG} }
xuan2024perfect
arxiv-664949
2410.02274
MMS Approximations Under Additive Leveled Valuations
<|reference_start|>MMS Approximations Under Additive Leveled Valuations: We study the problem of fairly allocating indivisible goods to a set of agents with additive leveled valuations. A valuation function is called leveled if and only if bundles of larger size have larger value than bundles of smaller size. Such valuations are well studied in the economics literature. We use the maximin-share (MMS) and EFX as standard notions of fairness. We show that an algorithm introduced by Christodoulou et al. ([11]) constructs an allocation that is EFX and $\frac{\lfloor \frac{m}{n} \rfloor}{\lfloor \frac{m}{n} \rfloor + 1}\text{-MMS}$. In that paper, it was claimed that the allocation is EFX and $\frac{2}{3}\text{-MMS}$. However, the proof of the MMS-bound is incorrect. We give a counter-example to their proof and then prove a stronger approximation of MMS.<|reference_end|>
arxiv
@article{afshinmehr2024mms, title={MMS Approximations Under Additive Leveled Valuations}, author={Mahyar Afshinmehr, Mehrafarin Kazemi, Kurt Mehlhorn}, journal={arXiv preprint arXiv:2410.02274}, year={2024}, archivePrefix={arXiv}, eprint={2410.02274}, primaryClass={cs.GT} }
afshinmehr2024mms
arxiv-664950
2410.02275
Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization
<|reference_start|>Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization: We study online learning in \emph{constrained MDPs} (CMDPs), focusing on the goal of attaining sublinear strong regret and strong cumulative constraint violation. Differently from their standard (weak) counterparts, these metrics do not allow negative terms to compensate for positive ones, raising considerable additional challenges. Efroni et al. (2020) were the first to propose an algorithm with sublinear strong regret and strong violation, by exploiting linear programming. Thus, their algorithm is highly inefficient, leaving open the problem of achieving sublinear bounds by means of policy optimization methods, which are much more efficient in practice. Very recently, Muller et al. (2024) have partially addressed this problem by proposing a policy optimization method that attains $\widetilde{\mathcal{O}}(T^{0.93})$ strong regret/violation. This still leaves open the question of whether optimal bounds are achievable by using an approach of this kind. We answer this question affirmatively, by providing an efficient policy optimization algorithm with $\widetilde{\mathcal{O}}(\sqrt{T})$ strong regret/violation. Our algorithm implements a primal-dual scheme that employs a state-of-the-art policy optimization approach for adversarial (unconstrained) MDPs as the primal algorithm, and a UCB-like update for dual variables.<|reference_end|>
arxiv
@article{stradi2024optimal, title={Optimal Strong Regret and Violation in Constrained MDPs via Policy Optimization}, author={Francesco Emanuele Stradi, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti}, journal={arXiv preprint arXiv:2410.02275}, year={2024}, archivePrefix={arXiv}, eprint={2410.02275}, primaryClass={cs.LG} }
stradi2024optimal
arxiv-664951
2410.02279
On Lai's Upper Confidence Bound in Multi-Armed Bandits
<|reference_start|>On Lai's Upper Confidence Bound in Multi-Armed Bandits: In this memorial paper, we honor Tze Leung Lai's seminal contributions to the topic of multi-armed bandits, with a specific focus on his pioneering work on the upper confidence bound. We establish sharp non-asymptotic regret bounds for an upper confidence bound index with a constant level of exploration for Gaussian rewards. Furthermore, we establish a non-asymptotic regret bound for the upper confidence bound index of Lai (1987) which employs an exploration function that decreases with the sample size of the corresponding arm. The regret bounds have leading constants that match the Lai-Robbins lower bound. Our results highlight an aspect of Lai's seminal works that deserves more attention in the machine learning literature.<|reference_end|>
arxiv
@article{ren2024on, title={On Lai's Upper Confidence Bound in Multi-Armed Bandits}, author={Huachen Ren and Cun-Hui Zhang}, journal={arXiv preprint arXiv:2410.02279}, year={2024}, archivePrefix={arXiv}, eprint={2410.02279}, primaryClass={stat.ML cs.LG math.ST stat.TH} }
ren2024on
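The ren2024on entry above studies upper confidence bound indices with a constant level of exploration for Gaussian rewards. The snippet below implements only the textbook Gaussian UCB index (empirical mean plus $\sigma\sqrt{2\log t / n_i}$) as a generic illustration; it does not reproduce Lai's exploration function or the constants analyzed in the paper.

```python
import math
import random

def run_ucb_gaussian(means, sigma=1.0, horizon=10000, seed=0):
    """Textbook UCB for Gaussian rewards: index_i = mean_i + sigma * sqrt(2 ln t / n_i)."""
    random.seed(seed)
    n_arms = len(means)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize
        else:
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i]
                + sigma * math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        reward = random.gauss(means[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward
    return counts  # pull counts; most pulls should concentrate on the best arm

print(run_ucb_gaussian([0.0, 0.2, 0.5]))
```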
arxiv-664952
2410.02281
Annotation Guidelines for Corpus Novelties: Part 1 -- Named Entity Recognition
<|reference_start|>Annotation Guidelines for Corpus Novelties: Part 1 -- Named Entity Recognition: The Novelties corpus is a collection of novels (and parts of novels) annotated for Named Entity Recognition (NER) among other tasks. This document describes the guidelines applied during its annotation. It contains the instructions used by the annotators, as well as a number of examples retrieved from the annotated novels, illustrating expressions that should be marked as entities and expressions that should not.<|reference_end|>
arxiv
@article{amalvy2024annotation, title={Annotation Guidelines for Corpus Novelties: Part 1 -- Named Entity Recognition}, author={Arthur Amalvy (LIA), Vincent Labatut (LIA)}, journal={arXiv preprint arXiv:2410.02281}, year={2024}, archivePrefix={arXiv}, eprint={2410.02281}, primaryClass={cs.CL} }
amalvy2024annotation
arxiv-664953
2410.02283
Morphological evaluation of subwords vocabulary used by BETO language model
<|reference_start|>Morphological evaluation of subwords vocabulary used by BETO language model: Subword tokenization algorithms used by Large Language Models are significantly more efficient and can independently build the necessary vocabulary of words and subwords without human intervention. However, those subwords do not always align with real morphemes, potentially impacting the models' performance, though it remains uncertain when this might occur. In previous research, we proposed a method to assess the morphological quality of vocabularies, focusing on the overlap between these vocabularies and the morphemes of a given language. Our evaluation method was built on three quality measures, relevance, cohesion, and morphological accuracy, and a procedure for their assessment. By applying this method to vocabularies created by three subword tokenization algorithms, BPE, Wordpiece, and Unigram, we concluded that these vocabularies generally exhibit very low morphological quality. In this article, we apply this evaluation to the tokenizer of BETO, a BERT language model trained on large Spanish corpora. This evaluation, along with our previous results, helped us conclude that its vocabulary has a low morphological quality, and we also found that training the tokenizer in a larger corpus does not improve the morphological quality of the generated vocabulary. Additionally, this evaluation helps clarify the algorithm used by the tokenizer, that is, Wordpiece, given the inconsistencies between the authors' claims and the model's configuration.<|reference_end|>
arxiv
@article{garcía-sierra2024morphological, title={Morphological evaluation of subwords vocabulary used by BETO language model}, author={Óscar García-Sierra and Ana Fernández-Pampillón Cesteros and Miguel Ortega-Martín}, journal={arXiv preprint arXiv:2410.02283}, year={2024}, archivePrefix={arXiv}, eprint={2410.02283}, primaryClass={cs.CL cs.AI} }
garcía-sierra2024morphological
arxiv-664954
2410.02284
Correlation and Navigation in the Vocabulary Key Representation Space of Language Models
<|reference_start|>Correlation and Navigation in the Vocabulary Key Representation Space of Language Models: Language model (LM) decoding is based on the next-token prediction (NTP) probability distribution. For neural LMs (e.g., Transformer-based), NTP distribution is essentially a softmax-regularized dot product between an encoded input context (query) and fixed vocabulary representations (keys). In this paper, we study the effect of the key distribution on the NTP distribution, with a focus on whether the similarity between keys will trigger spurious correlations in NTP. Through knowledge-probing tasks, we show that in the NTP distribution, the few top-ranked tokens are typically accurate. However, the middle-ranked prediction is highly biased towards the tokens that are distributionally (not necessarily semantically) similar to these top ones. For instance, if "P" is predicted as the top-1 token, "A"-"Z" will all be ranked high in NTP, no matter whether they can lead to correct decoding results. This hurts the sampling diversity and makes the sampling of correct, long-tail results hopeless and noisy. We attempt to alleviate this issue via a novel in-context method that iteratively pushes the query representation away from explored regions. Specifically, we include the explored decoding results in the context and prompt the LM to generate something else, which encourages the LM to produce a query representation that has small dot products with explored keys. Experiments on knowledge-probing tasks show that our method leads to efficient navigation away from explored keys to correct new keys. We further extend our method to open-ended and chain-of-thought (for reasoning) generation. Experimental results show that ICN contributes to better generation diversity and improved self-consistency voting performance. Finally, we discuss potential training issues caused by the fixed key space together with the challenges and possible ways to address them in future research.<|reference_end|>
arxiv
@article{peng2024correlation, title={Correlation and Navigation in the Vocabulary Key Representation Space of Language Models}, author={Letian Peng, Chenyang An, Jingbo Shang}, journal={arXiv preprint arXiv:2410.02284}, year={2024}, archivePrefix={arXiv}, eprint={2410.02284}, primaryClass={cs.CL} }
peng2024correlation
arxiv-664955
2410.02288
Computer-aided Colorization State-of-the-science: A Survey
<|reference_start|>Computer-aided Colorization State-of-the-science: A Survey: This paper reviews published research in the field of computer-aided colorization technology. We argue that the colorization task originates from computer graphics, prospers by introducing computer vision, and tends to the fusion of vision and graphics, so we put forward our taxonomy and organize the whole paper chronologically. We extend the existing reconstruction-based colorization evaluation techniques, considering that aesthetic assessment of colored images should be introduced to ensure that colorization satisfies human visual-related requirements and emotions more closely. We perform the colorization aesthetic assessment on seven representative unconditional colorization models and discuss the difference between our assessment and the existing reconstruction-based metrics. Finally, this paper identifies unresolved issues and proposes fruitful areas for future research and development. Access to the project associated with this survey can be obtained at https://github.com/DanielCho-HK/Colorization.<|reference_end|>
arxiv
@article{cao2024computer-aided, title={Computer-aided Colorization State-of-the-science: A Survey}, author={Yu Cao, Xin Duan, Xiangqiao Meng, P. Y. Mok, Ping Li, and Tong-Yee Lee}, journal={arXiv preprint arXiv:2410.02288}, year={2024}, archivePrefix={arXiv}, eprint={2410.02288}, primaryClass={cs.CV} }
cao2024computer-aided
arxiv-664956
2410.02290
Density based Spatial Clustering of Lines via Probabilistic Generation of Neighbourhood
<|reference_start|>Density based Spatial Clustering of Lines via Probabilistic Generation of Neighbourhood: Density based spatial clustering of points in $\mathbb{R}^n$ has a myriad of applications in a variety of industries. We generalise this problem to the density based clustering of lines in high-dimensional spaces, keeping in mind that there exists no valid distance measure that follows the triangle inequality for lines. In this paper, we design a clustering algorithm that generates a customised neighbourhood of a fixed volume (given as a parameter) for a line, based on an optional parameter as a continuous probability density function. This algorithm is not sensitive to outliers and can effectively identify noise in the data using a cardinality parameter. One of the pivotal applications of this algorithm is clustering data points in $\mathbb{R}^n$ with missing entries, while utilising the domain knowledge of the respective data. In particular, the proposed algorithm is able to cluster $n$-dimensional data points that contain at least $(n-1)$-dimensional information. We illustrate the neighbourhoods for the standard probability distributions with continuous probability density functions and demonstrate the effectiveness of our algorithm on various synthetic and real-world datasets (e.g., rail and road networks). The experimental results also highlight its application in clustering incomplete data.<|reference_end|>
arxiv
@article{das2024density, title={Density based Spatial Clustering of Lines via Probabilistic Generation of Neighbourhood}, author={Akanksha Das, Malay Bhattacharyya}, journal={arXiv preprint arXiv:2410.02290}, year={2024}, archivePrefix={arXiv}, eprint={2410.02290}, primaryClass={cs.LG} }
das2024density
arxiv-664957
2410.02293
Efficient Second-Order Neural Network Optimization via Adaptive Trust Region Methods
<|reference_start|>Efficient Second-Order Neural Network Optimization via Adaptive Trust Region Methods: Second-order optimization methods offer notable advantages in training deep neural networks by utilizing curvature information to achieve faster convergence. However, traditional second-order techniques are computationally prohibitive, primarily due to the large matrix inversions and high memory demands they require. While adaptive trust-region methods have been developed to mitigate these issues, their performance is often hindered by conservative estimates of key parameters, such as the Lipschitz constant of the Hessian, resulting in suboptimal outcomes. In this paper, we introduce SecondOrderAdaptiveAdam (SOAA), a novel optimization algorithm designed to overcome these limitations. SOAA approximates the Fisher information matrix using a diagonal representation, reducing computational complexity from \(O(n^{2})\) to \(O(n)\), thereby making it suitable for large-scale deep learning models, including large language models (LLMs). Additionally, the algorithm integrates an adaptive trust-region mechanism that dynamically adjusts the trust region size based on observed loss reduction, ensuring both robust convergence and computational efficiency. We empirically demonstrate that SOAA achieves faster and more stable convergence compared to first-order optimizers, such as Adam, under similar computational constraints. However, the diagonal approximation of the Fisher information matrix may be less effective in capturing higher-order interactions between gradients, suggesting potential areas for further refinement and future research.<|reference_end|>
arxiv
@article{vo2024efficient, title={Efficient Second-Order Neural Network Optimization via Adaptive Trust Region Methods}, author={James Vo}, journal={arXiv preprint arXiv:2410.02293}, year={2024}, archivePrefix={arXiv}, eprint={2410.02293}, primaryClass={cs.LG cs.CL} }
vo2024efficient
arxiv-664958
2410.02295
Selection Guidelines for Geographical SMR Protocols: A Communication Pattern-based Latency Modeling Approach
<|reference_start|>Selection Guidelines for Geographical SMR Protocols: A Communication Pattern-based Latency Modeling Approach: State machine replication (SMR) is a replication technique that ensures fault tolerance by duplicating a service. Geographical SMR can enhance its robustness against disasters by distributing replicas in separate geographical locations. Several geographical SMR protocols have been proposed in the literature, each of which is tailored to specific requirements; for example, protocols designed to meet the requirement of latency reduction by either sacrificing a part of their fault tolerance or limiting the content of responses to clients. However, this diversity complicates the decision-making process for selecting the best protocol for a particular service. In this study, we introduce a latency estimation model for these SMR protocols based on the communication patterns of the protocols and perform simulations for various cases. Based on the simulation results and an experimental evaluation, we present five selection guidelines for geographical SMR protocols based on their log management policy, distances between replicas, number of replicas, frequency of slow paths, and client distribution. These selection guidelines make it possible to determine the best geographical SMR protocol for each situation.<|reference_end|>
arxiv
@article{shiozaki2024selection, title={Selection Guidelines for Geographical SMR Protocols: A Communication Pattern-based Latency Modeling Approach}, author={Kohya Shiozaki and Junya Nakamura}, journal={arXiv preprint arXiv:2410.02295}, year={2024}, archivePrefix={arXiv}, eprint={2410.02295}, primaryClass={cs.DC} }
shiozaki2024selection
arxiv-664959
2410.02296
Language Models are Graph Learners
<|reference_start|>Language Models are Graph Learners: Language Models (LMs) are increasingly challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs), in graph learning tasks. Following this trend, we propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks, without requiring any architectural modification. By preserving the LM's original architecture, our approach retains a key benefit of LM instruction tuning: the ability to jointly train on diverse datasets, fostering greater flexibility and efficiency. To achieve this, we introduce two key augmentation strategies: (1) Enriching LMs' input using topological and semantic retrieval methods, which provide richer contextual information, and (2) guiding the LMs' classification process through a lightweight GNN classifier that effectively prunes class candidates. Our experiments on real-world datasets show that backbone Flan-T5 models equipped with these augmentation strategies outperform state-of-the-art text-output node classifiers and are comparable to top-performing vector-output node classifiers. By bridging the gap between specialized task-specific node classifiers and general LMs, this work paves the way for more versatile and widely applicable graph learning models. We will open-source the code upon publication.<|reference_end|>
arxiv
@article{xu2024language, title={Language Models are Graph Learners}, author={Zhe Xu, Kaveh Hassani, Si Zhang, Hanqing Zeng, Michihiro Yasunaga, Limei Wang, Dongqi Fu, Ning Yao, Bo Long, Hanghang Tong}, journal={arXiv preprint arXiv:2410.02296}, year={2024}, archivePrefix={arXiv}, eprint={2410.02296}, primaryClass={cs.CL} }
xu2024language
arxiv-664960
2410.02297
Make Compound Sentences Simple to Analyze: Learning to Split Sentences for Aspect-based Sentiment Analysis
<|reference_start|>Make Compound Sentences Simple to Analyze: Learning to Split Sentences for Aspect-based Sentiment Analysis: In the domain of Aspect-Based Sentiment Analysis (ABSA), generative methods have shown promising results and achieved substantial advancements. However, despite these advancements, the tasks of extracting sentiment quadruplets, which capture the nuanced sentiment expressions within a sentence, remain significant challenges. In particular, compound sentences can potentially contain multiple quadruplets, making the extraction task increasingly difficult as sentence complexity grows. To address this issue, we focus on simplifying sentence structures to facilitate the easier recognition of these elements and crafting a model that integrates seamlessly with various ABSA tasks. In this paper, we propose Aspect Term Oriented Sentence Splitter (ATOSS), which simplifies compound sentences into simpler and clearer forms, thereby clarifying their structure and intent. As a plug-and-play module, this approach retains the parameters of the ABSA model while making it easier to identify essential intent within input sentences. Extensive experimental results show that utilizing ATOSS outperforms existing methods in both ASQP and ACOS tasks, which are the primary tasks for extracting sentiment quadruplets.<|reference_end|>
arxiv
@article{seo2024make, title={Make Compound Sentences Simple to Analyze: Learning to Split Sentences for Aspect-based Sentiment Analysis}, author={Yongsik Seo, Sungwon Song, Ryang Heo, Jieyong Kim, Dongha Lee}, journal={arXiv preprint arXiv:2410.02297}, year={2024}, archivePrefix={arXiv}, eprint={2410.02297}, primaryClass={cs.CL} }
seo2024make
arxiv-664961
2410.02298
Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models
<|reference_start|>Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models: As large language models (LLMs) become integral to various applications, ensuring both their safety and utility is paramount. Jailbreak attacks, which manipulate LLMs into generating harmful content, pose significant challenges to this balance. Existing defenses, such as prompt engineering and safety fine-tuning, often introduce computational overhead, increase inference latency, and lack runtime flexibility. Moreover, overly restrictive safety measures can degrade model utility by causing refusals of benign queries. In this paper, we introduce Jailbreak Antidote, a method that enables real-time adjustment of LLM safety preferences by manipulating a sparse subset of the model's internal states during inference. By shifting the model's hidden representations along a safety direction with varying strengths, we achieve flexible control over the safety-utility balance without additional token overhead or inference delays. Our analysis reveals that safety-related information in LLMs is sparsely distributed; adjusting approximately 5% of the internal state is as effective as modifying the entire state. Extensive experiments on nine LLMs (ranging from 2 billion to 72 billion parameters), evaluated against ten jailbreak attack methods and compared with six defense strategies, validate the effectiveness and efficiency of our approach. By directly manipulating internal states during reasoning, Jailbreak Antidote offers a lightweight, scalable solution that enhances LLM safety while preserving utility, opening new possibilities for real-time safety mechanisms in widely-deployed AI systems.<|reference_end|>
arxiv
@article{shen2024jailbreak, title={Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models}, author={Guobin Shen, Dongcheng Zhao, Yiting Dong, Xiang He, Yi Zeng}, journal={arXiv preprint arXiv:2410.02298}, year={2024}, archivePrefix={arXiv}, eprint={2410.02298}, primaryClass={cs.CR cs.CL} }
shen2024jailbreak
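The shen2024jailbreak entry above adjusts roughly 5% of a model's internal state by shifting hidden representations along a safety direction with varying strengths. The following sketch illustrates such a sparse shift on a single hidden vector; the top-k component selection, the strength parameter, and the way the safety direction is obtained are assumptions for illustration, not the authors' implementation.

```python
import torch

def sparse_safety_shift(hidden, safety_direction, strength=0.5, sparsity=0.05):
    """Shift a hidden state along a safety direction, touching only ~5% of dimensions.

    hidden           : (d,) hidden representation at some layer
    safety_direction : (d,) direction assumed to correlate with safer behaviour
    strength         : runtime-adjustable scaling of the shift
    sparsity         : fraction of dimensions allowed to change
    """
    d = hidden.shape[-1]
    k = max(1, int(sparsity * d))
    # Keep only the k largest-magnitude components of the direction (an assumption).
    idx = torch.topk(safety_direction.abs(), k).indices
    mask = torch.zeros_like(safety_direction)
    mask[idx] = 1.0
    return hidden + strength * mask * safety_direction

# Toy usage on a random 4096-dimensional state.
h = torch.randn(4096)
direction = torch.randn(4096)
h_safer = sparse_safety_shift(h, direction)
print(h_safer.shape)
```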
arxiv-664962
2410.02301
Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach
<|reference_start|>Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach: Multi-objective optimization is a common problem in practical applications, and multi-objective evolutionary algorithm (MOEA) is considered one of the effective methods to solve these problems. However, their randomness sometimes prevents algorithms from rapidly converging to the global optimum, and the design of their genetic operators often requires complicated manual tuning. To overcome this challenge, this study proposes a new framework that combines a large language model (LLM) with traditional evolutionary algorithms to enhance the algorithm's search capability and generalization performance. In our framework, we employ adaptive and hybrid mechanisms to integrate the LLM with the MOEA, thereby accelerating algorithmic convergence. Specifically, we leverage an auxiliary evaluation function and automated prompt construction within the adaptive mechanism to flexibly adjust the utilization of the LLM, generating high-quality solutions that are further refined and optimized through genetic operators. Concurrently, the hybrid mechanism aims to minimize interaction costs with the LLM as much as possible.<|reference_end|>
arxiv
@article{liu2024large, title={Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach}, author={Wanyi Liu, Long Chen, Zhenzhou Tang}, journal={arXiv preprint arXiv:2410.02301}, year={2024}, archivePrefix={arXiv}, eprint={2410.02301}, primaryClass={cs.NE} }
liu2024large
arxiv-664963
2410.02303
Semantic Communication and Control Co-Design for Multi-Objective Correlated Dynamics
<|reference_start|>Semantic Communication and Control Co-Design for Multi-Objective Correlated Dynamics: This letter introduces a machine-learning approach to learning the semantic dynamics of correlated systems with different control rules and dynamics. By leveraging the Koopman operator in an autoencoder (AE) framework, the system's state evolution is linearized in the latent space using a dynamic semantic Koopman (DSK) model, capturing the baseline semantic dynamics. Signal temporal logic (STL) is incorporated through a logical semantic Koopman (LSK) model to encode system-specific control rules. These models form the proposed logical Koopman AE framework that reduces communication costs while improving state prediction accuracy and control performance, showing a 91.65% reduction in communication samples and significant performance gains in simulation.<|reference_end|>
arxiv
@article{girgis2024semantic, title={Semantic Communication and Control Co-Design for Multi-Objective Correlated Dynamics}, author={Abanoub M. Girgis, Hyowoon Seo, and Mehdi Bennis}, journal={arXiv preprint arXiv:2410.02303}, year={2024}, archivePrefix={arXiv}, eprint={2410.02303}, primaryClass={cs.RO cs.LG cs.SY eess.SY} }
girgis2024semantic
arxiv-664964
2410.02304
A Novel Method for Accurate & Real-time Food Classification: The Synergistic Integration of EfficientNetB7, CBAM, Transfer Learning, and Data Augmentation
<|reference_start|>A Novel Method for Accurate & Real-time Food Classification: The Synergistic Integration of EfficientNetB7, CBAM, Transfer Learning, and Data Augmentation: Integrating artificial intelligence into modern society is profoundly transformative, significantly enhancing productivity by streamlining various daily tasks. AI-driven recognition systems provide notable advantages in the food sector, including improved nutrient tracking, tackling food waste, and boosting food production and consumption efficiency. Accurate food classification is a crucial initial step in utilizing advanced AI models, as the effectiveness of this process directly influences the success of subsequent operations; therefore, achieving high accuracy at a reasonable speed is essential. Despite existing research efforts, a gap persists in improving performance while ensuring rapid processing times, prompting researchers to pursue cost-effective and precise models. This study addresses this gap by employing the state-of-the-art EfficientNetB7 architecture, enhanced through transfer learning, data augmentation, and the CBAM attention module. This methodology results in a robust model that surpasses previous studies in accuracy while maintaining rapid processing suitable for real-world applications. The Food11 dataset from Kaggle was utilized, comprising 16643 imbalanced images across 11 diverse classes with significant intra-category diversities and inter-category similarities. Furthermore, the proposed methodology, bolstered by various deep learning techniques, consistently achieves an impressive average accuracy of 96.40%. Notably, it can classify over 60 images within one second during inference on unseen data, demonstrating its ability to deliver high accuracy promptly. This underscores its potential for practical applications in accurate food classification and enhancing efficiency in subsequent processes.<|reference_end|>
arxiv
@article{rokhva2024a, title={A Novel Method for Accurate & Real-time Food Classification: The Synergistic Integration of EfficientNetB7, CBAM, Transfer Learning, and Data Augmentation}, author={Shayan Rokhva, Babak Teimourpour}, journal={arXiv preprint arXiv:2410.02304}, year={2024}, archivePrefix={arXiv}, eprint={2410.02304}, primaryClass={cs.CV} }
rokhva2024a
arxiv-664965
2410.02305
The Comparison of Individual Cat Recognition Using Neural Networks
<|reference_start|>The Comparison of Individual Cat Recognition Using Neural Networks: Facial recognition using deep learning has been widely used in social life for applications such as authentication, smart door locks, and photo grouping, etc. More and more networks have been developed to facilitate computer vision tasks, such as ResNet, DenseNet, EfficientNet, ConvNeXt, and Siamese networks. However, few studies have systematically compared the advantages and disadvantages of such neural networks in identifying individuals from images, especially for pet animals like cats. In the present study, by systematically comparing the efficacy of different neural networks in cat recognition, we found traditional CNNs trained with transfer learning have better performance than models trained with the fine-tuning method or Siamese networks in individual cat recognition. In addition, ConvNeXt and DenseNet yield significant results which could be further optimized for individual cat recognition in pet stores and in the wild. These results provide a method to improve cat management in pet stores and monitoring of cats in the wild.<|reference_end|>
arxiv
@article{li2024the, title={The Comparison of Individual Cat Recognition Using Neural Networks}, author={Mingxuan Li and Kai Zhou}, journal={arXiv preprint arXiv:2410.02305}, year={2024}, archivePrefix={arXiv}, eprint={2410.02305}, primaryClass={cs.CV} }
li2024the
arxiv-664966
2410.02307
Model-guided Fuzzing of Distributed Systems
<|reference_start|>Model-guided Fuzzing of Distributed Systems: We present a coverage-guided testing algorithm for distributed systems implementations. Our main innovation is the use of an abstract formal model of the system that is used to define coverage. Such abstract models are frequently developed in early phases of protocol design and verification but are infrequently used at testing time. We show that guiding random test generation using model coverage can be effective in covering interesting points in the implementation state space. We have implemented a fuzzer for distributed system implementations and abstract models written in TLA+. Our algorithm shows better coverage over purely random exploration as well as random exploration guided by different notions of scheduler coverage and mutation. In particular, we show consistently higher coverage and detect bugs faster on implementations of distributed consensus protocols such as those in Etcd-raft and RedisRaft. Moreover, we discovered 13 previously unknown bugs in their implementations, four of which could only be detected by model-guided fuzzing.<|reference_end|>
arxiv
@article{gulcan2024model-guided, title={Model-guided Fuzzing of Distributed Systems}, author={Ege Berkay Gulcan, Burcu Kulahcioglu Ozkan, Rupak Majumdar, Srinidhi Nagendra}, journal={arXiv preprint arXiv:2410.02307}, year={2024}, archivePrefix={arXiv}, eprint={2410.02307}, primaryClass={cs.SE cs.DC} }
gulcan2024model-guided
arxiv-664967
2410.02308
Traffic Light or Light Traffic? Investigating Phrasal Semantics in Large Language Models
<|reference_start|>Traffic Light or Light Traffic? Investigating Phrasal Semantics in Large Language Models: Phrases are fundamental linguistic units through which humans convey semantics. This study critically examines the capacity of API-based large language models (LLMs) to comprehend phrase semantics, utilizing three human-annotated datasets. We assess the performance of LLMs in executing phrase semantic reasoning tasks guided by natural language instructions and explore the impact of common prompting techniques, including few-shot demonstrations and Chain-of-Thought reasoning. Our findings reveal that LLMs greatly outperform traditional embedding methods across the datasets; however, they do not show a significant advantage over fine-tuned methods. The effectiveness of advanced prompting strategies shows variability. We conduct detailed error analyses to interpret the limitations faced by LLMs in comprehending phrase semantics. Code and data can be found at https://github.com/memray/llm_phrase_semantics.<|reference_end|>
arxiv
@article{meng2024traffic, title={Traffic Light or Light Traffic? Investigating Phrasal Semantics in Large Language Models}, author={Rui Meng, Ye Liu, Lifu Tu, Daqing He, Yingbo Zhou, Semih Yavuz}, journal={arXiv preprint arXiv:2410.02308}, year={2024}, archivePrefix={arXiv}, eprint={2410.02308}, primaryClass={cs.CL} }
meng2024traffic
arxiv-664968
2410.02309
Decoupling Layout from Glyph in Online Chinese Handwriting Generation
<|reference_start|>Decoupling Layout from Glyph in Online Chinese Handwriting Generation: Text plays a crucial role in the transmission of human civilization, and teaching machines to generate online handwritten text in various styles presents an interesting and significant challenge. However, most prior work has concentrated on generating individual Chinese fonts, leaving complete text line generation largely unexplored. In this paper, we identify that text lines can naturally be divided into two components: layout and glyphs. Based on this division, we designed a text line layout generator coupled with a diffusion-based stylized font synthesizer to address this challenge hierarchically. More concretely, the layout generator performs in-context-like learning based on the text content and the provided style references to generate positions for each glyph autoregressively. Meanwhile, the font synthesizer, which consists of a character embedding dictionary, a multi-scale calligraphy style encoder, and a 1D U-Net based diffusion denoiser, generates each glyph at its position while imitating the calligraphy style extracted from the given style references. Qualitative and quantitative experiments on the CASIA-OLHWDB demonstrate that our method is capable of generating structurally correct and indistinguishable imitation samples.<|reference_end|>
arxiv
@article{ren2024decoupling, title={Decoupling Layout from Glyph in Online Chinese Handwriting Generation}, author={Min-Si Ren, Yan-Ming Zhang, Yi Chen}, journal={arXiv preprint arXiv:2410.02309}, year={2024}, archivePrefix={arXiv}, eprint={2410.02309}, primaryClass={cs.CV} }
ren2024decoupling
arxiv-664969
2410.02311
A novel neural network-based approach to derive a geomagnetic baseline for robust characterization of geomagnetic indices at mid-latitude
<|reference_start|>A novel neural network-based approach to derive a geomagnetic baseline for robust characterization of geomagnetic indices at mid-latitude: Geomagnetic indices derived from ground magnetic measurements characterize the intensity of solar-terrestrial interaction. The Kp index derived from multiple magnetic observatories at mid-latitude has commonly been used for space weather operations. Yet, its temporal cadence is low and its intensity scale is crude. To derive a new generation of geomagnetic indices, it is desirable to establish a geomagnetic 'baseline' that defines the quiet-level of activity without solar-driven perturbations. We present a new approach for deriving a baseline that represents the time-dependent quiet variations focusing on data from Chambon-la-Forêt, France. Using a filtering technique, the measurements are first decomposed into the above-diurnal variation and the sum of 24h, 12h, 8h, and 6h filters, called the daily variation. Using correlation tools and SHapley Additive exPlanations, we identify parameters that dominantly correlate with the daily variation. Here, we predict the daily 'quiet' variation using a long short-term memory neural network trained using at least 11 years of data at 1h cadence. This predicted daily quiet variation is combined with linear extrapolation of the secular trend associated with the intrinsic geomagnetic variability, which dominates the above-diurnal variation, to yield a new geomagnetic baseline. Unlike the existing baselines, our baseline is insensitive to geomagnetic storms. It is thus suitable for defining geomagnetic indices that accurately reflect the intensity of solar-driven perturbations. Our methodology is quick to implement and scalable, making it suitable for real-time operation. Strategies for operational forecasting of our geomagnetic baseline 1 day and 27 days in advance are presented.<|reference_end|>
arxiv
@article{kieokaew2024a, title={A novel neural network-based approach to derive a geomagnetic baseline for robust characterization of geomagnetic indices at mid-latitude}, author={Rungployphan Kieokaew, Veronika Haberle, Aurélie Marchaudon, Pierre-Louis Blelly, Aude Chambodut}, journal={arXiv preprint arXiv:2410.02311}, year={2024}, archivePrefix={arXiv}, eprint={2410.02311}, primaryClass={physics.space-ph astro-ph.EP cs.LG physics.geo-ph} }
kieokaew2024a
arxiv-664970
2410.02312
Federated Reinforcement Learning to Optimize Teleoperated Driving Networks
<|reference_start|>Federated Reinforcement Learning to Optimize Teleoperated Driving Networks: Several sixth generation (6G) use cases have tight requirements in terms of reliability and latency, in particular teleoperated driving (TD). To address those requirements, Predictive Quality of Service (PQoS), possibly combined with reinforcement learning (RL), has emerged as a valid approach to dynamically adapt the configuration of the TD application (e.g., the level of compression of automotive data) to the experienced network conditions. In this work, we explore different classes of RL algorithms for PQoS, namely MAB (stateless), SARSA (stateful on-policy), Q-Learning (stateful off-policy), and DSARSA and DDQN (with Neural Network (NN) approximation). We trained the agents in a federated learning (FL) setup to improve the convergence time and fairness, and to promote privacy and security. The goal is to optimize the trade-off between Quality of Service (QoS), measured in terms of the end-to-end latency, and Quality of Experience (QoE), measured in terms of the quality of the resulting compression operation. We show that Q-Learning uses a small number of learnable parameters, and is the best approach to perform PQoS in the TD scenario in terms of average reward, convergence, and computational cost.<|reference_end|>
arxiv
@article{bragato2024federated, title={Federated Reinforcement Learning to Optimize Teleoperated Driving Networks}, author={Filippo Bragato, Marco Giordani, Michele Zorzi}, journal={arXiv preprint arXiv:2410.02312}, year={2024}, archivePrefix={arXiv}, eprint={2410.02312}, primaryClass={cs.NI} }
bragato2024federated
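A minimal, self-contained sketch of the stateful off-policy learner highlighted above: tabular Q-learning with an epsilon-greedy policy, plus a FedAvg-style average of per-vehicle Q-tables. The state and action sizes, the random rewards, and the random transitions are placeholders for the latency/QoE signals and the teleoperated-driving environment used in the paper, so this only illustrates the update rule and the aggregation step, not the paper's actual PQoS setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_agents = 8, 4, 3      # hypothetical sizes
alpha, gamma, eps = 0.1, 0.9, 0.1            # learning rate, discount, exploration

def q_update(Q, s, a, r, s_next):
    # Standard off-policy Q-learning update.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def local_round(Q, steps=500):
    # One local training round on a stand-in environment; in the paper the
    # reward and next state would come from the teleoperated-driving simulator.
    s = rng.integers(n_states)
    for _ in range(steps):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        r = rng.normal()
        s_next = rng.integers(n_states)
        q_update(Q, s, a, r, s_next)
        s = s_next
    return Q

# FedAvg-style aggregation of the per-agent Q-tables.
Qs = [local_round(np.zeros((n_states, n_actions))) for _ in range(n_agents)]
Q_global = np.mean(Qs, axis=0)
print(Q_global.shape)
```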
arxiv-664971
2410.02314
An Efficient Inference Frame for SMLM (Single-Molecule Localization Microscopy)
<|reference_start|>An Efficient Inference Frame for SMLM (Single-Molecule Localization Microscopy): Single-molecule localization microscopy (SMLM) surpasses the diffraction limit, achieving subcellular resolution. Traditional SMLM analysis methods often rely on point spread function (PSF) model fitting, limiting the application of complex PSF models. In recent years, deep learning approaches have significantly improved SMLM algorithms, yielding promising results. However, limitations in inference speed and model size have restricted the widespread adoption of deep learning in practical applications. To address these challenges, this paper proposes an efficient model deployment framework and introduces a lightweight neural network, DilatedLoc, aimed at enhancing both image reconstruction quality and inference speed. Compared to leading network models, DilatedLoc reduces network parameters to under 100 MB and achieves a 50% improvement in inference speed, with superior GPU utilization through a novel deployment architecture compatible with various network models.<|reference_end|>
arxiv
@article{luo2024an, title={An Efficient Inference Frame for SMLM (Single-Molecule Localization Microscopy)}, author={Tingdan Luo}, journal={arXiv preprint arXiv:2410.02314}, year={2024}, archivePrefix={arXiv}, eprint={2410.02314}, primaryClass={q-bio.QM cs.CE eess.IV} }
luo2024an
arxiv-664972
2410.02316
CTARR: A fast and robust method for identifying anatomical regions on CT images via atlas registration
<|reference_start|>CTARR: A fast and robust method for identifying anatomical regions on CT images via atlas registration: Medical image analysis tasks often focus on regions or structures in a particular location within the patient's body. Often, large parts of the image may not be of interest for the image analysis task. When using deep-learning based approaches, this unnecessarily increases the computational burden during inference and raises the chance of errors. In this paper, we introduce CTARR, a novel generic method for CT Anatomical Region Recognition. The method serves as a pre-processing step for any deep learning-based CT image analysis pipeline by automatically identifying the pre-defined anatomical region that is relevant for the follow-up task and removing the rest. It can be used in (i) image segmentation to prevent false positives in anatomically implausible regions and to speed up inference, (ii) image classification to produce image crops that are consistent in their anatomical context, and (iii) image registration by serving as a fast pre-registration step. Our proposed method is based on atlas registration and provides a fast and robust way to crop any anatomical region encoded as one or multiple bounding box(es) from any unlabeled CT scan of the brain, chest, abdomen and/or pelvis. We demonstrate the utility and robustness of the proposed method in the context of medical image segmentation by evaluating it on six datasets of public segmentation challenges. The foreground voxels in the regions of interest are preserved in the vast majority of cases and tasks (97.45-100%) while taking only fractions of a second to compute (0.1-0.21s) on a deep learning workstation and greatly reducing the segmentation runtime (2.0-12.7x). Our code is available at https://github.com/ThomasBudd/ctarr.<|reference_end|>
arxiv
@article{buddenkotte2024ctarr:, title={CTARR: A fast and robust method for identifying anatomical regions on CT images via atlas registration}, author={Thomas Buddenkotte, Roland Opfer, Julia Krüger, Alessa Hering, Mireia Crispin-Ortuzar}, journal={Machine Learning for Biomedical Imaging 2 (2024)}, year={2024}, doi={10.59275/j.melba.2024-f5fc}, archivePrefix={arXiv}, eprint={2410.02316}, primaryClass={cs.CV cs.AI cs.LG} }
buddenkotte2024ctarr:
arxiv-664973
2410.02317
Polynomial approximation of noisy functions
<|reference_start|>Polynomial approximation of noisy functions: Approximating a univariate function on the interval $[-1,1]$ with a polynomial is among the most classical problems in numerical analysis. When the function evaluations come with noise, a least-squares fit is known to reduce the effect of noise as more samples are taken. The generic algorithm for the least-squares problem requires $O(Nn^2)$ operations, where $N+1$ is the number of sample points and $n$ is the degree of the polynomial approximant. This algorithm is unstable when $n$ is large, for example $n\gg \sqrt{N}$ for equispaced sample points. In this study, we blend numerical analysis and statistics to introduce a stable and fast $O(N\log N)$ algorithm called NoisyChebtrunc based on the Chebyshev interpolation. It has the same error reduction effect as least-squares and the convergence is spectral until the error reaches $O(\sigma \sqrt{{n}/{N}})$, where $\sigma$ is the noise level, after which the error continues to decrease at the Monte-Carlo $O(1/\sqrt{N})$ rate. To determine the polynomial degree, NoisyChebtrunc employs a statistical criterion, namely Mallows' $C_p$. We analyze NoisyChebtrunc in terms of the variance and concentration in the infinity norm to the underlying noiseless function. These results show that with high probability the infinity-norm error is bounded by a small constant times $\sigma \sqrt{{n}/{N}}$, when the noise {is} independent and follows a subgaussian or subexponential distribution. We illustrate the performance of NoisyChebtrunc with numerical experiments.<|reference_end|>
arxiv
@article{matsuda2024polynomial, title={Polynomial approximation of noisy functions}, author={Takeru Matsuda, Yuji Nakatsukasa}, journal={arXiv preprint arXiv:2410.02317}, year={2024}, archivePrefix={arXiv}, eprint={2410.02317}, primaryClass={math.NA cs.NA math.ST stat.CO stat.TH} }
matsuda2024polynomial
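To make the noise-averaging effect of a least-squares polynomial fit concrete, here is a small NumPy sketch: it fits a degree-n Chebyshev expansion to N+1 noisy samples taken at Chebyshev points. The target function, noise level, and the fixed degree are all made-up choices for illustration; the paper's NoisyChebtrunc additionally selects the degree with Mallows' Cp and works via interpolation plus truncation, which is not reproduced here.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)

f = lambda x: np.exp(np.sin(5 * x))          # hypothetical smooth target
sigma = 1e-2                                 # hypothetical noise level

N, n = 2000, 40                              # N+1 noisy samples, degree-n fit (n << N)
x = np.cos(np.pi * np.arange(N + 1) / N)     # Chebyshev points on [-1, 1]
y = f(x) + sigma * rng.standard_normal(x.shape)

# Least-squares fit in the Chebyshev basis: averaging over many samples
# damps the noise roughly down to the sigma*sqrt(n/N) level.
coef = C.chebfit(x, y, n)

xx = np.linspace(-1, 1, 1001)
err = np.max(np.abs(C.chebval(xx, coef) - f(xx)))
print(f"max error of degree-{n} fit from {N + 1} noisy samples: {err:.2e}")
```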
arxiv-664974
2410.02319
QDGset: A Large Scale Grasping Dataset Generated with Quality-Diversity
<|reference_start|>QDGset: A Large Scale Grasping Dataset Generated with Quality-Diversity: Recent advances in AI have led to significant results in robotic learning, but skills like grasping remain partially solved. Many recent works exploit synthetic grasping datasets to learn to grasp unknown objects. However, those datasets were generated using simple grasp sampling methods using priors. Recently, Quality-Diversity (QD) algorithms have been proven to make grasp sampling significantly more efficient. In this work, we extend QDG-6DoF, a QD framework for generating object-centric grasps, to scale up the production of synthetic grasping datasets. We propose a data augmentation method that combines the transformation of object meshes with transfer learning from previous grasping repertoires. The conducted experiments show that this approach reduces the number of required evaluations per discovered robust grasp by up to 20%. We used this approach to generate QDGset, a dataset of 6DoF grasp poses that contains about 3.5 and 4.5 times more grasps and objects, respectively, than the previous state-of-the-art. Our method allows anyone to easily generate data, eventually contributing to a large-scale collaborative dataset of synthetic grasps.<|reference_end|>
arxiv
@article{huber2024qdgset:, title={QDGset: A Large Scale Grasping Dataset Generated with Quality-Diversity}, author={Johann Huber, François Hélénon, Mathilde Kappel, Ignacio de Loyola Páez-Ubieta, Santiago T. Puente, Pablo Gil, Faïz Ben Amar, Stéphane Doncieux}, journal={arXiv preprint arXiv:2410.02319}, year={2024}, archivePrefix={arXiv}, eprint={2410.02319}, primaryClass={cs.RO cs.LG} }
huber2024qdgset:
arxiv-664975
2410.02320
Post-edits Are Preferences Too
<|reference_start|>Post-edits Are Preferences Too: Preference Optimization (PO) techniques are currently among the state-of-the-art techniques for fine-tuning large language models (LLMs) on pairwise preference feedback from human annotators. However, in machine translation, this sort of feedback can be difficult to solicit. Additionally, Kreutzer et al. (2018) have shown that, for machine translation, pairwise preferences are less reliable than other forms of human feedback, such as 5-point ratings. We examine post-edits to see if they can be a source of reliable human preferences by construction. In PO, a human annotator is shown sequences $s_1$ and $s_2$ and asked for a preference judgment $s_1 > s_2$; while for post-editing, editors create $s_1$ and know that it should be better than $s_2$. We attempt to use these implicit preferences for PO and show that it helps the model move towards post-edit-like hypotheses and away from machine translation-like hypotheses. Furthermore, we show that the best results are obtained by pre-training the model with supervised fine-tuning (SFT) on post-edits in order to promote post-edit-like hypotheses to the top output ranks.<|reference_end|>
arxiv
@article{berger2024post-edits, title={Post-edits Are Preferences Too}, author={Nathaniel Berger and Stefan Riezler and Miriam Exel and Matthias Huck}, journal={arXiv preprint arXiv:2410.02320}, year={2024}, archivePrefix={arXiv}, eprint={2410.02320}, primaryClass={cs.CL cs.AI cs.LG} }
berger2024post-edits
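The preference objective referred to above can be written down compactly. The sketch below is the standard DPO-style loss with the post-edit treated as the preferred sequence and the original machine translation as the dispreferred one; the log-probability values are made up, and the paper's full recipe (including SFT pre-training on post-edits) is not reproduced here.

```python
import torch
import torch.nn.functional as F

def post_edit_dpo_loss(logp_pe, logp_mt, ref_logp_pe, ref_logp_mt, beta=0.1):
    """DPO-style loss with the post-edit (pe) preferred over the MT output (mt).

    Each argument is the summed token log-probability of the sequence under
    the policy (logp_*) or the frozen reference model (ref_logp_*).
    """
    margin = (logp_pe - ref_logp_pe) - (logp_mt - ref_logp_mt)
    return -F.logsigmoid(beta * margin).mean()

# Toy batch of two examples with hypothetical log-probabilities.
logp_pe = torch.tensor([-40.0, -55.0])
logp_mt = torch.tensor([-42.0, -50.0])
ref_pe = torch.tensor([-41.0, -56.0])
ref_mt = torch.tensor([-41.5, -49.0])
print(post_edit_dpo_loss(logp_pe, logp_mt, ref_pe, ref_mt))
```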
arxiv-664976
2410.02321
Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis
<|reference_start|>Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis: Diffusion models have achieved great success in generating high-dimensional samples across various applications. While the theoretical guarantees for continuous-state diffusion models have been extensively studied, the convergence analysis of the discrete-state counterparts remains under-explored. In this paper, we study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework. We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points. We derive convergence bounds for the Kullback-Leibler (KL) divergence and total variation (TV) distance between the generated sample distribution and the data distribution, considering both scenarios with and without early stopping under specific assumptions. Notably, our KL divergence bounds are nearly linear in dimension $d$, aligning with state-of-the-art results for diffusion models. Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function, which are essential for characterizing the discrete-time sampling process.<|reference_end|>
arxiv
@article{zhang2024convergence, title={Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis}, author={Zikun Zhang, Zixiang Chen, Quanquan Gu}, journal={arXiv preprint arXiv:2410.02321}, year={2024}, archivePrefix={arXiv}, eprint={2410.02321}, primaryClass={cs.LG stat.ML} }
zhang2024convergence
arxiv-664977
2410.02323
RESSCAL3D++: Joint Acquisition and Semantic Segmentation of 3D Point Clouds
<|reference_start|>RESSCAL3D++: Joint Acquisition and Semantic Segmentation of 3D Point Clouds: 3D scene understanding is crucial for facilitating seamless interaction between digital devices and the physical world. Real-time capturing and processing of the 3D scene are essential for achieving this seamless integration. While existing approaches typically separate acquisition and processing for each frame, the advent of resolution-scalable 3D sensors offers an opportunity to overcome this paradigm and fully leverage the otherwise wasted acquisition time to initiate processing. In this study, we introduce VX-S3DIS, a novel point cloud dataset accurately simulating the behavior of a resolution-scalable 3D sensor. Additionally, we present RESSCAL3D++, an important improvement over our prior work, RESSCAL3D, by incorporating an update module and processing strategy. By applying our method to the new dataset, we practically demonstrate the potential of joint acquisition and semantic segmentation of 3D point clouds. Our resolution-scalable approach significantly reduces scalability costs from 2% to just 0.2% in mIoU while achieving impressive speed-ups of 15.6 to 63.9% compared to the non-scalable baseline. Furthermore, our scalable approach enables early predictions, with the first one occurring after only 7% of the total inference time of the baseline. The new VX-S3DIS dataset is available at https://github.com/remcoroyen/vx-s3dis.<|reference_end|>
arxiv
@article{royen2024resscal3d++:, title={RESSCAL3D++: Joint Acquisition and Semantic Segmentation of 3D Point Clouds}, author={Remco Royen, Kostas Pataridis, Ward van der Tempel, Adrian Munteanu}, journal={arXiv preprint arXiv:2410.02323}, year={2024}, doi={10.1109/ICIP51287.2024.10647742}, archivePrefix={arXiv}, eprint={2410.02323}, primaryClass={cs.CV} }
royen2024resscal3d++:
arxiv-664978
2410.02324
Automated Tone Transcription and Clustering with Tone2Vec
<|reference_start|>Automated Tone Transcription and Clustering with Tone2Vec: Lexical tones play a crucial role in Sino-Tibetan languages. However, current phonetic fieldwork relies on manual effort, resulting in substantial time and financial costs. This is especially challenging for the numerous endangered languages that are rapidly disappearing, often compounded by limited funding. In this paper, we introduce pitch-based similarity representations for tone transcription, named Tone2Vec. Experiments on dialect clustering and variance show that Tone2Vec effectively captures fine-grained tone variation. Utilizing Tone2Vec, we develop the first automatic approach for tone transcription and clustering by presenting a novel representation transformation for transcriptions. Additionally, these algorithms are systematically integrated into an open-sourced and easy-to-use package, ToneLab, which facilitates automated fieldwork and cross-regional, cross-lexical analysis for tonal languages. Extensive experiments were conducted to demonstrate the effectiveness of our methods.<|reference_end|>
arxiv
@article{yang2024automated, title={Automated Tone Transcription and Clustering with Tone2Vec}, author={Yi Yang, Yiming Wang, ZhiQiang Tang, Jiahong Yuan}, journal={arXiv preprint arXiv:2410.02324}, year={2024}, archivePrefix={arXiv}, eprint={2410.02324}, primaryClass={cs.LG} }
yang2024automated
arxiv-664979
2410.02326
Autonomous Self-Trained Channel State Prediction Method for mmWave Vehicular Communications
<|reference_start|>Autonomous Self-Trained Channel State Prediction Method for mmWave Vehicular Communications: Establishing and maintaining 5G mmWave vehicular connectivity poses a significant challenge due to high user mobility that necessitates frequent triggering of beam switching procedures. Departing from reactive beam switching based on the user device channel state feedback, proactive beam switching prepares in advance for upcoming beam switching decisions by exploiting accurate channel state information (CSI) prediction. In this paper, we develop a framework for autonomous self-trained CSI prediction for mmWave vehicular users where a base station (gNB) collects and labels a dataset that it uses for training a recurrent neural network (RNN)-based CSI prediction model. The proposed framework exploits the CSI feedback from vehicular users combined with overhearing the C-V2X cooperative awareness messages (CAMs) they broadcast. We implement and evaluate the proposed framework using the deepMIMO dataset generation environment and demonstrate its capability to provide accurate CSI prediction for 5G mmWave vehicular users. The CSI prediction model is trained, and its capability to provide accurate CSI predictions from various input features is investigated.<|reference_end|>
arxiv
@article{orimogunje2024autonomous, title={Autonomous Self-Trained Channel State Prediction Method for mmWave Vehicular Communications}, author={Abidemi Orimogunje and Vukan Ninkovic and Evariste Twahirwa and Gaspard Gashema and Dejan Vukobratovic}, journal={arXiv preprint arXiv:2410.02326}, year={2024}, archivePrefix={arXiv}, eprint={2410.02326}, primaryClass={eess.SP cs.AI} }
orimogunje2024autonomous
arxiv-664980
2410.02329
AirTags for Human Localization, Not Just Objects
<|reference_start|>AirTags for Human Localization, Not Just Objects: Indoor localization has become increasingly important due to its wide-ranging applications in indoor navigation, emergency services, the Internet of Things (IoT), and accessibility for individuals with special needs. Traditional localization systems often require extensive calibration to achieve high accuracy. We introduce UbiLoc, an innovative, calibration-free indoor localization system that leverages Apple AirTags in a novel way to localize users instead of tracking objects. By utilizing the ubiquitous presence of AirTags and their Ultra-Wideband (UWB) technology, UbiLoc achieves centimeter-level accuracy, surpassing traditional WiFi and Bluetooth Low Energy (BLE) systems. UbiLoc addresses key challenges, including ranging errors caused by multipath and noise, through a novel AirTag selection technique. The system operates without the need for manual calibration, ensuring robustness and self-maintenance. Deployed on various Apple devices and tested in real-world environments, UbiLoc achieved median localization errors as low as 26 cm in a campus building and 31.5 cm in an apartment setting. These results demonstrate that UbiLoc is the first system to offer reliable, cm-level accuracy using widely available technology without requiring calibration, making it a promising solution for next-generation indoor localization systems.<|reference_end|>
arxiv
@article{hany2024airtags, title={AirTags for Human Localization, Not Just Objects}, author={Mohamed I. Hany, Hamada Rizk, Moustafa Youssef}, journal={arXiv preprint arXiv:2410.02329}, year={2024}, archivePrefix={arXiv}, eprint={2410.02329}, primaryClass={cs.NI} }
hany2024airtags
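For readers unfamiliar with how centimetre-level positions are obtained from UWB ranges, the sketch below performs basic range-based multilateration with a Gauss-Newton least-squares solver. The anchor (AirTag) coordinates, the true position, and the range noise are invented for illustration; UbiLoc's AirTag-selection technique and calibration-free error handling are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 4.0], [5.0, 4.0]])  # hypothetical AirTag positions (m)
p_true = np.array([2.1, 1.7])                                          # hypothetical user position

# Noisy UWB range measurements to each anchor.
ranges = np.linalg.norm(anchors - p_true, axis=1) + 0.03 * rng.standard_normal(len(anchors))

# Gauss-Newton iterations on the range residuals r_i(p) = ||p - a_i|| - rho_i.
p = anchors.mean(axis=0)                     # initial guess: centroid of anchors
for _ in range(20):
    d = np.linalg.norm(anchors - p, axis=1)
    J = (p - anchors) / d[:, None]           # Jacobian of the range model
    r = d - ranges
    p -= np.linalg.lstsq(J, r, rcond=None)[0]

print(p)   # close to p_true
```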
arxiv-664981
2410.02330
Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection
<|reference_start|>Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection: As a means of augmenting pre-trained large language models (LLMs), knowledge injection is critical for developing vertical-domain large models and has been widely studied. Although most current approaches, including parameter-efficient fine-tuning (PEFT) and block expansion methods, uniformly apply knowledge across all LLM layers, this raises the question: are all layers equally crucial for knowledge injection? We begin by evaluating the importance of each layer to find the optimal layer range for knowledge injection. Intuitively, the more important layers should play a more critical role in knowledge injection and deserve a denser injection. We observe performance dips in question-answering benchmarks after the removal or expansion of the shallow layers, and the degradation shrinks as the layer gets deeper, indicating that the shallow layers hold the key to knowledge injection. This insight leads us to propose the S strategy, a post-pretraining strategy of selectively enhancing shallow layers while pruning the less effective deep ones. Based on this strategy, we introduce Llama Slayer-8B and Llama Slayer-8B-Instruct. We experimented on the corpus of code & math and demonstrated the effectiveness of our strategy. Further experiments on a different LLM, Mistral-7B, and on a legal corpus confirmed the general applicability of the approach, underscoring its wide-ranging efficacy. Our code is available at: https://github.com/txchen-USTC/Llama-Slayer<|reference_end|>
arxiv
@article{chen2024llama, title={Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection}, author={Tianxiang Chen, Zhentao Tan, Tao Gong, Yue Wu, Qi Chu, Bin Liu, Jieping Ye, Nenghai Yu}, journal={arXiv preprint arXiv:2410.02330}, year={2024}, archivePrefix={arXiv}, eprint={2410.02330}, primaryClass={cs.CL} }
chen2024llama
arxiv-664982
2410.02331
Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks
<|reference_start|>Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks: The increasing demand for transparent and reliable models, particularly in high-stakes decision-making areas such as medical image analysis, has led to the emergence of eXplainable Artificial Intelligence (XAI). Post-hoc XAI techniques, which aim to explain black-box models after training, have been controversial in recent works concerning their fidelity to the models' predictions. In contrast, Self-eXplainable AI (S-XAI) offers a compelling alternative by incorporating explainability directly into the training process of deep learning models. This approach allows models to generate inherent explanations that are closely aligned with their internal decision-making processes. Such enhanced transparency significantly supports the trustworthiness, robustness, and accountability of AI systems in real-world medical applications. To facilitate the development of S-XAI methods for medical image analysis, this survey presents a comprehensive review across various image modalities and clinical applications. It covers more than 200 papers from three key perspectives: 1) input explainability through the integration of explainable feature engineering and knowledge graphs, 2) model explainability via attention-based learning, concept-based learning, and prototype-based learning, and 3) output explainability by providing counterfactual explanations and textual explanations. Additionally, this paper outlines the desired characteristics of explainability and existing evaluation methods for assessing explanation quality. Finally, it discusses the major challenges and future research directions in developing S-XAI for medical image analysis.<|reference_end|>
arxiv
@article{hou2024self-explainable, title={Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks}, author={Junlin Hou, Sicen Liu, Yequan Bie, Hongmei Wang, Andong Tan, Luyang Luo, Hao Chen}, journal={arXiv preprint arXiv:2410.02331}, year={2024}, archivePrefix={arXiv}, eprint={2410.02331}, primaryClass={cs.CV} }
hou2024self-explainable
arxiv-664983
2410.02335
Data Optimisation of Machine Learning Models for Smart Irrigation in Urban Parks
<|reference_start|>Data Optimisation of Machine Learning Models for Smart Irrigation in Urban Parks: Urban environments face significant challenges due to climate change, including extreme heat, drought, and water scarcity, which impact public health, community well-being, and local economies. Effective management of these issues is crucial, particularly in areas like Sydney Olympic Park, which relies on one of Australia's largest irrigation systems. The Smart Irrigation Management for Parks and Cool Towns (SIMPaCT) project, initiated in 2021, leverages advanced technologies and machine learning models to optimize irrigation and induce physical cooling. This paper introduces two novel methods to enhance the efficiency of the SIMPaCT system's extensive sensor network and applied machine learning models. The first method employs clustering of sensor time series data using K-shape and K-means algorithms to estimate readings from missing sensors, ensuring continuous and reliable data. This approach can detect anomalies, correct data sources, and identify and remove redundant sensors to reduce maintenance costs. The second method involves sequential data collection from different sensor locations using robotic systems, significantly reducing the need for high numbers of stationary sensors. Together, these methods aim to maintain accurate soil moisture predictions while optimizing sensor deployment and reducing maintenance costs, thereby enhancing the efficiency and effectiveness of the smart irrigation system. Our evaluations demonstrate significant improvements in the efficiency and cost-effectiveness of soil moisture monitoring networks. The cluster-based replacement of missing sensors provides up to 5.4% decrease in average error. The sequential sensor data collection as a robotic emulation shows 17.2% and 2.1% decrease in average error for circular and linear paths respectively.<|reference_end|>
arxiv
@article{ghadiri2024data, title={Data Optimisation of Machine Learning Models for Smart Irrigation in Urban Parks}, author={Nasser Ghadiri, Bahman Javadi, Oliver Obst and Sebastian Pfautsch}, journal={arXiv preprint arXiv:2410.02335}, year={2024}, archivePrefix={arXiv}, eprint={2410.02335}, primaryClass={cs.LG cs.RO} }
ghadiri2024data
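A compact stand-in for the first method described above: cluster the sensors by the shape of their time series and estimate a failed sensor's readings from its cluster mates. Plain K-means is used here instead of K-shape, and the synthetic sine-wave series, sensor count, and cluster count are all placeholder choices rather than SIMPaCT data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: 30 soil-moisture sensors, 96 readings each (one day at 15-min cadence).
n_sensors, n_steps = 30, 96
t = np.linspace(0, 2 * np.pi, n_steps)
series = np.stack([np.sin(t + rng.uniform(0, 1)) + 0.05 * rng.standard_normal(n_steps)
                   for _ in range(n_sensors)])

# Cluster sensors by the shape of their daily series (K-means stand-in for K-shape).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(series)

# Estimate a missing sensor's readings from the mean of its cluster mates.
missing = 7
mates = (labels == labels[missing]) & (np.arange(n_sensors) != missing)
estimate = series[mates].mean(axis=0) if mates.any() else series.mean(axis=0)
print(np.abs(estimate - series[missing]).mean())
```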
arxiv-664984
2410.02338
How Much Can RAG Help the Reasoning of LLM?
<|reference_start|>How Much Can RAG Help the Reasoning of LLM?: Retrieval-Augmented Generation (RAG) has gained significant popularity in modern Large Language Models (LLMs) due to its effectiveness in introducing new knowledge and reducing hallucinations. However, a deep understanding of RAG remains limited: how RAG helps the reasoning process, and whether RAG can improve reasoning capability, remain open questions. While external documents are typically considered a method to incorporate domain-specific information, they also contain intermediate reasoning results related to the query; this suggests that documents could enhance the reasoning capability of LLMs, which has not been previously explored. In this paper, we investigate this issue in depth and find that while RAG can assist with reasoning, the help is limited. If we conceptualize the reasoning process as a tree with fixed depth, then RAG struggles to assist LLMs in performing deeper reasoning. Additionally, the information in the documents requires preprocessing to filter out noise. We demonstrate that this preprocessing is difficult to achieve by simply fine-tuning the LLM; it often necessitates numerous additional transformer layers to solve the problem. To simplify the problem, we propose DPrompt tuning, which effectively resolves the issue within just a limited number of transformer layers, leading to improved performance.<|reference_end|>
arxiv
@article{liu2024how, title={How Much Can RAG Help the Reasoning of LLM?}, author={Jingyu Liu, Jiaen Lin, Yong Liu}, journal={arXiv preprint arXiv:2410.02338}, year={2024}, archivePrefix={arXiv}, eprint={2410.02338}, primaryClass={cs.CL cs.AI} }
liu2024how
arxiv-664985
2410.02340
Equivalence between Geometric Frequency and Lagrange Derivative
<|reference_start|>Equivalence between Geometric Frequency and Lagrange Derivative: The paper shows the equivalence between the geometric frequency of an electric quantity, namely, voltage or current, and the Lagrange derivative of a stream-line of a fluid. The geometric frequency is a concept recently proposed by the author and is a generalization of the instantaneous frequency, a quantity that is particularly important for the analysis and the control of electric power systems. On the other hand, the Lagrange derivative is mostly utilized in fluid dynamics and helps decompose the time derivative into various components. The paper shows how these components relate to the elements of the geometric frequency. The paper also shows, through a variety of numerical examples, how the decomposition of the Lagrange derivative helps identify the distortion of the waveform of a measured electric quantity and how this information can be utilized to classify system operating conditions.<|reference_end|>
arxiv
@article{milano2024equivalence, title={Equivalence between Geometric Frequency and Lagrange Derivative}, author={Federico Milano}, journal={arXiv preprint arXiv:2410.02340}, year={2024}, archivePrefix={arXiv}, eprint={2410.02340}, primaryClass={eess.SY cs.SY math.DG physics.flu-dyn} }
milano2024equivalence
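For reference, the fluid-dynamics operator the abstract refers to is the standard Lagrange (material) derivative written below; how its local and convective terms map onto the components of the geometric frequency is worked out in the paper itself and is not restated here.

```latex
% Standard definition of the Lagrange (material) derivative of a scalar
% field f(x, t) advected by a velocity field v.
\[
  \frac{\mathrm{D} f}{\mathrm{D} t}
  \;=\;
  \underbrace{\frac{\partial f}{\partial t}}_{\text{local rate of change}}
  \;+\;
  \underbrace{(\mathbf{v}\cdot\nabla) f}_{\text{convective term}} .
\]
```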
arxiv-664986
2410.02342
Capacity Bounds for the Poisson-Repeat Channel
<|reference_start|>Capacity Bounds for the Poisson-Repeat Channel: We develop bounds on the capacity of Poisson-repeat channels (PRCs) for which each input bit is independently repeated according to a Poisson distribution. The upper bounds are obtained by considering an auxiliary channel where the output lengths corresponding to input blocks of a given length are provided as side information at the receiver. Numerical results show that the resulting upper bounds are significantly tighter than the best known one for a large range of the PRC parameter $\lambda$ (specifically, for $\lambda\ge 0.35$). We also describe a way of obtaining capacity lower bounds using information rates of the auxiliary channel and the entropy rate of the provided side information.<|reference_end|>
arxiv
@article{kazemi2024capacity, title={Capacity Bounds for the Poisson-Repeat Channel}, author={Mohammad Kazemi, Tolga M. Duman}, journal={arXiv preprint arXiv:2410.02342}, year={2024}, archivePrefix={arXiv}, eprint={2410.02342}, primaryClass={cs.IT math.IT} }
kazemi2024capacity
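The channel law itself is simple to state in code: each input bit is independently repeated a Poisson-distributed number of times, with a count of zero acting as a deletion. The sketch below only simulates that law (the block length and the choice lambda = 0.35, the smallest value for which the tighter bounds are reported, are illustrative); computing the capacity bounds via the auxiliary side-information channel is left to the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_repeat_channel(bits, lam):
    """Each input bit is independently repeated Poisson(lam) times;
    a repeat count of zero deletes the bit."""
    reps = rng.poisson(lam, size=len(bits))
    return np.repeat(bits, reps)

bits = rng.integers(0, 2, size=20)
out = poisson_repeat_channel(bits, lam=0.35)
print(bits)
print(out)
```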
arxiv-664987
2410.02343
Listening to the Wise Few: Select-and-Copy Attention Heads for Multiple-Choice QA
<|reference_start|>Listening to the Wise Few: Select-and-Copy Attention Heads for Multiple-Choice QA: A standard way to evaluate the abilities of an LLM involves presenting a multiple-choice question and selecting the option with the highest logit as the model's predicted answer. However, such a format for evaluating LLMs has limitations, since even if the model knows the correct answer, it may struggle to select the corresponding letter simply due to difficulties in following this rigid format. To address this, we introduce new scores that better capture and reveal the model's underlying knowledge: the Query-Key Score (QK-score), derived from the interaction between query and key representations in attention heads, and the Attention Score, based on attention weights. These scores are extracted from specific select-and-copy heads, which show consistent performance across popular Multi-Choice Question Answering (MCQA) datasets. Based on these scores, our method improves knowledge extraction, yielding up to a 16% gain for LLaMA2-7B and up to 10% for larger models on popular MCQA benchmarks. At the same time, the accuracy on a simple synthetic dataset, where the model explicitly knows the right answer, increases by almost 60%, achieving nearly perfect accuracy, thereby demonstrating the method's efficiency in mitigating MCQA format limitations. To support our claims, we conduct experiments on models ranging from 7 billion to 70 billion parameters in both zero- and few-shot setups.<|reference_end|>
arxiv
@article{tulchinskii2024listening, title={Listening to the Wise Few: Select-and-Copy Attention Heads for Multiple-Choice QA}, author={Eduard Tulchinskii, Laida Kushnareva, Kristian Kuznetsov, Anastasia Voznyuk, Andrei Andriiainen, Irina Piontkovskaya, Evgeny Burnaev, Serguei Barannikov}, journal={arXiv preprint arXiv:2410.02343}, year={2024}, archivePrefix={arXiv}, eprint={2410.02343}, primaryClass={cs.CL cs.LG} }
tulchinskii2024listening
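To illustrate how a QK-score turns head activations into an answer choice, here is a toy sketch: given the query vector of the final token and the key vectors at each option token from one select-and-copy head, the predicted option is the one with the largest scaled dot product. The random tensors are placeholders; in practice these vectors would be extracted from the evaluated LLM's attention head, which is not shown here.

```python
import torch

torch.manual_seed(0)
d_head = 64                                   # hypothetical head dimension

q_last = torch.randn(d_head)                  # query of the final token (stand-in)
option_keys = {c: torch.randn(d_head) for c in "ABCD"}  # keys at the option tokens (stand-ins)

# QK-score: scaled dot product between the final query and each option key.
qk_scores = {c: float(q_last @ k / d_head**0.5) for c, k in option_keys.items()}
prediction = max(qk_scores, key=qk_scores.get)
print(qk_scores)
print("predicted option:", prediction)
```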
arxiv-664988
2410.02344
RelChaNet: Neural Network Feature Selection using Relative Change Scores
<|reference_start|>RelChaNet: Neural Network Feature Selection using Relative Change Scores: There is an ongoing effort to develop feature selection algorithms to improve interpretability, reduce computational resources, and minimize overfitting in predictive models. Neural networks stand out as architectures on which to build feature selection methods, and recently, neuron pruning and regrowth have emerged from the sparse neural network literature as promising new tools. We introduce RelChaNet, a novel and lightweight feature selection algorithm that uses neuron pruning and regrowth in the input layer of a dense neural network. For neuron pruning, a gradient sum metric measures the relative change induced in a network after a feature enters, while neurons are randomly regrown. We also propose an extension that adapts the size of the input layer at runtime. Extensive experiments on nine different datasets show that our approach generally outperforms the current state-of-the-art methods, and in particular improves the average accuracy by 2% on the MNIST dataset. Our code is available at https://github.com/flxzimmer/relchanet.<|reference_end|>
arxiv
@article{zimmer2024relchanet:, title={RelChaNet: Neural Network Feature Selection using Relative Change Scores}, author={Felix Zimmer}, journal={arXiv preprint arXiv:2410.02344}, year={2024}, archivePrefix={arXiv}, eprint={2410.02344}, primaryClass={cs.LG} }
zimmer2024relchanet:
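A toy version of the prune-and-regrow loop described above is sketched next: a binary mask over the input features is trained inside a small dense network, features are scored by an accumulated input-layer gradient magnitude (a rough stand-in for the paper's relative-change score), and every few steps the weakest active feature is pruned while a random inactive one is regrown. The dataset, network sizes, and schedule are all invented, so this is an illustration of the mechanism rather than RelChaNet itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features, hidden, k_active = 20, 32, 8                   # hypothetical sizes

X = torch.randn(256, n_features)
y = (X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] > 0).float()    # only features 0, 3, 7 matter

net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
mask = torch.zeros(n_features)
mask[torch.randperm(n_features)[:k_active]] = 1.0
grad_sum = torch.zeros(n_features)

for step in range(600):
    opt.zero_grad()
    logits = net(X * mask).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss.backward()
    # Per-feature score from input-layer weight gradients (stand-in metric).
    grad_sum += net[0].weight.grad.abs().sum(dim=0)
    opt.step()
    if step % 100 == 99:                                   # prune-and-regrow step
        active = mask.nonzero().flatten()
        inactive = (mask == 0).nonzero().flatten()
        worst = active[grad_sum[active].argmin()]          # weakest active feature
        mask[worst] = 0.0
        mask[inactive[torch.randint(len(inactive), (1,))]] = 1.0
        grad_sum.zero_()

print("selected features:", mask.nonzero().flatten().tolist())
```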
arxiv-664989
2410.02345
Coastal Underwater Evidence Search System with Surface-Underwater Collaboration
<|reference_start|>Coastal Underwater Evidence Search System with Surface-Underwater Collaboration: The coastal underwater evidence search system with surface-underwater collaboration is designed to revolutionize the search for artificial objects in coastal underwater environments, overcoming limitations associated with traditional methods such as divers and tethered remotely operated vehicles. Our innovative multi-robot collaborative system consists of three parts: an autonomous surface vehicle as a mission control center, a towed underwater vehicle for wide-area search, and a biomimetic underwater robot inspired by marine organisms for detailed inspections of identified areas. We conduct extensive simulations and real-world experiments in pond environments and coastal fields to demonstrate the system's potential to surpass the limitations of conventional underwater search methods, offering a robust and efficient solution for law enforcement and recovery operations in marine settings.<|reference_end|>
arxiv
@article{lin2024coastal, title={Coastal Underwater Evidence Search System with Surface-Underwater Collaboration}, author={Hin Wang Lin, Pengyu Wang, Zhaohua Yang, Ka Chun Leung, Fangming Bao, Ka Yu Kui, Jian Xiang Erik Xu and Ling Shi}, journal={arXiv preprint arXiv:2410.02345}, year={2024}, archivePrefix={arXiv}, eprint={2410.02345}, primaryClass={cs.RO} }
lin2024coastal
arxiv-664990
2410.02348
Simplicity bias and optimization threshold in two-layer ReLU networks
<|reference_start|>Simplicity bias and optimization threshold in two-layer ReLU networks: Understanding the generalization of overparametrized neural networks remains a fundamental challenge in machine learning. The literature mostly studies generalization from an interpolation point of view, taking convergence of parameters towards a global minimum of the training loss for granted. While overparametrized architectures indeed interpolate the data for typical classification tasks, this interpolation paradigm does not seem valid anymore for more complex tasks such as in-context learning or diffusion. Instead, for such tasks, it has been empirically observed that the trained models go from global minima to spurious local minima of the training loss as the number of training samples becomes larger than some level we call the optimization threshold. While the former yields poor generalization to the true population loss, the latter was observed to actually correspond to the minimiser of this true loss. This paper theoretically explores this phenomenon in the context of two-layer ReLU networks. We demonstrate that, despite overparametrization, networks often converge toward simpler solutions rather than interpolating the training data, which can lead to a drastic improvement in the test loss with respect to interpolating solutions. Our analysis relies on the so-called early alignment phase, during which neurons align towards specific directions. This directional alignment, which occurs in the early stage of training, leads to a simplicity bias, wherein the network approximates the ground truth model without converging to the global minimum of the training loss. Our results suggest that this bias, resulting in an optimization threshold from which interpolation is not reached anymore, is beneficial and enhances the generalization of trained models.<|reference_end|>
arxiv
@article{boursier2024simplicity, title={Simplicity bias and optimization threshold in two-layer ReLU networks}, author={Etienne Boursier and Nicolas Flammarion}, journal={arXiv preprint arXiv:2410.02348}, year={2024}, archivePrefix={arXiv}, eprint={2410.02348}, primaryClass={cs.LG stat.ML} }
boursier2024simplicity
arxiv-664991
2410.02352
ProtoSeg: A Prototype-Based Point Cloud Instance Segmentation Method
<|reference_start|>ProtoSeg: A Prototype-Based Point Cloud Instance Segmentation Method: 3D instance segmentation is crucial for obtaining an understanding of a point cloud scene. This paper presents a novel neural network architecture for performing instance segmentation on 3D point clouds. We propose to jointly learn coefficients and prototypes in parallel, which can be combined to obtain the instance predictions. The coefficients are computed using an overcomplete set of sampled points with a novel multi-scale module, dubbed dilated point inception. As the set of obtained instance mask predictions is overcomplete, we employ a non-maximum suppression algorithm to retrieve the final predictions. This approach makes it possible to omit the time-expensive clustering step and leads to a more stable inference time. The proposed method is not only 28% faster than the state-of-the-art but also exhibits the lowest standard deviation. Our experiments have shown that the standard deviation of the inference time is only 1.0% of the total time, while it ranges between 10.8 and 53.1% for the state-of-the-art methods. Lastly, our method outperforms the state-of-the-art both on S3DIS-blocks (4.9% in mRec on Fold-5) and PartNet (2.0% on average in mAP).<|reference_end|>
arxiv
@article{royen2024protoseg:, title={ProtoSeg: A Prototype-Based Point Cloud Instance Segmentation Method}, author={Remco Royen, Leon Denis, Adrian Munteanu}, journal={arXiv preprint arXiv:2410.02352}, year={2024}, archivePrefix={arXiv}, eprint={2410.02352}, primaryClass={cs.CV} }
royen2024protoseg:
arxiv-664992
2410.02355
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models
<|reference_start|>AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models: Large language models (LLMs) often exhibit hallucinations due to incorrect or outdated knowledge. Hence, model editing methods have emerged to enable targeted knowledge updates. To achieve this, a prevailing paradigm is the locating-then-editing approach, which first locates influential parameters and then edits them by introducing a perturbation. While effective, current studies have demonstrated that this perturbation inevitably disrupts the originally preserved knowledge within LLMs, especially in sequential editing scenarios. To address this, we introduce AlphaEdit, a novel solution that projects the perturbation onto the null space of the preserved knowledge before applying it to the parameters. We theoretically prove that this projection ensures the output of post-edited LLMs remains unchanged when queried about the preserved knowledge, thereby mitigating the issue of disruption. Extensive experiments on various LLMs, including LLaMA3, GPT2-XL, and GPT-J, show that AlphaEdit boosts the performance of most locating-then-editing methods by an average of 36.4% with only a single additional line of code for the projection. Our code is available at: https://github.com/jianghoucheng/AlphaEdit.<|reference_end|>
arxiv
@article{fang2024alphaedit:, title={AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models}, author={Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Xiang Wang, Xiangnan He, Tat-seng Chua}, journal={arXiv preprint arXiv:2410.02355}, year={2024}, archivePrefix={arXiv}, eprint={2410.02355}, primaryClass={cs.CL cs.AI} }
fang2024alphaedit:
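The null-space projection at the heart of the method can be demonstrated in a few lines of NumPy. In the sketch below, K0, W, and Delta are random placeholders for the preserved-knowledge key matrix, the edited weight, and the raw perturbation produced by a locate-then-edit method; only the projection step and the invariance it guarantees are shown, not the rest of the AlphaEdit pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, m = 64, 32, 40                      # hypothetical dimensions (m < d_in)
K0 = rng.standard_normal((d_in, m))              # keys of knowledge to preserve (stand-in)
W = rng.standard_normal((d_out, d_in))           # weight of the edited layer (stand-in)
Delta = rng.standard_normal((d_out, d_in))       # raw edit from a locate-then-edit method (stand-in)

# Orthonormal basis U of the column space of K0; (I - U U^T) projects onto
# its orthogonal complement, i.e. the null space used for the edit.
U, S, _ = np.linalg.svd(K0, full_matrices=False)
U = U[:, S > 1e-10 * S[0]]
P = np.eye(d_in) - U @ U.T

Delta_proj = Delta @ P                           # projected perturbation

# Outputs on the preserved keys are unchanged by the projected edit.
print(np.allclose((W + Delta_proj) @ K0, W @ K0))   # True
```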
arxiv-664993
2410.02358
Cross-Domain Comparative Analysis of Digital Twins and Universalised Solutions
<|reference_start|>Cross-Domain Comparative Analysis of Digital Twins and Universalised Solutions: Digitalisation is one of the main drivers of most economic sectors nowadays, and the digital twin, as a reification of digitalisation for complex systems, has attracted much attention from both academia and industry. There have been studies focusing on digital twins in a specific sector, while few works offer insightful comparisons of digital twins across different domains. Considering that digital twinning is a cross-domain transformation, it is beneficial to establish principles of universality and variation that can explain similarities and differences in any digital twin. This paper first delivers a comparative analysis of digital twins in five domains through a six-dimensional characterisation framework. Then, starting from the correlations among domain-specific DT developments, a cross-domain Digital Twin Platform-as-a-Service (DT-PaaS) is proposed to universalise the common processes, tools and applications, while remaining inclusive of the variations of every digital twin instance. As a centralised data, modelling and service platform, it is expected to break the barriers between domains by enabling cross-domain digital twin data sharing, interoperability and development synergy, and to tackle complex global challenges such as climate change, net zero, and pandemics.<|reference_end|>
arxiv
@article{xiong2024cross-domain, title={Cross-Domain Comparative Analysis of Digital Twins and Universalised Solutions}, author={Guanyu Xiong, Yan Gao, Haijiang Li}, journal={arXiv preprint arXiv:2410.02358}, year={2024}, archivePrefix={arXiv}, eprint={2410.02358}, primaryClass={eess.SY cs.SY} }
xiong2024cross-domain
arxiv-664994
2410.02360
Source Data Selection for Brain-Computer Interfaces based on Simple Features
<|reference_start|>Source Data Selection for Brain-Computer Interfaces based on Simple Features: This paper demonstrates that simple features available during the calibration of a brain-computer interface can be utilized for source data selection to improve the performance of the brain-computer interface for a new target user through transfer learning. To support this, a public motor imagery dataset is used for analysis, and a method called the Transfer Performance Predictor method is presented. The simple features are based on the covariance matrices of the data and the Riemannian distance between them. The Transfer Performance Predictor method outperforms other source data selection methods as it selects source data that gives a better transfer learning performance for the target users.<|reference_end|>
arxiv
@article{heskebeck2024source, title={Source Data Selection for Brain-Computer Interfaces based on Simple Features}, author={Frida Heskebeck, Carolina Bergeling, Bo Bernhardsson}, journal={arXiv preprint arXiv:2410.02360}, year={2024}, archivePrefix={arXiv}, eprint={2410.02360}, primaryClass={cs.HC cs.LG} }
heskebeck2024source
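The abstract above mentions features built from covariance matrices and the Riemannian distance between them. Below is a minimal numpy/scipy sketch of that distance (the affine-invariant metric on SPD matrices) together with a hypothetical way of ranking source subjects by it; the ranking heuristic, shapes and names are illustrative assumptions, since the Transfer Performance Predictor itself is not specified in the abstract.

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant Riemannian distance between two SPD covariance matrices."""
    # Generalized eigenvalues of (B, A) are the eigenvalues of A^{-1} B.
    eigvals = eigvalsh(B, A)
    return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

def trial_covariance(X: np.ndarray, shrinkage: float = 1e-3) -> np.ndarray:
    """Sample covariance of one EEG trial X (channels x samples), lightly regularised."""
    C = np.cov(X)
    return C + shrinkage * np.trace(C) / C.shape[0] * np.eye(C.shape[0])

# Hypothetical usage: rank candidate source subjects by how close their
# calibration covariance is to the target user's calibration covariance.
rng = np.random.default_rng(1)
target_cov = trial_covariance(rng.standard_normal((8, 500)))
source_covs = {f"subject_{i}": trial_covariance(rng.standard_normal((8, 500))) for i in range(5)}
ranking = sorted(source_covs, key=lambda s: riemannian_distance(target_cov, source_covs[s]))
print(ranking)
```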
arxiv-664995
2410.02362
A Comprehensive Survey of Mamba Architectures for Medical Image Analysis: Classification, Segmentation, Restoration and Beyond
<|reference_start|>A Comprehensive Survey of Mamba Architectures for Medical Image Analysis: Classification, Segmentation, Restoration and Beyond: Mamba, a special case of the State Space Model, is gaining popularity as an alternative to template-based deep learning approaches in medical image analysis. While transformers are powerful architectures, they have drawbacks, including quadratic computational complexity and an inability to address long-range dependencies efficiently. This limitation affects the analysis of large and complex datasets in medical imaging, where there are many spatial and temporal relationships. In contrast, Mamba offers benefits that make it well-suited for medical image analysis. It has linear time complexity, which is a significant improvement over transformers. Mamba processes longer sequences without attention mechanisms, enabling faster inference and requiring less memory. Mamba also demonstrates strong performance in merging multimodal data, improving diagnosis accuracy and patient outcomes. The organization of this paper allows readers to appreciate the capabilities of Mamba in medical imaging step by step. We begin by defining core concepts of SSMs and models, including S4, S5, and S6, followed by an exploration of Mamba architectures such as pure Mamba, U-Net variants, and hybrid models with convolutional neural networks, transformers, and Graph Neural Networks. We also cover Mamba optimizations, techniques and adaptations, scanning, datasets, applications, experimental results, and conclude with its challenges and future directions in medical imaging. This review aims to demonstrate the transformative potential of Mamba in overcoming existing barriers within medical imaging while paving the way for innovative advancements in the field. A comprehensive list of Mamba architectures applied in the medical field, reviewed in this work, is available at Github.<|reference_end|>
arxiv
@article{bansal2024a, title={A Comprehensive Survey of Mamba Architectures for Medical Image Analysis: Classification, Segmentation, Restoration and Beyond}, author={Shubhi Bansal, Sreeharish A, Madhava Prasath J, Manikandan S, Sreekanth Madisetty, Mohammad Zia Ur Rehman, Chandravardhan Singh Raghaw, Gaurav Duggal, and Nagendra Kumar}, journal={arXiv preprint arXiv:2410.02362}, year={2024}, archivePrefix={arXiv}, eprint={2410.02362}, primaryClass={cs.CV cs.AI} }
bansal2024a
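Since the survey above starts from the core state-space concepts (S4, S5, S6), a minimal sketch of the discretised linear SSM recurrence that those models build on may help orient readers. The discretisation shown (A_bar = exp(delta*A), B_bar = delta*B) and the sequential scan are a simplified illustration only, not any specific architecture from the survey.

```python
import numpy as np
from scipy.linalg import expm

def ssm_scan(u, A, B, C, delta):
    """Sequential scan of a single-input, single-output linear state-space model.

    u: (L,) input sequence; A: (N, N) state matrix; B, C: (N,) input/output
    vectors; delta: step size. Uses a simplified zero-order-hold style
    discretisation A_bar = exp(delta*A), B_bar = delta*B.
    """
    A_bar = expm(delta * A)
    B_bar = delta * B
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A_bar @ x + B_bar * u_k   # linear recurrent state update
        ys.append(C @ x)              # readout
    return np.array(ys)

# Toy usage with a stable random state matrix.
rng = np.random.default_rng(0)
N, L = 16, 64
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))
y = ssm_scan(rng.standard_normal(L), A, rng.standard_normal(N), rng.standard_normal(N), delta=0.1)
print(y.shape)  # (64,)
```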
arxiv-664996
2410.02365
From Concrete to Abstract: A Multimodal Generative Approach to Abstract Concept Learning
<|reference_start|>From Concrete to Abstract: A Multimodal Generative Approach to Abstract Concept Learning: Understanding and manipulating concrete and abstract concepts is fundamental to human intelligence. Yet, they remain challenging for artificial agents. This paper introduces a multimodal generative approach to high-order abstract concept learning, which integrates visual and categorical linguistic information from concrete concepts. Our model initially grounds subordinate-level concrete concepts, combines them to form basic-level concepts, and finally abstracts to superordinate-level concepts via the grounding of basic-level concepts. We evaluate the model's language learning ability through language-to-visual and visual-to-language tests with high-order abstract concepts. Experimental results demonstrate the proficiency of the model in both language understanding and language naming tasks.<|reference_end|>
arxiv
@article{xie2024from, title={From Concrete to Abstract: A Multimodal Generative Approach to Abstract Concept Learning}, author={Haodong Xie, Rahul Singh Maharjan, Federico Tavella, Angelo Cangelosi}, journal={arXiv preprint arXiv:2410.02365}, year={2024}, archivePrefix={arXiv}, eprint={2410.02365}, primaryClass={cs.CL cs.AI} }
xie2024from
arxiv-664997
2410.02367
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
<|reference_start|>SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration: The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When handling large sequence lengths, attention becomes the primary time-consuming component. Although quantization has proven to be an effective method for accelerating model inference, existing quantization methods primarily focus on optimizing the linear layer. In response, we first analyze the feasibility of quantizing attention in detail. Following that, we propose SageAttention, a highly efficient and accurate quantization method for attention. The OPS (operations per second) of our approach outperforms FlashAttention2 and xformers by about 2.1 times and 2.7 times, respectively. SageAttention also achieves higher accuracy than FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metric loss across diverse models, including those for large language processing, image generation, and video generation.<|reference_end|>
arxiv
@article{zhang2024sageattention:, title={SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration}, author={Jintao Zhang, Jia wei, Haofeng Huang, Pengle Zhang, Jun Zhu, Jianfei Chen}, journal={arXiv preprint arXiv:2410.02367}, year={2024}, archivePrefix={arXiv}, eprint={2410.02367}, primaryClass={cs.LG} }
zhang2024sageattention:
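The core idea in the SageAttention abstract above, performing the attention matmul in 8-bit while keeping accuracy, can be sketched with a toy per-tensor INT8 quantisation of the QK^T product. This is only an illustration of the general idea; the paper's actual kernels, blocking scheme and accuracy-preserving tricks are not described in the abstract and are not reproduced here.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantisation; returns quantised values and scale."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Toy attention with the QK^T matmul done in int8 (accumulated in int32).

    Q, K, V: (L, d) float32. V is kept in higher precision here; a real kernel
    would also block the matrices and fuse the dequantisation.
    """
    d = Q.shape[-1]
    q_q, s_q = quantize_int8(Q)
    k_q, s_k = quantize_int8(K)
    # int8 x int8 -> int32 accumulation, then dequantise with the two scales
    scores = (q_q.astype(np.int32) @ k_q.astype(np.int32).T) * (s_q * s_k) / np.sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    probs = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return probs.astype(np.float32) @ V
```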
arxiv-664998
2410.02369
Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation
<|reference_start|>Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation: The Diffusion Model has not only garnered noteworthy achievements in the realm of image generation but has also demonstrated its potential as an effective pretraining method utilizing unlabeled data. Drawing from the extensive potential unveiled by the Diffusion Model in both semantic correspondence and open vocabulary segmentation, our work initiates an investigation into employing the Latent Diffusion Model for Few-shot Semantic Segmentation. Recently, inspired by the in-context learning ability of large language models, Few-shot Semantic Segmentation has evolved into In-context Segmentation tasks, morphing into a crucial element in assessing generalist segmentation models. In this context, we concentrate on Few-shot Semantic Segmentation, establishing a solid foundation for the future development of a Diffusion-based generalist model for segmentation. Our initial focus lies in understanding how to facilitate interaction between the query image and the support image, resulting in the proposal of a KV fusion method within the self-attention framework. Subsequently, we delve deeper into optimizing the infusion of information from the support mask and simultaneously re-evaluating how to provide reasonable supervision from the query mask. Based on our analysis, we establish a simple and effective framework named DiffewS, maximally retaining the original Latent Diffusion Model's generative framework and effectively utilizing the pre-training prior. Experimental results demonstrate that our method significantly outperforms the previous SOTA models in multiple settings.<|reference_end|>
arxiv
@article{zhu2024unleashing, title={Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation}, author={Muzhi Zhu, Yang Liu, Zekai Luo, Chenchen Jing, Hao Chen, Guangkai Xu, Xinlong Wang, Chunhua Shen}, journal={arXiv preprint arXiv:2410.02369}, year={2024}, archivePrefix={arXiv}, eprint={2410.02369}, primaryClass={cs.CV} }
zhu2024unleashing
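The "KV fusion method within the self-attention framework" mentioned in the abstract above can be read, generically, as letting the query image's tokens attend to keys and values drawn from the support image as well. The PyTorch sketch below shows that generic pattern; the tensor shapes, head split and function name are assumptions for illustration, and the actual DiffewS design inside a latent diffusion U-Net may differ.

```python
import torch
import torch.nn.functional as F

def kv_fused_attention(q_query, k_query, v_query, k_support, v_support, num_heads=8):
    """Self-attention over query-image tokens where the key/value sets are
    extended with tokens from the support image (a generic 'KV fusion').

    All tensors: (B, L, D). Schematic sketch of the idea named in the abstract.
    """
    k = torch.cat([k_query, k_support], dim=1)   # (B, L_q + L_s, D)
    v = torch.cat([v_query, v_support], dim=1)
    B, Lq, D = q_query.shape
    h, dh = num_heads, D // num_heads
    # split heads: (B, L, D) -> (B, h, L, dh)
    q = q_query.view(B, Lq, h, dh).transpose(1, 2)
    k = k.view(B, -1, h, dh).transpose(1, 2)
    v = v.view(B, -1, h, dh).transpose(1, 2)
    out = F.scaled_dot_product_attention(q, k, v)  # (B, h, Lq, dh)
    return out.transpose(1, 2).reshape(B, Lq, D)

# Toy usage with hypothetical token counts and feature width.
B, Lq, Ls, D = 2, 256, 256, 512
q, kq, vq = (torch.randn(B, Lq, D) for _ in range(3))
ks, vs = (torch.randn(B, Ls, D) for _ in range(2))
print(kv_fused_attention(q, kq, vq, ks, vs).shape)  # torch.Size([2, 256, 512])
```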
arxiv-664999
2410.02371
NTU-NPU System for Voice Privacy 2024 Challenge
<|reference_start|>NTU-NPU System for Voice Privacy 2024 Challenge: In this work, we describe our submissions for the Voice Privacy Challenge 2024. Rather than proposing a novel speech anonymization system, we enhance the provided baselines to meet all required conditions and improve evaluated metrics. Specifically, we implement emotion embedding and experiment with WavLM and ECAPA2 speaker embedders for the B3 baseline. Additionally, we compare different speaker and prosody anonymization techniques. Furthermore, we introduce Mean Reversion F0 for B5, which helps to enhance privacy without a loss in utility. Finally, we explore disentanglement models, namely $\beta$-VAE and NaturalSpeech3 FACodec.<|reference_end|>
arxiv
@article{kuzmin2024ntu-npu, title={NTU-NPU System for Voice Privacy 2024 Challenge}, author={Nikita Kuzmin, Hieu-Thi Luong, Jixun Yao, Lei Xie, Kong Aik Lee, Eng Siong Chng}, journal={Proc. 4th Symposium on Security and Privacy in Speech Communication, 72-79}, year={2024}, doi={10.21437/SPSC.2024-13}, archivePrefix={arXiv}, eprint={2410.02371}, primaryClass={eess.AS cs.AI} }
kuzmin2024ntu-npu
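"Mean Reversion F0" in the abstract above suggests pulling frame-level F0 values toward a mean to reduce speaker-identifying pitch variation while preserving intelligibility. The sketch below is one plausible reading of that name, with the interpolation factor alpha and the handling of unvoiced frames as assumptions; the exact formulation used for B5 is not given in the abstract.

```python
import numpy as np

def mean_reversion_f0(f0: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Pull voiced F0 values toward the utterance mean by a factor alpha.

    f0: frame-level F0 in Hz, with 0 marking unvoiced frames.
    alpha=0 leaves F0 untouched; alpha=1 flattens it to the utterance mean.
    """
    f0 = f0.astype(float).copy()
    voiced = f0 > 0
    mu = f0[voiced].mean()
    f0[voiced] = mu + (1.0 - alpha) * (f0[voiced] - mu)
    return f0

# Toy usage: unvoiced frames (0 Hz) are left untouched.
f0 = np.array([0.0, 120.0, 130.0, 0.0, 180.0, 200.0])
print(mean_reversion_f0(f0, alpha=0.5))
```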
arxiv-665000
2410.02372
Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition
<|reference_start|>Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition: Predicting the tensor properties of crystalline materials is a fundamental task in materials science. Unlike single-value property prediction, which is inherently invariant, tensor property prediction requires maintaining $O(3)$ group tensor equivariance. This equivariance constraint often introduces tremendous computational costs, necessitating specialized designs for effective and efficient predictions. To address this limitation, we propose a general $O(3)$-equivariant framework for fast crystal tensor property prediction, called GoeCTP. Our framework is efficient as it does not need to impose equivariance constraints on the network architecture. Instead, GoeCTP captures the tensor equivariance with a simple external rotation and reflection (R&R) module based on polar decomposition. The crafted external R&R module can rotate and reflect the crystal into an invariant standardized position in space without introducing extra computational cost. We show that GoeCTP is general as it is a plug-and-play module that can be smoothly integrated with any existing single-value property prediction framework for predicting tensor properties. Experimental results indicate that GoeCTP achieves higher prediction performance and runs 13$\times$ faster than existing state-of-the-art methods on elastic benchmark datasets, underscoring its effectiveness and efficiency.<|reference_end|>
arxiv
@article{hua2024fast, title={Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition}, author={Haowei Hua, Wanyu Lin, Jingwen Yang}, journal={arXiv preprint arXiv:2410.02372}, year={2024}, archivePrefix={arXiv}, eprint={2410.02372}, primaryClass={cs.CE} }
hua2024fast
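The external rotation-and-reflection module based on polar decomposition, as described in the GoeCTP abstract above, can be sketched directly with scipy: factor the lattice matrix into a symmetric part (the standardised lattice) and an O(3) matrix, predict the tensor in the standardised frame with any invariant model, and map the prediction back. Variable names, the row-vector lattice convention and the rank-2 tensor example are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import polar

def standardise_orientation(lattice: np.ndarray):
    """Factor the lattice matrix (rows = lattice vectors) as lattice = P @ U,
    with U in O(3) and P symmetric positive semi-definite.

    U is the rotation/reflection that maps the crystal into a canonical
    orientation (the lattice becomes the symmetric P); fractional coordinates
    are unaffected because they are expressed in the lattice basis.
    """
    U, P = polar(lattice, side='left')    # lattice = P @ U
    return U, P

# Hypothetical usage mirroring the 'external R&R module' idea in the abstract:
# 1) standardise, 2) predict a rank-2 tensor T_std with any invariant model,
# 3) rotate the prediction back to the original frame.
lattice = np.array([[3.0, 0.2, 0.0],
                    [0.1, 2.8, 0.3],
                    [0.0, 0.4, 4.1]])
U, P = standardise_orientation(lattice)
T_std = np.diag([1.0, 2.0, 3.0])          # placeholder prediction in canonical frame
T_orig = U.T @ T_std @ U                  # equivariant map back to the input frame
print(np.allclose(lattice, P @ U))        # True: the decomposition is exact
```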