Columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-662301
2409.17758
Adapting Deep Variational Bayes Filter for Enhanced Confidence Estimation in Finite Element Method Integrated Networks (FEMIN)
<|reference_start|>Adapting Deep Variational Bayes Filter for Enhanced Confidence Estimation in Finite Element Method Integrated Networks (FEMIN): The Finite Element Method (FEM) is a widely used technique for simulating crash scenarios with high accuracy and reliability. To reduce the significant computational costs associated with FEM, the Finite Element Method Integrated Networks (FEMIN) framework integrates neural networks (NNs) with FEM solvers. However, this integration can introduce errors and deviations from full-FEM simulations, highlighting the need for an additional metric to assess prediction confidence, especially when no ground truth data is available. In this study, we adapt the Deep Variational Bayes Filter (DVBF) to the FEMIN framework, incorporating a probabilistic approach to provide qualitative insights into prediction confidence during FEMIN simulations. The adaptation involves using the learned transition model for a predictive decoding step, generating a preliminary force prediction. This predictive force is used alongside the displacement and the velocity data from the FEM solver as input for the encoder model. The decoder reconstructs the likelihood distribution based on the posterior. The mean force of this distribution is applied to the FEM solver, while the predicted standard deviation can be used for uncertainty estimation. Our findings demonstrate that the DVBF outperforms deterministic NN architectures in terms of accuracy. Furthermore, the standard deviation derived from the decoder serves as a valuable qualitative metric for assessing the confidence in FEMIN simulations. This approach enhances the robustness of FEMIN by providing a measure of reliability alongside the simulation results.<|reference_end|>
arxiv
@article{thel2024adapting, title={Adapting Deep Variational Bayes Filter for Enhanced Confidence Estimation in Finite Element Method Integrated Networks (FEMIN)}, author={Simon Thel and Lars Greve and Maximilian Karl and Patrick van der Smagt}, journal={arXiv preprint arXiv:2409.17758}, year={2024}, archivePrefix={arXiv}, eprint={2409.17758}, primaryClass={cs.CE} }
thel2024adapting
arxiv-662302
2409.17759
LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction
<|reference_start|>LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction: By capturing the varying intensities and directions of light rays at the same scene, a light field (LF) can encode the 3D scene cues into a 4D LF image, which has a wide range of applications (e.g. post-capture refocusing and depth sensing). LF image super-resolution (SR) aims to improve the image resolution limited by the performance of the LF camera sensor. Although existing methods have achieved promising results, their practical application is limited because they are not lightweight enough. In this paper we propose a lightweight model named LGFN, which integrates the local and global features of different views and the features of different channels for LF image SR. Specifically, since neighboring regions at the same pixel position in different sub-aperture images exhibit similar structural relationships, we design a lightweight CNN-based feature extraction module (namely DGCE) to better extract local features through feature modulation. Meanwhile, as positions beyond the boundaries in the LF image present a large disparity, we propose an efficient spatial attention module (namely ESAM), which uses decomposable large-kernel convolution to obtain an enlarged receptive field, as well as an efficient channel attention module (namely ECAM). Compared with existing LF image SR models with large parameter counts, our model has 0.45M parameters and 19.33G FLOPs, achieving competitive results. Extensive experiments with ablation studies demonstrate the effectiveness of our proposed method, which ranked second in Track 2 (Fidelity & Efficiency) and seventh in Track 1 (Fidelity) of the NTIRE2024 Light Field Super Resolution Challenge.<|reference_end|>
arxiv
@article{yu2024lgfn, title={LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction}, author={Zhongxin Yu and Liang Chen and Zhiyun Zeng and Kunping Yang and Shaofei Luo and Shaorui Chen and Cheng Zhong}, journal={CVPR 2024 workshop}, year={2024}, archivePrefix={arXiv}, eprint={2409.17759}, primaryClass={eess.IV cs.CV} }
yu2024lgfn
arxiv-662303
2409.17763
Confidence intervals uncovered: Are we ready for real-world medical imaging AI?
<|reference_start|>Confidence intervals uncovered: Are we ready for real-world medical imaging AI?: Medical imaging is spearheading the AI transformation of healthcare. Performance reporting is key to determine which methods should be translated into clinical practice. Frequently, broad conclusions are simply derived from mean performance values. In this paper, we argue that this common practice is often a misleading simplification as it ignores performance variability. Our contribution is threefold. (1) Analyzing all MICCAI segmentation papers (n = 221) published in 2023, we first observe that more than 50% of papers do not assess performance variability at all. Moreover, only one (0.5%) paper reported confidence intervals (CIs) for model performance. (2) To address the reporting bottleneck, we show that the unreported standard deviation (SD) in segmentation papers can be approximated by a second-order polynomial function of the mean Dice similarity coefficient (DSC). Based on external validation data from 56 previous MICCAI challenges, we demonstrate that this approximation can accurately reconstruct the CI of a method using information provided in publications. (3) Finally, we reconstructed 95% CIs around the mean DSC of MICCAI 2023 segmentation papers. The median CI width was 0.03 which is three times larger than the median performance gap between the first and second ranked method. For more than 60% of papers, the mean performance of the second-ranked method was within the CI of the first-ranked method. We conclude that current publications typically do not provide sufficient evidence to support which models could potentially be translated into clinical practice.<|reference_end|>
arxiv
@article{christodoulou2024confidence, title={Confidence intervals uncovered: Are we ready for real-world medical imaging AI?}, author={Evangelia Christodoulou and Annika Reinke and Rola Houhou and Piotr Kalinowski and Selen Erkan and Carole H. Sudre and Ninon Burgos and Sofi{\`e}ne Boutaj and Sophie Loizillon and Ma{\"e}lys Solal and Nicola Rieke and Veronika Cheplygina and Michela Antonelli and Leon D. Mayer and Minu D. Tizabi and M. Jorge Cardoso and Amber Simpson and Paul F. J{\"a}ger and Annette Kopp-Schneider and Ga{\"e}l Varoquaux and Olivier Colliot and Lena Maier-Hein}, journal={arXiv preprint arXiv:2409.17763}, year={2024}, archivePrefix={arXiv}, eprint={2409.17763}, primaryClass={cs.CV cs.AI cs.LG} }
christodoulou2024confidence
arxiv-662304
2409.17766
MorphoHaptics: An Open-Source Tool for Visuohaptic Exploration of Morphological Image Datasets
<|reference_start|>MorphoHaptics: An Open-Source Tool for Visuohaptic Exploration of Morphological Image Datasets: Although digital methods have significantly advanced morphology, practitioners are still challenged to understand and process tomographic specimen data. As automated processing of fossil data remains insufficient, morphologists still engage in intensive manual work to prepare digital fossils for research objectives. We present an open-source tool that enables morphologists to explore tomographic data similarly to the physical workflows that traditional fossil preparators experience in the field. We assessed the usability of our prototype for virtual fossil preparation and its accompanying tasks in the digital preparation workflow. Our findings indicate that integrating haptics into the virtual preparation workflow enhances the understanding of the morphology and material properties of working specimens. Our design's visuohaptic sculpting of fossil volumes was deemed straightforward and an improvement over current tomographic data processing methods.<|reference_end|>
arxiv
@article{rodrigues2024morphohaptics, title={MorphoHaptics: An Open-Source Tool for Visuohaptic Exploration of Morphological Image Datasets}, author={Lucas Siqueira Rodrigues and Thomas Kosch and John Nyakatura and Stefan Zachow and Johann Habakuk Israel}, journal={arXiv preprint arXiv:2409.17766}, year={2024}, archivePrefix={arXiv}, eprint={2409.17766}, primaryClass={cs.HC} }
rodrigues2024morphohaptics
arxiv-662305
2409.17767
Federated Learning under Attack: Improving Gradient Inversion for Batch of Images
<|reference_start|>Federated Learning under Attack: Improving Gradient Inversion for Batch of Images: Federated Learning (FL) has emerged as a machine learning approach able to preserve the privacy of users' data. In FL, clients train machine learning models on a local dataset, and a central server aggregates the learned parameters coming from the clients, training a global machine learning model without sharing users' data. However, the state of the art includes several approaches that promote attacks on FL systems. For instance, gradient inversion or leakage attacks can recover, with high precision, the local dataset used during the training phase of FL. This paper presents an approach, called Deep Leakage from Gradients with Feedback Blending (DLG-FB), which is able to improve the gradient inversion attack by exploiting the spatial correlation that typically exists in batches of images. The performed evaluation shows an improvement of 19.18% and 48.82% in terms of attack success rate and the number of iterations per attacked image, respectively.<|reference_end|>
arxiv
@article{leite2024federated, title={Federated Learning under Attack: Improving Gradient Inversion for Batch of Images}, author={Luiz Leite and Yuri Santo and Bruno L. Dalmazo and Andr{\'e} Riker}, journal={arXiv preprint arXiv:2409.17767}, year={2024}, archivePrefix={arXiv}, eprint={2409.17767}, primaryClass={cs.CR cs.AI} }
leite2024federated
arxiv-662306
2409.17769
Value Identification in Multistakeholder Recommender Systems for Humanities and Historical Research: The Case of the Digital Archive Monasterium.net
<|reference_start|>Value Identification in Multistakeholder Recommender Systems for Humanities and Historical Research: The Case of the Digital Archive Monasterium.net: Recommender systems remain underutilized in humanities and historical research, despite their potential to enhance the discovery of cultural records. This paper offers an initial value identification of the multiple stakeholders that might be impacted by recommendations in Monasterium.net, a digital archive for historical legal documents. Specifically, we discuss the diverse values and objectives of its stakeholders, such as editors, aggregators, platform owners, researchers, publishers, and funding agencies. These in-depth insights into the potentially conflicting values of stakeholder groups allow designing and adapting recommender systems to enhance their usefulness for humanities and historical research. Additionally, our findings will support deeper engagement with additional stakeholders to refine value models and evaluation metrics for recommender systems in the given domains. Our conclusions are embedded in and applicable to other digital archives and a broader cultural heritage context.<|reference_end|>
arxiv
@article{atzenhofer-baumgartner2024value, title={Value Identification in Multistakeholder Recommender Systems for Humanities and Historical Research: The Case of the Digital Archive Monasterium.net}, author={Florian Atzenhofer-Baumgartner and Bernhard C. Geiger and Georg Vogeler and Dominik Kowald}, journal={arXiv preprint arXiv:2409.17769}, year={2024}, archivePrefix={arXiv}, eprint={2409.17769}, primaryClass={cs.IR cs.DL} }
atzenhofer-baumgartner2024value
arxiv-662307
2409.17774
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
<|reference_start|>Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations: Faithfulness is arguably the most critical metric to assess the reliability of explainable AI. In NLP, current methods for faithfulness evaluation are fraught with discrepancies and biases, often failing to capture the true reasoning of models. We introduce Adversarial Sensitivity as a novel approach to faithfulness evaluation, focusing on the explainer's response when the model is under adversarial attack. Our method accounts for the faithfulness of explainers by capturing sensitivity to adversarial input changes. This work addresses significant limitations in existing evaluation techniques, and furthermore, quantifies faithfulness from a crucial yet underexplored paradigm.<|reference_end|>
arxiv
@article{manna2024faithfulness, title={Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations}, author={Supriya Manna and Niladri Sett}, journal={arXiv preprint arXiv:2409.17774}, year={2024}, archivePrefix={arXiv}, eprint={2409.17774}, primaryClass={cs.CL cs.AI} }
manna2024faithfulness
arxiv-662308
2409.17775
UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology
<|reference_start|>UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology: Background: The integration of multi-stain histopathology images through deep learning poses a significant challenge in digital histopathology. Current multi-modal approaches struggle with data heterogeneity and missing data. This study aims to overcome these limitations by developing a novel transformer model for multi-stain integration that can handle missing data during training as well as inference. Methods: We propose UNICORN (UNiversal modality Integration Network for CORonary classificatioN), a multi-modal transformer capable of processing multi-stain histopathology for atherosclerosis severity class prediction. The architecture comprises a two-stage, end-to-end trainable model with specialized modules utilizing transformer self-attention blocks. The initial stage employs domain-specific expert modules to extract features from each modality. In the subsequent stage, an aggregation expert module integrates these features by learning the interactions between the different data modalities. Results: Evaluation was performed using a multi-class dataset of atherosclerotic lesions from the Munich Cardiovascular Studies Biobank (MISSION), using over 4,000 paired multi-stain whole slide images (WSIs) from 170 deceased individuals on 7 prespecified segments of the coronary tree, each stained according to four histopathological protocols. UNICORN achieved a classification accuracy of 0.67, outperforming other state-of-the-art models. The model effectively identifies relevant tissue phenotypes across stainings and implicitly models disease progression. Conclusion: Our proposed multi-modal transformer model addresses key challenges in medical data analysis, including data heterogeneity and missing modalities. Explainability and the model's effectiveness in predicting atherosclerosis progression underscore its potential for broader applications in medical research.<|reference_end|>
arxiv
@article{koch2024unicorn, title={UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology}, author={Valentin Koch and Sabine Bauer and Valerio Luppberger and Michael Joner and Heribert Schunkert and Julia A. Schnabel and Moritz von Scheidt and Carsten Marr}, journal={arXiv preprint arXiv:2409.17775}, year={2024}, archivePrefix={arXiv}, eprint={2409.17775}, primaryClass={cs.CV} }
koch2024unicorn
arxiv-662309
2409.17777
Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification
<|reference_start|>Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification: Deep multimodal learning has shown remarkable success by leveraging contrastive learning to capture explicit one-to-one relations across modalities. However, real-world data often exhibits shared relations beyond simple pairwise associations. We propose M3CoL, a Multimodal Mixup Contrastive Learning approach to capture nuanced shared relations inherent in multimodal data. Our key contribution is a Mixup-based contrastive loss that learns robust representations by aligning mixed samples from one modality with their corresponding samples from other modalities thereby capturing shared relations between them. For multimodal classification tasks, we introduce a framework that integrates a fusion module with unimodal prediction modules for auxiliary supervision during training, complemented by our proposed Mixup-based contrastive loss. Through extensive experiments on diverse datasets (N24News, ROSMAP, BRCA, and Food-101), we demonstrate that M3CoL effectively captures shared multimodal relations and generalizes across domains. It outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, while achieving comparable performance on Food-101. Our work highlights the significance of learning shared relations for robust multimodal learning, opening up promising avenues for future research.<|reference_end|>
arxiv
@article{kumar2024harnessing, title={Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification}, author={Raja Kumar and Raghav Singhal and Pranamya Kulkarni and Deval Mehta and Kshitij Jadhav}, journal={arXiv preprint arXiv:2409.17777}, year={2024}, archivePrefix={arXiv}, eprint={2409.17777}, primaryClass={cs.CV cs.AI} }
kumar2024harnessing
arxiv-662310
2409.17778
Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs
<|reference_start|>Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs: Diffusion-based image super-resolution (SR) models have attracted substantial interest due to their powerful image restoration capabilities. However, prevailing diffusion models often struggle to strike an optimal balance between efficiency and performance. Typically, they either neglect to exploit the potential of existing extensive pretrained models, limiting their generative capacity, or they necessitate dozens of forward passes starting from random noise, compromising inference efficiency. In this paper, we present DoSSR, a Domain Shift diffusion-based SR model that capitalizes on the generative powers of pretrained diffusion models while significantly enhancing efficiency by initiating the diffusion process with low-resolution (LR) images. At the core of our approach is a domain shift equation that integrates seamlessly with existing diffusion models. This integration not only improves the use of the diffusion prior but also boosts inference efficiency. Moreover, we advance our method by transitioning the discrete shift process to a continuous formulation, termed DoS-SDEs. This advancement leads to fast and customized solvers that further enhance sampling efficiency. Empirical results demonstrate that our proposed method achieves state-of-the-art performance on synthetic and real-world datasets, while notably requiring only 5 sampling steps. Compared to previous diffusion-prior-based methods, our approach achieves a remarkable speedup of 5-7 times, demonstrating its superior efficiency. Code: https://github.com/QinpengCui/DoSSR.<|reference_end|>
arxiv
@article{cui2024taming, title={Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs}, author={Qinpeng Cui and Yixuan Liu and Xinyi Zhang and Qiqi Bao and Zhongdao Wang and Qingmin Liao and Li Wang and Tian Lu and Emad Barsoum}, journal={arXiv preprint arXiv:2409.17778}, year={2024}, archivePrefix={arXiv}, eprint={2409.17778}, primaryClass={cs.CV} }
cui2024taming
arxiv-662311
2409.17779
A posteriori error analysis of the virtual element method for second-order quasilinear elliptic PDEs
<|reference_start|>A posteriori error analysis of the virtual element method for second-order quasilinear elliptic PDEs: In this paper we develop a $C^0$-conforming virtual element method (VEM) for a class of second-order quasilinear elliptic PDEs in two dimensions. We present a posteriori error analysis for this problem and derive a residual based error estimator. The estimator is fully computable and we prove upper and lower bounds of the error estimator which are explicit in the local mesh size. We use the estimator to drive an adaptive mesh refinement algorithm. A handful of numerical test problems are carried out to study the performance of the proposed error indicator.<|reference_end|>
arxiv
@article{congreve2024a, title={A posteriori error analysis of the virtual element method for second-order quasilinear elliptic PDEs}, author={Scott Congreve and Alice Hodson}, journal={arXiv preprint arXiv:2409.17779}, year={2024}, archivePrefix={arXiv}, eprint={2409.17779}, primaryClass={math.NA cs.NA} }
congreve2024a
arxiv-662312
2409.17785
A Syzygial Method for Equidimensional Decomposition
<|reference_start|>A Syzygial Method for Equidimensional Decomposition: Based on a theorem by Vasconcelos, we give an algorithm for equidimensional decomposition of algebraic sets using syzygy computations via Gr\"obner bases. This algorithm avoids the use of elimination, homological algebra and processing the input equations one-by-one present in previous algorithms. We experimentally demonstrate the practical interest of our algorithm compared to the state of the art.<|reference_end|>
arxiv
@article{mohr2024a, title={A Syzygial Method for Equidimensional Decomposition}, author={Rafael Mohr}, journal={arXiv preprint arXiv:2409.17785}, year={2024}, archivePrefix={arXiv}, eprint={2409.17785}, primaryClass={cs.SC} }
mohr2024a
arxiv-662313
2409.17786
Predicting the Stay Length of Patients in Hospitals using Convolutional Gated Recurrent Deep Learning Model
<|reference_start|>Predicting the Stay Length of Patients in Hospitals using Convolutional Gated Recurrent Deep Learning Model: Predicting hospital length of stay (LoS) stands as a critical factor in shaping public health strategies. This data serves as a cornerstone for governments to discern trends, patterns, and avenues for enhancing healthcare delivery. In this study, we introduce a robust hybrid deep learning model, a combination of Multi-layer Convolutional (CNNs) deep learning, Gated Recurrent Units (GRU), and Dense neural networks, that outperforms 11 conventional and state-of-the-art Machine Learning (ML) and Deep Learning (DL) methodologies in accurately forecasting inpatient hospital stay duration. Our investigation delves into the implementation of this hybrid model, scrutinising variables like geographic indicators tied to caregiving institutions, demographic markers encompassing patient ethnicity, race, and age, as well as medical attributes such as the CCS diagnosis code, APR DRG code, illness severity metrics, and hospital stay duration. Statistical evaluations reveal the pinnacle LoS accuracy achieved by our proposed model (CNN-GRU-DNN), which averages at 89% across a 10-fold cross-validation test, surpassing LSTM, BiLSTM, GRU, and Convolutional Neural Networks (CNNs) by 19%, 18.2%, 18.6%, and 7%, respectively. Accurate LoS predictions not only empower hospitals to optimise resource allocation and curb expenses associated with prolonged stays but also pave the way for novel strategies in hospital stay management. This avenue holds promise for catalysing advancements in healthcare research and innovation, inspiring a new era of precision-driven healthcare practices.<|reference_end|>
arxiv
@article{neshat2024predicting, title={Predicting the Stay Length of Patients in Hospitals using Convolutional Gated Recurrent Deep Learning Model}, author={Mehdi Neshat and Michael Phipps and Chris A. Browne and Nicole T. Vargas and Seyedali Mirjalili}, journal={arXiv preprint arXiv:2409.17786}, year={2024}, archivePrefix={arXiv}, eprint={2409.17786}, primaryClass={cs.NE cs.LG} }
neshat2024predicting
arxiv-662314
2409.17788
Ophthalmic Biomarker Detection with Parallel Prediction of Transformer and Convolutional Architecture
<|reference_start|>Ophthalmic Biomarker Detection with Parallel Prediction of Transformer and Convolutional Architecture: Ophthalmic diseases represent a significant global health issue, necessitating the use of advanced, precise diagnostic tools. Optical Coherence Tomography (OCT) imagery, which offers high-resolution cross-sectional images of the retina, has become a pivotal imaging modality in ophthalmology. Traditionally, physicians have manually detected various diseases and biomarkers from such diagnostic imagery. In recent times, deep learning techniques have been extensively used for medical diagnostic tasks, enabling fast and precise diagnosis. This paper presents a novel approach for ophthalmic biomarker detection using an ensemble of Convolutional Neural Network (CNN) and Vision Transformer. While CNNs are good at feature extraction within the local context of an image, transformers are known for their ability to extract features from its global context. Using an ensemble of both techniques allows us to harness the best of both worlds. Our method has been implemented on the OLIVES dataset to detect 6 major biomarkers from OCT images and shows a significant improvement in the macro averaged F1 score on the dataset.<|reference_end|>
arxiv
@article{islam2024ophthalmic, title={Ophthalmic Biomarker Detection with Parallel Prediction of Transformer and Convolutional Architecture}, author={Md. Touhidul Islam and Md. Abtahi Majeed Chowdhury and Mahmudul Hasan and Asif Quadir and Lutfa Aktar}, journal={arXiv preprint arXiv:2409.17788}, year={2024}, archivePrefix={arXiv}, eprint={2409.17788}, primaryClass={cs.AI} }
islam2024ophthalmic
arxiv-662315
2409.17790
CASPFormer: Trajectory Prediction from BEV Images with Deformable Attention
<|reference_start|>CASPFormer: Trajectory Prediction from BEV Images with Deformable Attention: Motion prediction is an important aspect of Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). Current state-of-the-art motion prediction methods rely on High Definition (HD) maps for capturing the surrounding context of the ego vehicle. Such systems lack scalability in real-world deployment as HD maps are expensive to produce and update in real time. To overcome this issue, we propose Context Aware Scene Prediction Transformer (CASPFormer), which can perform multi-modal motion prediction from rasterized Bird's-Eye-View (BEV) images. Our system can be integrated with any upstream perception module that is capable of generating BEV images. Moreover, CASPFormer directly decodes vectorized trajectories without any postprocessing. Trajectories are decoded recurrently using deformable attention, as it is computationally efficient and provides the network with the ability to focus its attention on the important spatial locations of the BEV images. In addition, we also address the issue of mode collapse for generating multiple scene-consistent trajectories by incorporating learnable mode queries. We evaluate our model on the nuScenes dataset and show that it reaches state-of-the-art performance across multiple metrics.<|reference_end|>
arxiv
@article{yadav2024caspformer, title={CASPFormer: Trajectory Prediction from BEV Images with Deformable Attention}, author={Harsh Yadav and Maximilian Schaefer and Kun Zhao and Tobias Meisen}, journal={arXiv preprint arXiv:2409.17790}, year={2024}, archivePrefix={arXiv}, eprint={2409.17790}, primaryClass={cs.LG cs.CV} }
yadav2024caspformer
arxiv-662316
2409.17791
Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness
<|reference_start|>Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness: Recently, there has been significant interest in replacing the reward model in Reinforcement Learning with Human Feedback (RLHF) methods for Large Language Models (LLMs), such as Direct Preference Optimization (DPO) and its variants. These approaches commonly use a binary cross-entropy mechanism on pairwise samples, i.e., minimizing and maximizing the loss based on preferred or dis-preferred responses, respectively. However, while this training strategy omits the reward model, it also overlooks the varying preference degrees within different responses. We hypothesize that this is a key factor hindering LLMs from sufficiently understanding human preferences. To address this problem, we propose a novel Self-supervised Preference Optimization (SPO) framework, which constructs a self-supervised preference degree loss combined with the alignment loss, thereby helping LLMs improve their ability to understand the degree of preference. Extensive experiments are conducted on two widely used datasets of different tasks. The results demonstrate that SPO can be seamlessly integrated with existing preference optimization methods and significantly boost their performance to achieve state-of-the-art performance. We also conduct detailed analyses to offer comprehensive insights into SPO, which verifies its effectiveness. The code is available at https://github.com/lijian16/SPO.<|reference_end|>
arxiv
@article{li2024self-supervised, title={Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness}, author={Jian Li and Haojing Huang and Yujia Zhang and Pengfei Xu and Xi Chen and Rui Song and Lida Shi and Jingwen Wang and Hao Xu}, journal={arXiv preprint arXiv:2409.17791}, year={2024}, archivePrefix={arXiv}, eprint={2409.17791}, primaryClass={cs.CL cs.AI} }
li2024self-supervised
arxiv-662317
2409.17792
Reblurring-Guided Single Image Defocus Deblurring: A Learning Framework with Misaligned Training Pairs
<|reference_start|>Reblurring-Guided Single Image Defocus Deblurring: A Learning Framework with Misaligned Training Pairs: For single image defocus deblurring, acquiring well-aligned training pairs (or training triplets), i.e., a defocus blurry image, an all-in-focus sharp image (and a defocus blur map), is an intricate task for the development of deblurring models. Existing image defocus deblurring methods typically rely on training data collected by specialized imaging equipment, presupposing that these pairs or triplets are perfectly aligned. However, in practical scenarios involving the collection of real-world data, direct acquisition of training triplets is infeasible, and training pairs inevitably encounter spatial misalignment issues. In this work, we introduce a reblurring-guided learning framework for single image defocus deblurring, enabling the learning of a deblurring network even with misaligned training pairs. Specifically, we first propose a baseline defocus deblurring network that utilizes spatially varying defocus blur map as degradation prior to enhance the deblurring performance. Then, to effectively learn the baseline defocus deblurring network with misaligned training pairs, our reblurring module ensures spatial consistency between the deblurred image, the reblurred image and the input blurry image by reconstructing spatially variant isotropic blur kernels. Moreover, the spatially variant blur derived from the reblurring module can serve as pseudo supervision for defocus blur map during training, interestingly transforming training pairs into training triplets. Additionally, we have collected a new dataset specifically for single image defocus deblurring (SDD) with typical misalignments, which not only substantiates our proposed method but also serves as a benchmark for future research.<|reference_end|>
arxiv
@article{shu2024reblurring-guided, title={Reblurring-Guided Single Image Defocus Deblurring: A Learning Framework with Misaligned Training Pairs}, author={Xinya Shu and Yu Li and Dongwei Ren and Xiaohe Wu and Jin Li and Wangmeng Zuo}, journal={arXiv preprint arXiv:2409.17792}, year={2024}, archivePrefix={arXiv}, eprint={2409.17792}, primaryClass={cs.CV} }
shu2024reblurring-guided
arxiv-662318
2409.17795
Physics-driven complex relaxation for multi-body systems of SPH method
<|reference_start|>Physics-driven complex relaxation for multi-body systems of SPH method: In the smoothed particle dynamics (SPH) method, the characteristics of a target particle are interpolated based on the information from its neighboring particles. Consequently, a uniform initial distribution of particles significantly enhances the accuracy of SPH calculations. This aspect is particularly critical in Eulerian SPH, where particles are stationary throughout the simulation. To address this, we introduce a physics-driven complex relaxation method for multi-body systems. Through a series of two-dimensional and three-dimensional case studies, we demonstrate that this method is capable of achieving a globally uniform particle distribution, especially at the interfaces between contacting bodies, and ensuring improved zero-order consistency. Moreover, the effectiveness and reliability of the complex relaxation method in enhancing the accuracy of physical simulations are further validated.<|reference_end|>
arxiv
@article{zhao2024physics-driven, title={Physics-driven complex relaxation for multi-body systems of SPH method}, author={Chenxi Zhao and Yongchuan Yu and Oskar J. Haidn and Xiangyu Hu}, journal={arXiv preprint arXiv:2409.17795}, year={2024}, archivePrefix={arXiv}, eprint={2409.17795}, primaryClass={cs.CE} }
zhao2024physics-driven
arxiv-662319
2409.17798
Swarm-LIO2: Decentralized, Efficient LiDAR-inertial Odometry for UAV Swarms
<|reference_start|>Swarm-LIO2: Decentralized, Efficient LiDAR-inertial Odometry for UAV Swarms: Aerial swarm systems possess immense potential in various aspects, such as cooperative exploration, target tracking, search and rescue. Efficient, accurate self and mutual state estimation are the critical preconditions for completing these swarm tasks, which remain challenging research topics. This paper proposes Swarm-LIO2: a fully decentralized, plug-and-play, computationally efficient, and bandwidth-efficient LiDAR-inertial odometry for aerial swarm systems. Swarm-LIO2 uses a decentralized, plug-and-play network as the communication infrastructure. Only bandwidth-efficient and low-dimensional information is exchanged, including identity, ego-state, mutual observation measurements, and global extrinsic transformations. To support the plug-and-play of new teammate participants, Swarm-LIO2 detects potential teammate UAVs and initializes the temporal offset and global extrinsic transformation all automatically. To enhance the initialization efficiency, novel reflectivity-based UAV detection, trajectory matching, and factor graph optimization methods are proposed. For state estimation, Swarm-LIO2 fuses LiDAR, IMU, and mutual observation measurements within an efficient ESIKF framework, with careful compensation of temporal delay and modeling of measurements to enhance the accuracy and consistency.<|reference_end|>
arxiv
@article{zhu2024swarm-lio2:, title={Swarm-LIO2: Decentralized, Efficient LiDAR-inertial Odometry for UAV Swarms}, author={Fangcheng Zhu and Yunfan Ren and Longji Yin and Fanze Kong and Qingbo Liu and Ruize Xue and Wenyi Liu and Yixi Cai and Guozheng Lu and Haotian Li and Fu Zhang}, journal={arXiv preprint arXiv:2409.17798}, year={2024}, archivePrefix={arXiv}, eprint={2409.17798}, primaryClass={cs.RO} }
zhu2024swarm-lio2:
arxiv-662320
2409.17800
Bias Assessment and Data Drift Detection in Medical Image Analysis: A Survey
<|reference_start|>Bias Assessment and Data Drift Detection in Medical Image Analysis: A Survey: Machine Learning (ML) models have gained popularity in medical imaging analysis given their expert level performance in many medical domains. To enhance the trustworthiness, acceptance, and regulatory compliance of medical imaging models and to facilitate their integration into clinical settings, we review and categorise methods for ensuring ML reliability, both during development and throughout the model's lifespan. Specifically, we provide an overview of methods assessing models' inner-workings regarding bias encoding and detection of data drift for disease classification models. Additionally, to evaluate the severity in case of a significant drift, we provide an overview of the methods developed for classifier accuracy estimation in case of no access to ground truth labels. This should enable practitioners to implement methods ensuring reliable ML deployment and consistent prediction performance over time.<|reference_end|>
arxiv
@article{prenner2024bias, title={Bias Assessment and Data Drift Detection in Medical Image Analysis: A Survey}, author={Andrea Prenner and Bernhard Kainz}, journal={arXiv preprint arXiv:2409.17800}, year={2024}, archivePrefix={arXiv}, eprint={2409.17800}, primaryClass={cs.HC eess.IV} }
prenner2024bias
arxiv-662321
2409.17804
Enriched Functional Tree-Based Classifiers: A Novel Approach Leveraging Derivatives and Geometric Features
<|reference_start|>Enriched Functional Tree-Based Classifiers: A Novel Approach Leveraging Derivatives and Geometric Features: The positioning of this research falls within the scalar-on-function classification literature, a field of significant interest across various domains, particularly in statistics, mathematics, and computer science. This study introduces an advanced methodology for supervised classification by integrating Functional Data Analysis (FDA) with tree-based ensemble techniques for classifying high-dimensional time series. The proposed framework, Enriched Functional Tree-Based Classifiers (EFTCs), leverages derivative and geometric features, benefiting from the diversity inherent in ensemble methods to further enhance predictive performance and reduce variance. While our approach has been tested on the enrichment of Functional Classification Trees (FCTs), Functional K-NN (FKNN), Functional Random Forest (FRF), Functional XGBoost (FXGB), and Functional LightGBM (FLGBM), it could be extended to other tree-based and non-tree-based classifiers, with appropriate considerations emerging from this investigation. Through extensive experimental evaluations on seven real-world datasets and six simulated scenarios, this proposal demonstrates fascinating improvements over traditional approaches, providing new insights into the application of FDA in complex, high-dimensional learning problems.<|reference_end|>
arxiv
@article{maturo2024enriched, title={Enriched Functional Tree-Based Classifiers: A Novel Approach Leveraging Derivatives and Geometric Features}, author={Fabrizio Maturo and Annamaria Porreca}, journal={arXiv preprint arXiv:2409.17804}, year={2024}, archivePrefix={arXiv}, eprint={2409.17804}, primaryClass={stat.ML cs.LG stat.ME} }
maturo2024enriched
arxiv-662322
2409.17805
Cascade Prompt Learning for Vision-Language Model Adaptation
<|reference_start|>Cascade Prompt Learning for Vision-Language Model Adaptation: Prompt learning has surfaced as an effective approach to enhance the performance of Vision-Language Models (VLMs) like CLIP when applied to downstream tasks. However, current learnable prompt tokens are primarily used for the single phase of adapting to tasks (i.e., adapting prompt), easily leading to overfitting risks. In this work, we propose a novel Cascade Prompt Learning CasPL framework to enable prompt learning to serve both generic and specific expertise (i.e., boosting and adapting prompt) simultaneously. Specifically, CasPL is a new learning paradigm comprising two distinct phases of learnable prompts: the first boosting prompt is crafted to extract domain-general knowledge from a senior larger CLIP teacher model by aligning their predicted logits using extensive unlabeled domain images. The second adapting prompt is then cascaded with the frozen first set to fine-tune the downstream tasks, following the approaches employed in prior research. In this manner, CasPL can effectively capture both domain-general and task-specific representations into explicitly different gradual groups of prompts, thus potentially alleviating overfitting issues in the target domain. It's worth noting that CasPL serves as a plug-and-play module that can seamlessly integrate into any existing prompt learning approach. CasPL achieves a significantly better balance between performance and inference speed, which is especially beneficial for deploying smaller VLM models in resource-constrained environments. Compared to the previous state-of-the-art method PromptSRC, CasPL shows an average improvement of 1.85% for base classes, 3.44% for novel classes, and 2.72% for the harmonic mean over 11 image classification datasets. Code is publicly available at: https://github.com/megvii-research/CasPL.<|reference_end|>
arxiv
@article{wu2024cascade, title={Cascade Prompt Learning for Vision-Language Model Adaptation}, author={Ge Wu and Xin Zhang and Zheng Li and Zhaowei Chen and Jiajun Liang and Jian Yang and Xiang Li}, journal={arXiv preprint arXiv:2409.17805}, year={2024}, archivePrefix={arXiv}, eprint={2409.17805}, primaryClass={cs.CV} }
wu2024cascade
arxiv-662323
2409.17806
Continual learning with task specialist
<|reference_start|>Continual learning with task specialist: Continual learning (CL) adapts deep learning models to scenarios with timely updated datasets. However, existing CL models suffer from the catastrophic forgetting issue, where new knowledge replaces past learning. In this paper, we propose Continual Learning with Task Specialists (CLTS) to address the issues of catastrophic forgetting and limited labelled data in real-world datasets by performing class incremental learning of the incoming stream of data. The model consists of Task Specialists (TS) and a Task Predictor (TP) with a pre-trained Stable Diffusion (SD) module. Here, we introduce a new specialist to handle each new task sequence, and each TS has three blocks: i) a variational autoencoder (VAE) to learn the task distribution in a low dimensional latent space, ii) a K-Means block to perform data clustering, and iii) a Bootstrapping Language-Image Pre-training (BLIP) model to generate a small batch of captions from the input data. These captions are fed as input to the pre-trained stable diffusion model (SD) for the generation of task samples. The proposed model does not store any task samples for replay; instead, it uses generated samples from SD to train the TP module. A comparison study with four SOTA models conducted on three real-world datasets shows that the proposed model outperforms all the selected baselines.<|reference_end|>
arxiv
@article{solomon2024continual, title={Continual learning with task specialist}, author={Indu Solomon and Aye Phyu Phyu Aung and Uttam Kumar and Senthilnath Jayavelu}, journal={arXiv preprint arXiv:2409.17806}, year={2024}, archivePrefix={arXiv}, eprint={2409.17806}, primaryClass={cs.LG} }
solomon2024continual
arxiv-662324
2409.17808
Generative Modeling of Molecular Dynamics Trajectories
<|reference_start|>Generative Modeling of Molecular Dynamics Trajectories: Molecular dynamics (MD) is a powerful technique for studying microscopic phenomena, but its computational cost has driven significant interest in the development of deep learning-based surrogate models. We introduce generative modeling of molecular trajectories as a paradigm for learning flexible multi-task surrogate models of MD from data. By conditioning on appropriately chosen frames of the trajectory, we show such generative models can be adapted to diverse tasks such as forward simulation, transition path sampling, and trajectory upsampling. By alternatively conditioning on part of the molecular system and inpainting the rest, we also demonstrate the first steps towards dynamics-conditioned molecular design. We validate the full set of these capabilities on tetrapeptide simulations and show that our model can produce reasonable ensembles of protein monomers. Altogether, our work illustrates how generative modeling can unlock value from MD data towards diverse downstream tasks that are not straightforward to address with existing methods or even MD itself. Code is available at https://github.com/bjing2016/mdgen.<|reference_end|>
arxiv
@article{jing2024generative, title={Generative Modeling of Molecular Dynamics Trajectories}, author={Bowen Jing and Hannes St\"ark and Tommi Jaakkola and Bonnie Berger}, journal={arXiv preprint arXiv:2409.17808}, year={2024}, archivePrefix={arXiv}, eprint={2409.17808}, primaryClass={q-bio.BM cs.LG} }
jing2024generative
arxiv-662325
2409.17814
E-scooter effects on public transport demand: a case study in Santiago, Chile
<|reference_start|>E-scooter effects on public transport demand: a case study in Santiago, Chile: As cities adopt sustainable mobility solutions, electric scooters (e-scooters) offer both challenges and opportunities for public transportation systems. This study, the first in Latin America, examines the effects of e-scooters on public transport demand in Santiago, Chile, focusing on two scenarios: "generation" of trips (trips starting in study zones) and "attraction" of trips (trips ending in study zones). A negative binomial regression model was applied to data from public transport smart cards and e-scooter GPS. The methodology included urban area clustering and a differences-in-differences approach. The findings reveal significant regional differences: in the Central Region, public transport trips decreased by 21.38% in the generation scenario, while bus trips increased by 76.39%. In the Intermediate Region, metro trips increased by 70.05%, and in the Peripheral Region, bus trips increased by 84.64%. These results suggest that e-scooters reduce public transport usage in highly accessible areas but increase it in less accessible regions.<|reference_end|>
arxiv
@article{opitz2024e-scooter, title={E-scooter effects on public transport demand: a case study in Santiago, Chile}, author={Daniela Opitz and Eduardo Graells-Garrido and Jacqueline Arriagada and Matilde Rivas and Natalia Meza}, journal={arXiv preprint arXiv:2409.17814}, year={2024}, archivePrefix={arXiv}, eprint={2409.17814}, primaryClass={cs.CY} }
opitz2024e-scooter
arxiv-662326
2409.17815
DREAMS: A python framework to train deep learning models with model card reporting for medical and health applications
<|reference_start|>DREAMS: A python framework to train deep learning models with model card reporting for medical and health applications: Electroencephalography (EEG) data provides a non-invasive method for researchers and clinicians to observe brain activity in real time. The integration of deep learning techniques with EEG data has significantly improved the ability to identify meaningful patterns, leading to valuable insights for both clinical and research purposes. However, most frameworks designed so far for EEG data analysis are either too focused on pre-processing or on deep learning methods per se, making their use problematic for both the clinician and developer communities. Moreover, critical issues such as ethical considerations, biases, uncertainties, and the limitations inherent in AI models for EEG data analysis are frequently overlooked, posing challenges to the responsible implementation of these technologies. In this paper, we introduce a comprehensive deep learning framework tailored for EEG data processing, model training and report generation. While constructed in a way that allows AI developers to adapt and extend it, the framework enables reporting, through model cards, the outcome and specific information of use for both developers and clinicians. In this way, we discuss how this framework can, in the future, provide clinical researchers and developers with the tools needed to create transparent and accountable AI models for EEG data analysis and diagnosis.<|reference_end|>
arxiv
@article{khadka2024dreams:, title={DREAMS: A python framework to train deep learning models with model card reporting for medical and health applications}, author={Rabindra Khadka and Pedro G Lind and Anis Yazidi and Asma Belhadi}, journal={arXiv preprint arXiv:2409.17815}, year={2024}, archivePrefix={arXiv}, eprint={2409.17815}, primaryClass={cs.AI} }
khadka2024dreams:
arxiv-662327
2409.17819
Inference-Time Language Model Alignment via Integrated Value Guidance
<|reference_start|>Inference-Time Language Model Alignment via Integrated Value Guidance: Large language models are typically fine-tuned to align with human preferences, but tuning large models is computationally intensive and complex. In this work, we introduce $\textit{Integrated Value Guidance}$ (IVG), a method that uses implicit and explicit value functions to guide language model decoding at token and chunk-level respectively, efficiently aligning large language models purely at inference time. This approach circumvents the complexities of direct fine-tuning and outperforms traditional methods. Empirically, we demonstrate the versatility of IVG across various tasks. In controlled sentiment generation and summarization tasks, our method significantly improves the alignment of large models using inference-time guidance from $\texttt{gpt2}$-based value functions. Moreover, in a more challenging instruction-following benchmark AlpacaEval 2.0, we show that both specifically tuned and off-the-shelf value functions greatly improve the length-controlled win rates of large models against $\texttt{gpt-4-turbo}$ (e.g., $19.51\% \rightarrow 26.51\%$ for $\texttt{Mistral-7B-Instruct-v0.2}$ and $25.58\% \rightarrow 33.75\%$ for $\texttt{Mixtral-8x7B-Instruct-v0.1}$ with Tulu guidance).<|reference_end|>
arxiv
@article{liu2024inference-time, title={Inference-Time Language Model Alignment via Integrated Value Guidance}, author={Zhixuan Liu and Zhanhui Zhou and Yuanfu Wang and Chao Yang and Yu Qiao}, journal={arXiv preprint arXiv:2409.17819}, year={2024}, archivePrefix={arXiv}, eprint={2409.17819}, primaryClass={cs.CL cs.AI} }
liu2024inference-time
arxiv-662328
2409.17823
Kendall's $\tau$ Coefficient for Logits Distillation
<|reference_start|>Kendall's $\tau$ Coefficient for Logits Distillation: Knowledge distillation typically employs the Kullback-Leibler (KL) divergence to constrain the student model's output to match the soft labels provided by the teacher model exactly. However, sometimes the optimization direction of the KL divergence loss is not always aligned with the task loss, where a smaller KL divergence could lead to erroneous predictions that diverge from the soft labels. This limitation often results in suboptimal optimization for the student. Moreover, even under temperature scaling, the KL divergence loss function tends to overly focus on the larger-valued channels in the logits, disregarding the rich inter-class information provided by the multitude of smaller-valued channels. This hard constraint proves too challenging for lightweight students, hindering further knowledge distillation. To address this issue, we propose a plug-and-play ranking loss based on Kendall's $\tau$ coefficient, called Rank-Kendall Knowledge Distillation (RKKD). RKKD balances the attention to smaller-valued channels by constraining the order of channel values in student logits, providing more inter-class relational information. The rank constraint on the top-valued channels helps avoid suboptimal traps during optimization. We also discuss different differentiable forms of Kendall's $\tau$ coefficient and demonstrate that the proposed ranking loss function shares a consistent optimization objective with the KL divergence. Extensive experiments on the CIFAR-100 and ImageNet datasets show that our RKKD can enhance the performance of various knowledge distillation baselines and offer broad improvements across multiple teacher-student architecture combinations.<|reference_end|>
arxiv
@article{guan2024kendall's, title={Kendall's $\tau$ Coefficient for Logits Distillation}, author={Yuchen Guan and Runxi Cheng and Kang Liu and Chun Yuan}, journal={arXiv preprint arXiv:2409.17823}, year={2024}, archivePrefix={arXiv}, eprint={2409.17823}, primaryClass={cs.CV} }
guan2024kendall's
arxiv-662329
2409.17825
Physics-aligned Schr\"odinger bridge
<|reference_start|>Physics-aligned Schr\"odinger bridge: The reconstruction of physical fields from sparse measurements is pivotal in both scientific research and engineering applications. Traditional methods are increasingly supplemented by deep learning models due to their efficacy in extracting features from data. However, except for the low accuracy on complex physical systems, these models often fail to comply with essential physical constraints, such as governing equations and boundary conditions. To overcome this limitation, we introduce a novel data-driven field reconstruction framework, termed the Physics-aligned Schr\"{o}dinger Bridge (PalSB). This framework leverages a diffusion Schr\"{o}dinger bridge mechanism that is specifically tailored to align with physical constraints. The PalSB approach incorporates a dual-stage training process designed to address both local reconstruction mapping and global physical principles. Additionally, a boundary-aware sampling technique is implemented to ensure adherence to physical boundary conditions. We demonstrate the effectiveness of PalSB through its application to three complex nonlinear systems: cylinder flow from Particle Image Velocimetry experiments, two-dimensional turbulence, and a reaction-diffusion system. The results reveal that PalSB not only achieves higher accuracy but also exhibits enhanced compliance with physical constraints compared to existing methods. This highlights PalSB's capability to generate high-quality representations of intricate physical interactions, showcasing its potential for advancing field reconstruction techniques.<|reference_end|>
arxiv
@article{li2024physics-aligned, title={Physics-aligned Schr\"{o}dinger bridge}, author={Zeyu Li and Hongkun Dou and Shen Fang and Wang Han and Yue Deng and Lijun Yang}, journal={arXiv preprint arXiv:2409.17825}, year={2024}, archivePrefix={arXiv}, eprint={2409.17825}, primaryClass={physics.flu-dyn cs.LG} }
li2024physics-aligned
arxiv-662330
2409.17827
BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text
<|reference_start|>BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text: Many of the recent breakthroughs in language modeling have resulted from scaling effectively the same model architecture to larger datasets. In this vein, recent work has highlighted performance gains from increasing training dataset size and quality, suggesting a need for novel sources of large-scale datasets. In this work, we introduce BeanCounter, a public dataset consisting of more than 159B tokens extracted from businesses' disclosures. We show that this data is indeed novel: less than 0.1% of BeanCounter appears in Common Crawl-based datasets and it is an order of magnitude larger than datasets relying on similar sources. Given the data's provenance, we hypothesize that BeanCounter is comparatively more factual and less toxic than web-based datasets. Exploring this hypothesis, we find that many demographic identities occur with similar prevalence in BeanCounter but with significantly less toxic context relative to other datasets. To demonstrate the utility of BeanCounter, we evaluate and compare two LLMs continually pre-trained on BeanCounter with their base models. We find an 18-33% reduction in toxic generation and improved performance within the finance domain for the continually pretrained models. Collectively, our work suggests that BeanCounter is a novel source of low-toxicity and high-quality domain-specific data with sufficient scale to train multi-billion parameter LLMs.<|reference_end|>
arxiv
@article{wang2024beancounter:, title={BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text}, author={Siyan Wang and Bradford Levy}, journal={arXiv preprint arXiv:2409.17827}, year={2024}, archivePrefix={arXiv}, eprint={2409.17827}, primaryClass={cs.CL} }
wang2024beancounter:
arxiv-662331
2409.17830
Unsupervised Learning Based Multi-Scale Exposure Fusion
<|reference_start|>Unsupervised Learning Based Multi-Scale Exposure Fusion: Unsupervised learning based multi-scale exposure fusion (ULMEF) is efficient for fusing differently exposed low dynamic range (LDR) images into a higher quality LDR image for a high dynamic range (HDR) scene. Unlike supervised learning, loss functions play a crucial role in the ULMEF. In this paper, novel loss functions are proposed for the ULMEF and they are defined by using all the images to be fused and other differently exposed images from the same HDR scene. The proposed loss functions can guide the proposed ULMEF to learn more reliable information from the HDR scene than existing loss functions which are defined by only using the set of images to be fused. As such, the quality of the fused image is significantly improved. The proposed ULMEF also adopts a multi-scale strategy that includes a multi-scale attention module to effectively preserve the scene depth and local contrast in the fused image. Meanwhile, the proposed ULMEF can be adopted to achieve exposure interpolation and exposure extrapolation. Extensive experiments show that the proposed ULMEF algorithm outperforms state-of-the-art exposure fusion algorithms.<|reference_end|>
arxiv
@article{zheng2024unsupervised, title={Unsupervised Learning Based Multi-Scale Exposure Fusion}, author={Chaobing Zheng and Shiqian Wu and Zhengguo Li}, journal={arXiv preprint arXiv:2409.17830}, year={2024}, archivePrefix={arXiv}, eprint={2409.17830}, primaryClass={cs.CV} }
zheng2024unsupervised
arxiv-662332
2409.17831
Asymptotically Optimal Hardness for $k$-Set Packing and $k$-Matroid Intersection
<|reference_start|>Asymptotically Optimal Hardness for $k$-Set Packing and $k$-Matroid Intersection: For any $\varepsilon > 0$, we prove that $k$-Dimensional Matching is hard to approximate within a factor of $k/(12 + \varepsilon)$ for large $k$ unless $\textsf{NP} \subseteq \textsf{BPP}$. Listed in Karp's 21 $\textsf{NP}$-complete problems, $k$-Dimensional Matching is a benchmark computational complexity problem which we find as a special case of many constrained optimization problems over independence systems including: $k$-Set Packing, $k$-Matroid Intersection, and Matroid $k$-Parity. For all the aforementioned problems, the best known lower bound was a $\Omega(k /\log(k))$-hardness by Hazan, Safra, and Schwartz. In contrast, state-of-the-art algorithms achieved an approximation of $O(k)$. Our result narrows down this gap to a constant and thus provides a rationale for the observed algorithmic difficulties. The crux of our result hinges on a novel approximation preserving gadget from $R$-degree bounded $k$-CSPs over alphabet size $R$ to $kR$-Dimensional Matching. Along the way, we prove that $R$-degree bounded $k$-CSPs over alphabet size $R$ are hard to approximate within a factor $\Omega_k(R)$ using known randomised sparsification methods for CSPs.<|reference_end|>
arxiv
@article{lee2024asymptotically, title={Asymptotically Optimal Hardness for $k$-Set Packing and $k$-Matroid Intersection}, author={Euiwoong Lee and Ola Svensson and Theophile Thiery}, journal={arXiv preprint arXiv:2409.17831}, year={2024}, archivePrefix={arXiv}, eprint={2409.17831}, primaryClass={cs.CC cs.DS math.CO} }
lee2024asymptotically
arxiv-662333
2409.17833
Ordinary Differential Equations for Enhanced 12-Lead ECG Generation
<|reference_start|>Ordinary Differential Equations for Enhanced 12-Lead ECG Generation: In the realm of artificial intelligence, the generation of realistic training data for supervised learning tasks presents a significant challenge. This is particularly true in the synthesis of electrocardiograms (ECGs), where the objective is to develop a synthetic 12-lead ECG model. The primary complexity of this task stems from accurately modeling the intricate biological and physiological interactions among different ECG leads. Although mathematical process simulators have shed light on these dynamics, effectively incorporating this understanding into generative models is not straightforward. In this work, we introduce an innovative method that employs ordinary differential equations (ODEs) to enhance the fidelity of generating 12-lead ECG data. This approach integrates a system of ODEs that represent cardiac dynamics directly into the generative model's optimization process, allowing for the production of biologically plausible ECG training data that authentically reflects real-world variability and inter-lead dependencies. We conducted an empirical analysis of thousands of ECGs and found that incorporating cardiac simulation insights into the data generation process significantly improves the accuracy of heart abnormality classifiers trained on this synthetic 12-lead ECG data.<|reference_end|>
arxiv
@article{yehuda2024ordinary, title={Ordinary Differential Equations for Enhanced 12-Lead ECG Generation}, author={Yakir Yehuda and Kira Radinsky}, journal={arXiv preprint arXiv:2409.17833}, year={2024}, archivePrefix={arXiv}, eprint={2409.17833}, primaryClass={cs.LG} }
yehuda2024ordinary
arxiv-662334
2409.17834
PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification
<|reference_start|>PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification: Due to their substantial sizes, large language models (LLMs) are typically deployed within a single-backbone multi-tenant framework. In this setup, a single instance of an LLM backbone must cater to multiple users or tasks through the application of various parameter-efficient fine-tuning (PEFT) models. Despite the availability of numerous effective PEFT techniques such as LoRA, there remains a need for a PEFT approach that achieves both high efficiency during inference and competitive performance on downstream tasks. In this research, we introduce a new and straightforward PEFT methodology named \underline{P}rompt D\underline{E}pen\underline{D}ent \underline{R}epresentation M\underline{O}dification (PEDRO). The proposed method involves integrating a lightweight vector generator into each Transformer layer, which generates vectors contingent upon the input prompts. These vectors then modify the hidden representations created by the LLM through a dot product operation, thereby influencing the semantic output and generated content of the model. Extensive experimentation across a variety of tasks indicates that: (a) PEDRO surpasses recent PEFT benchmarks when using a similar number of tunable parameters. (b) Under the single-backbone multi-tenant deployment model, PEDRO exhibits superior efficiency compared to LoRA, indicating significant industrial potential.<|reference_end|>
arxiv
@article{xie2024pedro:, title={PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification}, author={Tianfang Xie and Tianjing Li and Wei Zhu and Wei Han and Yi Zhao}, journal={arXiv preprint arXiv:2409.17834}, year={2024}, archivePrefix={arXiv}, eprint={2409.17834}, primaryClass={cs.CL} }
xie2024pedro:
arxiv-662335
2409.17836
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
<|reference_start|>Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models: Despite the widespread use of statistical prior models in various fields, such models for neural network gradients have long been overlooked. The inherent challenge stems from their high-dimensional structures and complex interdependencies, which complicate effective modeling. In this work, we demonstrate the potential of large language models (LLMs) to act as gradient priors in a zero-shot setting. We examine the property by considering lossless gradient compression -- a critical application in distributed learning -- that depends heavily on precise probability modeling. To achieve this, we introduce LM-GC, a novel method that integrates LLMs with arithmetic coding. Our technique converts plain gradients into text-like formats, enhancing token efficiency by up to 38 times compared to their plain representations. We ensure that this data conversion maintains a close alignment with the structure of plain gradients and the symbols commonly recognized by LLMs. Our experiments indicate that LM-GC surpasses existing state-of-the-art lossless compression methods, improving compression rates by 10\% up to 17.2\% across various datasets and architectures. Additionally, our approach shows promising compatibility with lossy compression techniques such as quantization and sparsification. These findings highlight the significant potential of LLMs as a model for effectively handling gradients. We will release the source code upon publication.<|reference_end|>
arxiv
@article{wang2024language, title={Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models}, author={Hui-Po Wang and Mario Fritz}, journal={arXiv preprint arXiv:2409.17836}, year={2024}, archivePrefix={arXiv}, eprint={2409.17836}, primaryClass={cs.LG cs.AI} }
wang2024language
arxiv-662336
2409.17840
Detecting and Measuring Confounding Using Causal Mechanism Shifts
<|reference_start|>Detecting and Measuring Confounding Using Causal Mechanism Shifts: Detecting and measuring confounding effects from data is a key challenge in causal inference. Existing methods frequently assume causal sufficiency, disregarding the presence of unobserved confounding variables. Causal sufficiency is both unrealistic and empirically untestable. Additionally, existing methods make strong parametric assumptions about the underlying causal generative process to guarantee the identifiability of confounding variables. Relaxing the causal sufficiency and parametric assumptions and leveraging recent advancements in causal discovery and confounding analysis with non-i.i.d. data, we propose a comprehensive approach for detecting and measuring confounding. We consider various definitions of confounding and introduce tailored methodologies to achieve three objectives: (i) detecting and measuring confounding among a set of variables, (ii) separating observed and unobserved confounding effects, and (iii) understanding the relative strengths of confounding bias between different sets of variables. We present useful properties of a confounding measure and present measures that satisfy those properties. Empirical results support the theoretical analysis.<|reference_end|>
arxiv
@article{reddy2024detecting, title={Detecting and Measuring Confounding Using Causal Mechanism Shifts}, author={Abbavaram Gowtham Reddy and Vineeth N Balasubramanian}, journal={arXiv preprint arXiv:2409.17840}, year={2024}, archivePrefix={arXiv}, eprint={2409.17840}, primaryClass={cs.AI} }
reddy2024detecting
arxiv-662337
2409.17841
Machine Learning-based vs Deep Learning-based Anomaly Detection in Multivariate Time Series for Spacecraft Attitude Sensors
<|reference_start|>Machine Learning-based vs Deep Learning-based Anomaly Detection in Multivariate Time Series for Spacecraft Attitude Sensors: In the framework of Failure Detection, Isolation and Recovery (FDIR) on spacecraft, new AI-based approaches are emerging in the state of the art to overcome the limitations commonly imposed by traditional threshold checking. The present research aims at characterizing two different approaches to the problem of stuck values detection in multivariate time series coming from spacecraft attitude sensors. The analysis reveals the performance differences in the two approaches, while commenting on their interpretability and generalization to different scenarios.<|reference_end|>
arxiv
@article{gallon2024machine, title={Machine Learning-based vs Deep Learning-based Anomaly Detection in Multivariate Time Series for Spacecraft Attitude Sensors}, author={R. Gallon and F. Schiemenz and A. Krstova and A. Menicucci and E. Gill}, journal={arXiv preprint arXiv:2409.17841}, year={2024}, doi={10.5281/zenodo.13885631}, archivePrefix={arXiv}, eprint={2409.17841}, primaryClass={cs.LG cs.AI} }
gallon2024machine
arxiv-662338
2409.17843
Auction-based Adaptive Resource Allocation Optimization in Dense IoT Networks
<|reference_start|>Auction-based Adaptive Resource Allocation Optimization in Dense IoT Networks: The rapid pervasivity of the Internet of Things (IoT) calls for an autonomous and efficient resource management framework to seamlessly register and discover facilities and services. Cloud-Fog-Automation (CFA) standards provide a robust foundation for multi-tiered wireless architectures, enhancing cyber-physical system performance with advanced abstractions. This work addresses resource allocation optimization in IoT networks, particularly in power management and time-frequency spreading techniques, ensuring deterministic connectivity, networked computing, and intelligent control systems. Auction game theory is pivotal in managing resource allocation in densely populated, high-demand IoT networks. By employing sealed-bid auctions based on Bayesian game theory, the uncertainties in individual hypotheses and channel states among IoT entities are effectively mitigated. A novel dispersion metric optimization further enhances the coordination of layer-specific IoT uplinks, enabling ultra-reliable, low-latency (URLLC) communication. Numerical results demonstrate the superior performance of this resilient architecture, achieving fair resource allocation with minimal power consumption and robust performance in unsecured scenarios.<|reference_end|>
arxiv
@article{wickramasinghe2024auction-based, title={Auction-based Adaptive Resource Allocation Optimization in Dense IoT Networks}, author={Nirmal D. Wickramasinghe and John Dooley and Dirk Pesch and Indrakshi Dey}, journal={arXiv preprint arXiv:2409.17843}, year={2024}, archivePrefix={arXiv}, eprint={2409.17843}, primaryClass={cs.GT} }
wickramasinghe2024auction-based
arxiv-662339
2409.17844
Software Security Analysis in 2030 and Beyond: A Research Roadmap
<|reference_start|>Software Security Analysis in 2030 and Beyond: A Research Roadmap: As our lives, our businesses, and indeed our world economy become increasingly reliant on the secure operation of many interconnected software systems, the software engineering research community is faced with unprecedented research challenges, but also with exciting new opportunities. In this roadmap paper, we outline our vision of Software Security Analysis for the software systems of the future. Given the recent advances in generative AI, we need new methods to evaluate and maximize the security of code co-written by machines. As our software systems become increasingly heterogeneous, we need practical approaches that work even if some functions are automatically generated, e.g., by deep neural networks. As software systems depend evermore on the software supply chain, we need tools that scale to an entire ecosystem. What kind of vulnerabilities exist in future systems and how do we detect them? When all the shallow bugs are found, how do we discover vulnerabilities hidden deeply in the system? Assuming we cannot find all security flaws, how can we nevertheless protect our system? To answer these questions, we start our research roadmap with a survey of recent advances in software security, then discuss open challenges and opportunities, and conclude with a long-term perspective for the field.<|reference_end|>
arxiv
@article{böhme2024software, title={Software Security Analysis in 2030 and Beyond: A Research Roadmap}, author={Marcel B\"ohme and Eric Bodden and Tevfik Bultan and Cristian Cadar and Yang Liu and Giuseppe Scanniello}, journal={arXiv preprint arXiv:2409.17844}, year={2024}, archivePrefix={arXiv}, eprint={2409.17844}, primaryClass={cs.SE cs.CR} }
böhme2024software
arxiv-662340
2409.17851
A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts
<|reference_start|>A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts: Monocular depth estimation is a critical task for autonomous driving and many other computer vision applications. While significant progress has been made in this field, the effects of viewpoint shifts on depth estimation models remain largely underexplored. This paper introduces a novel dataset and evaluation methodology to quantify the impact of different camera positions and orientations on monocular depth estimation performance. We propose a ground truth strategy based on homography estimation and object detection, eliminating the need for expensive lidar sensors. We collect a diverse dataset of road scenes from multiple viewpoints and use it to assess the robustness of a modern depth estimation model to geometric shifts. After assessing the validity of our strategy on a public dataset, we provide valuable insights into the limitations of current models and highlight the importance of considering viewpoint variations in real-world applications.<|reference_end|>
arxiv
@article{pjetri2024a, title={A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts}, author={Aurel Pjetri and Stefano Caprasecca and Leonardo Taccari and Matteo Simoncini and Henrique Pi\~neiro Monteagudo and Walter Wallace and Douglas Coimbra de Andrade and Francesco Sambo and Andrew David Bagdanov}, journal={arXiv preprint arXiv:2409.17851}, year={2024}, archivePrefix={arXiv}, eprint={2409.17851}, primaryClass={cs.CV} }
pjetri2024a
arxiv-662341
2409.17852
AMARO: All Heavy-Atom Transferable Neural Network Potentials of Protein Thermodynamics
<|reference_start|>AMARO: All Heavy-Atom Transferable Neural Network Potentials of Protein Thermodynamics: All-atom molecular simulations offer detailed insights into macromolecular phenomena, but their substantial computational cost hinders the exploration of complex biological processes. We introduce Advanced Machine-learning Atomic Representation Omni-force-field (AMARO), a new neural network potential (NNP) that combines an O(3)-equivariant message-passing neural network architecture, TensorNet, with a coarse-graining map that excludes hydrogen atoms. AMARO demonstrates the feasibility of training coarser NNP, without prior energy terms, to run stable protein dynamics with scalability and generalization capabilities.<|reference_end|>
arxiv
@article{mirarchi2024amaro:, title={AMARO: All Heavy-Atom Transferable Neural Network Potentials of Protein Thermodynamics}, author={Antonio Mirarchi and Raul P. Pelaez and Guillem Simeon and Gianni De Fabritiis}, journal={arXiv preprint arXiv:2409.17852}, year={2024}, doi={10.1021/acs.jctc.4c01239}, archivePrefix={arXiv}, eprint={2409.17852}, primaryClass={q-bio.BM cs.LG physics.bio-ph physics.comp-ph} }
mirarchi2024amaro:
arxiv-662342
2409.17854
Visualization of Age Distributions as Elements of Medical Data-Stories
<|reference_start|>Visualization of Age Distributions as Elements of Medical Data-Stories: In various fields, including medicine, age distributions are crucial. Despite widespread media coverage of health topics, there remains a need to enhance health communication. Narrative medical visualization is promising for improving information comprehension and retention. This study explores the most effective ways to present age distributions of diseases through narrative visualizations. We conducted a thorough analysis of existing visualizations, held workshops with a broad audience, and reviewed relevant literature. From this, we identified design choices focusing on comprehension, aesthetics, engagement, and memorability. We specifically tested three pictogram variants: pictograms as bars, stacked pictograms, and annotations. After evaluating 18 visualizations with 72 participants and three expert reviews, we determined that annotations were most effective for comprehension and aesthetics. However, traditional bar charts were preferred for engagement, and other variants were more memorable. The study provides a set of design recommendations based on these insights.<|reference_end|>
arxiv
@article{dowlatabadi2024visualization, title={Visualization of Age Distributions as Elements of Medical Data-Stories}, author={Sophia Dowlatabadi and Bernhard Preim and Monique Meuschke}, journal={arXiv preprint arXiv:2409.17854}, year={2024}, archivePrefix={arXiv}, eprint={2409.17854}, primaryClass={cs.HC cs.CV cs.GR} }
dowlatabadi2024visualization
arxiv-662343
2409.17858
How Feature Learning Can Improve Neural Scaling Laws
<|reference_start|>How Feature Learning Can Improve Neural Scaling Laws: We develop a solvable model of neural scaling laws beyond the kernel limit. Theoretical analysis of this model shows how performance scales with model size, training time, and the total amount of available data. We identify three scaling regimes corresponding to varying task difficulties: hard, easy, and super easy tasks. For easy and super-easy target functions, which lie in the reproducing kernel Hilbert space (RKHS) defined by the initial infinite-width Neural Tangent Kernel (NTK), the scaling exponents remain unchanged between feature learning and kernel regime models. For hard tasks, defined as those outside the RKHS of the initial NTK, we demonstrate both analytically and empirically that feature learning can improve scaling with training time and compute, nearly doubling the exponent for hard tasks. This leads to a different compute optimal strategy to scale parameters and training time in the feature learning regime. We support our finding that feature learning improves the scaling law for hard tasks but not for easy and super-easy tasks with experiments of nonlinear MLPs fitting functions with power-law Fourier spectra on the circle and CNNs learning vision tasks.<|reference_end|>
arxiv
@article{bordelon2024how, title={How Feature Learning Can Improve Neural Scaling Laws}, author={Blake Bordelon and Alexander Atanasov and Cengiz Pehlevan}, journal={arXiv preprint arXiv:2409.17858}, year={2024}, archivePrefix={arXiv}, eprint={2409.17858}, primaryClass={stat.ML cond-mat.dis-nn cs.LG} }
bordelon2024how
arxiv-662344
2409.17863
A 5T-2MTJ STT-assisted Spin Orbit Torque based Ternary Content Addressable Memory for Hardware Accelerators
<|reference_start|>A 5T-2MTJ STT-assisted Spin Orbit Torque based Ternary Content Addressable Memory for Hardware Accelerators: In this work, we present a novel non-volatile spin transfer torque (STT) assisted spin-orbit torque (SOT) based ternary content addressable memory (TCAM) with 5 transistors and 2 magnetic tunnel junctions (MTJs). We perform a comprehensive study of the proposed design from the device-level to application-level. At the device-level, various write characteristics such as write error rate, time, and current have been obtained using micromagnetic simulations. The array-level search and write performance have been evaluated based on SPICE circuit simulations with layout extracted parasitics for bitcells while also accounting for the impact of interconnect parasitics at the 7nm technology node. A search error rate of 3.9x10^-11 is projected for exact search while accounting for various sources of variation in the design. In addition, the resolution of the search operation is quantified under various scenarios to understand the achievable quality of the approximate search operations. Application-level performance and accuracy of the proposed design have been evaluated and benchmarked against other state-of-the-art CAM designs in the context of a CAM-based recommendation system.<|reference_end|>
arxiv
@article{narla2024a, title={A 5T-2MTJ STT-assisted Spin Orbit Torque based Ternary Content Addressable Memory for Hardware Accelerators}, author={Siri Narla and Piyush Kumar and Azad Naeemi}, journal={arXiv preprint arXiv:2409.17863}, year={2024}, archivePrefix={arXiv}, eprint={2409.17863}, primaryClass={cs.ET cs.AR} }
narla2024a
arxiv-662345
2409.17864
A Multimodal Single-Branch Embedding Network for Recommendation in Cold-Start and Missing Modality Scenarios
<|reference_start|>A Multimodal Single-Branch Embedding Network for Recommendation in Cold-Start and Missing Modality Scenarios: Most recommender systems adopt collaborative filtering (CF) and provide recommendations based on past collective interactions. Therefore, the performance of CF algorithms degrades when few or no interactions are available, a scenario referred to as cold-start. To address this issue, previous work relies on models leveraging both collaborative data and side information on the users or items. Similar to multimodal learning, these models aim at combining collaborative and content representations in a shared embedding space. In this work we propose a novel technique for multimodal recommendation, relying on a multimodal Single-Branch embedding network for Recommendation (SiBraR). Leveraging weight-sharing, SiBraR encodes interaction data as well as multimodal side information using the same single-branch embedding network on different modalities. This makes SiBraR effective in scenarios of missing modality, including cold start. Our extensive experiments on large-scale recommendation datasets from three different recommendation domains (music, movie, and e-commerce) and providing multimodal content information (audio, text, image, labels, and interactions) show that SiBraR significantly outperforms CF as well as state-of-the-art content-based RSs in cold-start scenarios, and is competitive in warm scenarios. We show that SiBraR's recommendations are accurate in missing modality scenarios, and that the model is able to map different modalities to the same region of the shared embedding space, hence reducing the modality gap.<|reference_end|>
arxiv
@article{ganhör2024a, title={A Multimodal Single-Branch Embedding Network for Recommendation in Cold-Start and Missing Modality Scenarios}, author={Christian Ganh\"or and Marta Moscati and Anna Hausberger and Shah Nawaz and Markus Schedl}, journal={arXiv preprint arXiv:2409.17864}, year={2024}, doi={10.1145/3640457.3688009}, archivePrefix={arXiv}, eprint={2409.17864}, primaryClass={cs.IR cs.AI cs.LG cs.MM} }
ganhör2024a
arxiv-662346
2409.17865
Implementing a Nordic-Baltic Federated Health Data Network: a case report
<|reference_start|>Implementing a Nordic-Baltic Federated Health Data Network: a case report: Background: Centralized collection and processing of healthcare data across national borders pose significant challenges, including privacy concerns, data heterogeneity and legal barriers. To address some of these challenges, we formed an interdisciplinary consortium to develop a federated health data network, comprised of six institutions across five countries, to facilitate Nordic-Baltic cooperation on secondary use of health data. The objective of this report is to offer early insights into our experiences developing this network. Methods: We used a mixed-method approach, combining both experimental design and implementation science to evaluate the factors affecting the implementation of our network. Results: Technically, our experiments indicate that the network functions without significant performance degradation compared to centralized simulation. Conclusion: While use of interdisciplinary approaches holds a potential to solve challenges associated with establishing such collaborative networks, our findings turn the spotlight on the uncertain regulatory landscape playing catch up and the significant operational costs.<|reference_end|>
arxiv
@article{chomutare2024implementing, title={Implementing a Nordic-Baltic Federated Health Data Network: a case report}, author={Taridzo Chomutare and Aleksandar Babic and Laura-Maria Peltonen and Silja Elunurm and Peter Lundberg and Arne J\"onsson and Emma Eneling and Ciprian-Virgil Gerstenberger and Troels Siggaard and Raivo Kolde and Oskar Jerdhaf and Martin Hansson and Alexandra Makhlysheva and Miroslav Muzny and Erik Ylip\"a\"a and S{\o}ren Brunak and Hercules Dalianis}, journal={arXiv preprint arXiv:2409.17865}, year={2024}, archivePrefix={arXiv}, eprint={2409.17865}, primaryClass={cs.CY cs.AI cs.CL cs.LG} }
chomutare2024implementing
arxiv-662347
2409.17870
Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores
<|reference_start|>Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores: Large language models (LLMs) have been widely applied but face challenges in efficient inference. While quantization methods reduce computational demands, ultra-low bit quantization with arbitrary precision is hindered by limited GPU Tensor Core support and inefficient memory management, leading to suboptimal acceleration. To address these challenges, we propose a comprehensive acceleration scheme for arbitrary precision LLMs. At its core, we introduce a novel bipolar-INT data format that facilitates parallel computing and supports symmetric quantization, effectively reducing data redundancy. Building on this, we implement an arbitrary precision matrix multiplication scheme that decomposes and recovers matrices at the bit level, enabling flexible precision while maximizing GPU Tensor Core utilization. Furthermore, we develop an efficient matrix preprocessing method that optimizes data layout for subsequent computations. Finally, we design a data recovery-oriented memory management system that strategically utilizes fast shared memory, significantly enhancing kernel execution speed and minimizing memory access latency. Experimental results demonstrate our approach's effectiveness, with up to 13\times speedup in matrix multiplication compared to NVIDIA's CUTLASS. When integrated into LLMs, we achieve up to 6.7\times inference acceleration. These improvements significantly enhance LLM inference efficiency, enabling broader and more responsive applications of LLMs.<|reference_end|>
arxiv
@article{ma2024efficient, title={Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores}, author={Shaobo Ma and Chao Fang and Haikuo Shao and Zhongfeng Wang}, journal={arXiv preprint arXiv:2409.17870}, year={2024}, archivePrefix={arXiv}, eprint={2409.17870}, primaryClass={cs.LG cs.AI cs.AR} }
ma2024efficient
arxiv-662348
2409.17872
A method for identifying causality in the response of nonlinear dynamical systems
<|reference_start|>A method for identifying causality in the response of nonlinear dynamical systems: Predicting the response of nonlinear dynamical systems subject to random, broadband excitation is important across a range of scientific disciplines, such as structural dynamics and neuroscience. Building data-driven models requires experimental measurements of the system input and output, but it can be difficult to determine whether inaccuracies in the model stem from modelling errors or noise. This paper presents a novel method to identify the causal component of the input-output data from measurements of a system in the presence of output noise, as a function of frequency, without needing a high fidelity model. An output prediction, calculated using an available model, is optimally combined with noisy measurements of the output to predict the input to the system. The parameters of the algorithm balance the two output signals and are utilised to calculate a nonlinear coherence metric as a measure of causality. This method is applicable to a broad class of nonlinear dynamical systems. There are currently no solutions to this problem in the absence of a complete benchmark model.<|reference_end|>
arxiv
@article{massingham2024a, title={A method for identifying causality in the response of nonlinear dynamical systems}, author={Joseph Massingham and Ole Nielsen and Tore Butlin}, journal={arXiv preprint arXiv:2409.17872}, year={2024}, archivePrefix={arXiv}, eprint={2409.17872}, primaryClass={cs.LG} }
massingham2024a
arxiv-662349
2409.17873
ReThink: Reveal the Threat of Electromagnetic Interference on Power Inverters
<|reference_start|>ReThink: Reveal the Threat of Electromagnetic Interference on Power Inverters: With the boom of renewable energy sources (RES), the number of power inverters proliferates. Power inverters are the key electronic devices that transform the direct current (DC) power from RES to the alternating current (AC) power on the grids, and their security can affect the stable operation of RES and even power grids. This paper analyzes the security of photovoltaic (PV) inverters from the aspects of internal sensors since they serve as the foundation for safe power conversion. We discover that both the embedded current sensors and voltage sensors are vulnerable to electromagnetic interference (EMI) of 1 GHz or higher, despite electromagnetic compatibility (EMC) countermeasures. Such vulnerabilities can lead to incorrect measurements and deceiving the control algorithms, and we design ReThink that could produce three types of consequences on PV inverters by emitting carefully crafted EMI, i.e., Denial of Service (DoS), damaging inverters physically or damping the power output. We successfully validate these consequences on 5 off-the-shelf PV inverters, and even in a real-world microgrid, by transmitting EMI signals at a distance of 100-150cm and a total power within 20W. Our work aims to raise awareness of the security of power electronic devices of RES, as they represent an emerging Cyber-Physical attack surface to the future RES-dominated grid. Finally, to cope with such threats, we provide hardware and software-based countermeasures.<|reference_end|>
arxiv
@article{yang2024rethink:, title={ReThink: Reveal the Threat of Electromagnetic Interference on Power Inverters}, author={Fengchen Yang and Zihao Dan and Kaikai Pan and Chen Yan and Xiaoyu Ji and Wenyuan Xu}, journal={arXiv preprint arXiv:2409.17873}, year={2024}, doi={10.14722/ndss.2025.23691}, archivePrefix={arXiv}, eprint={2409.17873}, primaryClass={cs.CR} }
yang2024rethink:
arxiv-662350
2409.17874
DarkSAM: Fooling Segment Anything Model to Segment Nothing
<|reference_start|>DarkSAM: Fooling Segment Anything Model to Segment Nothing: Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks. Despite its promising prospect, the vulnerabilities of SAM, especially to universal adversarial perturbation (UAP) have not been thoroughly investigated yet. In this paper, we propose DarkSAM, the first prompt-free universal attack framework against SAM, including a semantic decoupling-based spatial attack and a texture distortion-based frequency attack. We first divide the output of SAM into foreground and background. Then, we design a shadow target strategy to obtain the semantic blueprint of the image as the attack target. DarkSAM is dedicated to fooling SAM by extracting and destroying crucial object features from images in both spatial and frequency domains. In the spatial domain, we disrupt the semantics of both the foreground and background in the image to confuse SAM. In the frequency domain, we further enhance the attack effectiveness by distorting the high-frequency components (i.e., texture information) of the image. Consequently, with a single UAP, DarkSAM renders SAM incapable of segmenting objects across diverse images with varying prompts. Experimental results on four datasets for SAM and its two variant models demonstrate the powerful attack capability and transferability of DarkSAM.<|reference_end|>
arxiv
@article{zhou2024darksam:, title={DarkSAM: Fooling Segment Anything Model to Segment Nothing}, author={Ziqi Zhou and Yufei Song and Minghui Li and Shengshan Hu and Xianlong Wang and Leo Yu Zhang and Dezhong Yao and Hai Jin}, journal={arXiv preprint arXiv:2409.17874}, year={2024}, archivePrefix={arXiv}, eprint={2409.17874}, primaryClass={cs.AI} }
zhou2024darksam:
arxiv-662351
2409.17876
Why Companies "Democratise" Artificial Intelligence: The Case of Open Source Software Donations
<|reference_start|>Why Companies "Democratise" Artificial Intelligence: The Case of Open Source Software Donations: Companies claim to "democratise" artificial intelligence (AI) when they donate AI open source software (OSS) to non-profit foundations or release AI models, among others, but what does this term mean and why do they do it? As the impact of AI on society and the economy grows, understanding the commercial incentives behind AI democratisation efforts is crucial for ensuring these efforts serve broader interests beyond commercial agendas. Towards this end, this study employs a mixed-methods approach to investigate commercial incentives for 43 AI OSS donations to the Linux Foundation. It makes contributions to both research and practice. It contributes a taxonomy of both individual and organisational social, economic, and technological incentives for AI democratisation. In particular, it highlights the role of democratising the governance and control rights of an OSS project (i.e., from one company to open governance) as a structural enabler for downstream goals, such as attracting external contributors, reducing development costs, and influencing industry standards, among others. Furthermore, OSS donations are often championed by individual developers within companies, highlighting the importance of the bottom-up incentives for AI democratisation. The taxonomy provides a framework and toolkit for discerning incentives for other AI democratisation efforts, such as the release of AI models. The paper concludes with a discussion of future research directions.<|reference_end|>
arxiv
@article{osborne2024why, title={Why Companies "Democratise" Artificial Intelligence: The Case of Open Source Software Donations}, author={Cailean Osborne}, journal={arXiv preprint arXiv:2409.17876}, year={2024}, archivePrefix={arXiv}, eprint={2409.17876}, primaryClass={cs.CY cs.AI cs.SE} }
osborne2024why
arxiv-662352
2409.17880
Self-Distilled Depth Refinement with Noisy Poisson Fusion
<|reference_start|>Self-Distilled Depth Refinement with Noisy Poisson Fusion: Depth refinement aims to infer high-resolution depth with fine-grained edges and details, refining low-resolution results of depth estimation models. The prevailing methods adopt tile-based manners by merging numerous patches, which lacks efficiency and produces inconsistency. Besides, prior arts suffer from fuzzy depth boundaries and limited generalizability. Analyzing the fundamental reasons for these limitations, we model depth refinement as a noisy Poisson fusion problem with local inconsistency and edge deformation noises. We propose the Self-distilled Depth Refinement (SDDR) framework to enforce robustness against the noises, which mainly consists of depth edge representation and edge-based guidance. With noisy depth predictions as input, SDDR generates low-noise depth edge representations as pseudo-labels by coarse-to-fine self-distillation. Edge-based guidance with edge-guided gradient loss and edge-based fusion loss serves as the optimization objective equivalent to Poisson fusion. When depth maps are better refined, the labels also become more noise-free. Our model can acquire strong robustness to the noises, achieving significant improvements in accuracy, edge quality, efficiency, and generalizability on five different benchmarks. Moreover, directly training another model with edge labels produced by SDDR brings improvements, suggesting that our method could help with training robust refinement models in future works.<|reference_end|>
arxiv
@article{li2024self-distilled, title={Self-Distilled Depth Refinement with Noisy Poisson Fusion}, author={Jiaqi Li and Yiran Wang and Jinghong Zheng and Zihao Huang and Ke Xian and Zhiguo Cao and Jianming Zhang}, journal={arXiv preprint arXiv:2409.17880}, year={2024}, archivePrefix={arXiv}, eprint={2409.17880}, primaryClass={cs.CV} }
li2024self-distilled
arxiv-662353
2409.17881
Discontinuous Reception with Adjustable Inactivity Timer for IIoT
<|reference_start|>Discontinuous Reception with Adjustable Inactivity Timer for IIoT: Discontinuous reception (DRX) is a key technology for reducing the energy consumption of industrial Internet of Things (IIoT) devices. Specifically, DRX allows the devices to operate in a low-power mode when no data reception is scheduled, and its effectiveness depends on the proper configuration of the DRX parameters. In this paper, we characterize the DRX process departing from a semi-Markov chain modeling. We detail two ways to set DRX parameters to minimize the device power consumption while meeting a mean delay constraint. The first method exhaustively searches for the optimal configuration. In contrast, the second method uses a low-complexity metaheuristic to find a sub-optimal configuration, thus considering ideal and practical DRX configurations. Notably, within the DRX parameters, the inactivity timer (IT) is a caution time that specifies how long a device remains active after the last information exchange. Traditionally, a device implementing DRX will restart the IT after each data reception as a precedent to a low-power mode. The usual approach lies in restarting the IT whenever new data is received during this cautious period, which might sometimes needlessly extend the active time. Herein, we propose a more efficient method in which the transmit base station (BS) explicitly indicates restarting the timer through the control channel only when appropriate. The decision is taken based on the BS's knowledge about its buffer status. We consider Poisson and bursty traffic models, which are typical in IIoT setups, and verify the suitability of our proposal for reducing the energy consumption of the devices without significantly compromising the communication latency through extensive numerical simulations. Specifically, energy-saving gains of up to 30% can be obtained regardless of the arrival rate and delay constraints.<|reference_end|>
arxiv
@article{ruíz-guirola2024discontinuous, title={Discontinuous Reception with Adjustable Inactivity Timer for IIoT}, author={David E. Ruíz-Guirola, Carlos A. Rodríguez-López, Onel L. A. López, Samuel Montejo-Sánchez, Vitalio Alfonso Reguera, and Matti Latva-aho}, journal={arXiv preprint arXiv:2409.17881}, year={2024}, doi={10.1109/TII.2024.3455010}, archivePrefix={arXiv}, eprint={2409.17881}, primaryClass={eess.SY cs.SY} }
ruíz-guirola2024discontinuous
arxiv-662354
2409.17882
Multi-UAV Enabled MEC Networks: Optimizing Delay through Intelligent 3D Trajectory Planning and Resource Allocation
<|reference_start|>Multi-UAV Enabled MEC Networks: Optimizing Delay through Intelligent 3D Trajectory Planning and Resource Allocation: Mobile Edge Computing (MEC) reduces the computational burden on terminal devices by shortening the distance between these devices and computing nodes. Integrating Unmanned Aerial Vehicles (UAVs) with enhanced MEC networks can leverage the high mobility of UAVs to flexibly adjust network topology, further expanding the applicability of MEC. However, in highly dynamic and complex real-world environments, it is crucial to balance task offloading effectiveness with algorithm performance. This paper investigates a multi-UAV communication network equipped with edge computing nodes to assist terminal users in task computation. Our goal is to reduce the task processing delay for users through the joint optimization of discrete computation modes, continuous 3D trajectories, and resource assignment. To address the challenges posed by the mixed action space, we propose a Multi-UAV Edge Computing Resource Scheduling (MUECRS) algorithm, which comprises two key components: 1) trajectory optimization, and 2) computation mode and resource management. Experimental results demonstrate our method effectively designs the 3D flight trajectories of UAVs, enabling rapid terminal coverage. Furthermore, the proposed algorithm achieves efficient resource deployment and scheduling, outperforming comparative algorithms by at least 16.7%, demonstrating superior adaptability and robustness.<|reference_end|>
arxiv
@article{wang2024multi-uav, title={Multi-UAV Enabled MEC Networks: Optimizing Delay through Intelligent 3D Trajectory Planning and Resource Allocation}, author={Zhiying Wang, Tianxi Wei, Gang Sun, Xinyue Liu, Hongfang Yu, Dusit Niyato}, journal={arXiv preprint arXiv:2409.17882}, year={2024}, archivePrefix={arXiv}, eprint={2409.17882}, primaryClass={cs.MA} }
wang2024multi-uav
arxiv-662355
2409.17885
Sentiment Analysis of ML Projects: Bridging Emotional Intelligence and Code Quality
<|reference_start|>Sentiment Analysis of ML Projects: Bridging Emotional Intelligence and Code Quality: This study explores the intricate relationship between sentiment analysis (SA) and code quality within machine learning (ML) projects, illustrating how the emotional dynamics of developers affect the technical and functional attributes of software projects. Recognizing the vital role of developer sentiments, this research employs advanced sentiment analysis techniques to scrutinize affective states from textual interactions such as code comments, commit messages, and issue discussions within high-profile ML projects. By integrating a comprehensive dataset of popular ML repositories, this analysis applies a blend of rule-based, machine learning, and hybrid sentiment analysis methodologies to systematically quantify sentiment scores. The emotional valence expressed by developers is then correlated with a spectrum of code quality indicators, including the prevalence of bugs, vulnerabilities, security hotspots, code smells, and duplication instances. Findings from this study distinctly illustrate that positive sentiments among developers are strongly associated with superior code quality metrics manifested through reduced bugs and lower incidence of code smells. This relationship underscores the importance of fostering positive emotional environments to enhance productivity and code craftsmanship. Conversely, the analysis reveals that negative sentiments correlate with an uptick in code issues, particularly increased duplication and heightened security risks, pointing to the detrimental effects of adverse emotional conditions on project health.<|reference_end|>
arxiv
@article{ahmed2024sentiment, title={Sentiment Analysis of ML Projects: Bridging Emotional Intelligence and Code Quality}, author={Md Shoaib Ahmed, Dongyoung Park, Nasir U. Eisty}, journal={arXiv preprint arXiv:2409.17885}, year={2024}, archivePrefix={arXiv}, eprint={2409.17885}, primaryClass={cs.SE} }
ahmed2024sentiment
arxiv-662356
2409.17886
Upper-Body Pose-based Gaze Estimation for Privacy-Preserving 3D Gaze Target Detection
<|reference_start|>Upper-Body Pose-based Gaze Estimation for Privacy-Preserving 3D Gaze Target Detection: Gaze Target Detection (GTD), i.e., determining where a person is looking within a scene from an external viewpoint, is a challenging task, particularly in 3D space. Existing approaches heavily rely on analyzing the person's appearance, primarily focusing on their face to predict the gaze target. This paper presents a novel approach to tackle this problem by utilizing the person's upper-body pose and available depth maps to extract a 3D gaze direction and employing a multi-stage or an end-to-end pipeline to predict the gazed target. When predicted accurately, the human body pose can provide valuable information about the head pose, which is a good approximation of the gaze direction, as well as the position of the arms and hands, which are linked to the activity the person is performing and the objects they are likely focusing on. Consequently, in addition to performing gaze estimation in 3D, we are also able to perform GTD simultaneously. We demonstrate state-of-the-art results on the most comprehensive publicly accessible 3D gaze target detection dataset without requiring images of the person's face, thus promoting privacy preservation in various application contexts. The code is available at https://github.com/intelligolabs/privacy-gtd-3D.<|reference_end|>
arxiv
@article{toaiari2024upper-body, title={Upper-Body Pose-based Gaze Estimation for Privacy-Preserving 3D Gaze Target Detection}, author={Andrea Toaiari, Vittorio Murino, Marco Cristani, Cigdem Beyan}, journal={arXiv preprint arXiv:2409.17886}, year={2024}, archivePrefix={arXiv}, eprint={2409.17886}, primaryClass={cs.CV} }
toaiari2024upper-body
arxiv-662357
2409.17889
A multi-source data power load forecasting method using attention mechanism-based parallel cnn-gru
<|reference_start|>A multi-source data power load forecasting method using attention mechanism-based parallel cnn-gru: Accurate power load forecasting is crucial for improving energy efficiency and ensuring power supply quality. The power load forecasting problem involves not only dynamic factors, such as historical load variations, but also static factors, such as climate conditions, that remain constant over specific periods. From a model-agnostic perspective, this paper proposes a parallel structure network to extract important information from both dynamic and static data. Firstly, based on complexity learning theory, it is demonstrated that models integrated through parallel structures exhibit superior generalization abilities compared to individual base learners. Additionally, the higher the independence between base learners, the stronger the generalization ability of the parallel structure model. This suggests that the structure of machine learning models inherently contains significant information. Building on this theoretical foundation, a parallel convolutional neural network (CNN)-gated recurrent unit (GRU) attention model (PCGA) is employed to address the power load forecasting issue, aiming to effectively integrate the influences of dynamic and static features. The CNN module is responsible for capturing spatial characteristics from static data, while the GRU module captures long-term dependencies in dynamic time series data. The attention layer is designed to focus on key information from the spatial-temporal features extracted by the parallel CNN-GRU. To substantiate the advantages of the parallel structure model in extracting and integrating multi-source information, a series of experiments are conducted.<|reference_end|>
arxiv
@article{min2024a, title={A multi-source data power load forecasting method using attention mechanism-based parallel cnn-gru}, author={Chao Min, Yijia Wang, Bo Zhang, Xin Ma and Junyi Cui}, journal={arXiv preprint arXiv:2409.17889}, year={2024}, archivePrefix={arXiv}, eprint={2409.17889}, primaryClass={cs.LG} }
min2024a
arxiv-662358
2409.17892
EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language Models
<|reference_start|>EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language Models: In this work, we introduce EMMA-500, a large-scale multilingual language model continue-trained on texts across 546 languages designed for enhanced multilingual performance, focusing on improving language coverage for low-resource languages. To facilitate continual pre-training, we compile the MaLA corpus, a comprehensive multilingual dataset enriched with curated datasets across diverse domains. Leveraging this corpus, we conduct extensive continual pre-training of the Llama 2 7B model, resulting in EMMA-500, which demonstrates robust performance across a wide collection of benchmarks, including a comprehensive set of multilingual tasks and PolyWrite, an open-ended generation benchmark developed in this study. Our results highlight the effectiveness of continual pre-training in expanding large language models' language capacity, particularly for underrepresented languages, demonstrating significant gains in cross-lingual transfer, task generalization, and language adaptability.<|reference_end|>
arxiv
@article{ji2024emma-500:, title={EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language Models}, author={Shaoxiong Ji, Zihao Li, Indraneil Paul, Jaakko Paavola, Peiqin Lin, Pinzhen Chen, Dayyán O'Brien, Hengyu Luo, Hinrich Schütze, Jörg Tiedemann, Barry Haddow}, journal={arXiv preprint arXiv:2409.17892}, year={2024}, archivePrefix={arXiv}, eprint={2409.17892}, primaryClass={cs.CL} }
ji2024emma-500:
arxiv-662359
2409.17895
Self-supervised Monocular Depth Estimation with Large Kernel Attention
<|reference_start|>Self-supervised Monocular Depth Estimation with Large Kernel Attention: Self-supervised monocular depth estimation has emerged as a promising approach since it does not rely on labeled training data. Most methods combine convolution and Transformer to model long-distance dependencies to estimate depth accurately. However, Transformer treats 2D image features as 1D sequences, and positional encoding only somewhat mitigates the loss of spatial information between different feature blocks while tending to overlook channel features, which limits the performance of depth estimation. In this paper, we propose a self-supervised monocular depth estimation network to obtain finer details. Specifically, we propose a decoder based on large kernel attention, which can model long-distance dependencies without compromising the two-dimensional structure of features while maintaining feature channel adaptivity. In addition, we introduce an up-sampling module to accurately recover the fine details in the depth map. Our method achieves competitive results on the KITTI dataset.<|reference_end|>
arxiv
@article{xiang2024self-supervised, title={Self-supervised Monocular Depth Estimation with Large Kernel Attention}, author={Xuezhi Xiang, Yao Wang, Lei Zhang, Denis Ombati, Himaloy Himu, Xiantong Zhen}, journal={arXiv preprint arXiv:2409.17895}, year={2024}, archivePrefix={arXiv}, eprint={2409.17895}, primaryClass={cs.CV} }
xiang2024self-supervised
arxiv-662360
2409.17896
Model-Free versus Model-Based Reinforcement Learning for Fixed-Wing UAV Attitude Control Under Varying Wind Conditions
<|reference_start|>Model-Free versus Model-Based Reinforcement Learning for Fixed-Wing UAV Attitude Control Under Varying Wind Conditions: This paper evaluates and compares the performance of model-free and model-based reinforcement learning for the attitude control of fixed-wing unmanned aerial vehicles using PID as a reference point. The comparison focuses on their ability to handle varying flight dynamics and wind disturbances in a simulated environment. Our results show that the Temporal Difference Model Predictive Control agent outperforms both the PID controller and other model-free reinforcement learning methods in terms of tracking accuracy and robustness over different reference difficulties, particularly in nonlinear flight regimes. Furthermore, we introduce actuation fluctuation as a key metric to assess energy efficiency and actuator wear, and we test two different approaches from the literature: action variation penalty and conditioning for action policy smoothness. We also evaluate all control methods when subject to stochastic turbulence and gusts separately, so as to measure their effects on tracking performance, observe their limitations and outline their implications on the Markov decision process formalism.<|reference_end|>
arxiv
@article{olivares2024model-free, title={Model-Free versus Model-Based Reinforcement Learning for Fixed-Wing UAV Attitude Control Under Varying Wind Conditions}, author={David Olivares, Pierre Fournier, Pavan Vasishta, Julien Marzat}, journal={In Proceedings of the 21st International Conference on Informatics in Control, Automation and Robotics (ICINCO 2024)}, year={2024}, archivePrefix={arXiv}, eprint={2409.17896}, primaryClass={cs.RO cs.LG cs.SY eess.SY} }
olivares2024model-free
arxiv-662361
2409.17898
MC-SEMamba: A Simple Multi-channel Extension of SEMamba
<|reference_start|>MC-SEMamba: A Simple Multi-channel Extension of SEMamba: Transformer-based models have become increasingly popular and have impacted speech-processing research owing to their exceptional performance in sequence modeling. Recently, a promising model architecture, Mamba, has emerged as a potential alternative to transformer-based models because of its efficient modeling of long sequences. In particular, models like SEMamba have demonstrated the effectiveness of the Mamba architecture in single-channel speech enhancement. This paper aims to adapt SEMamba for multi-channel applications with only a small increase in parameters. The resulting system, MC-SEMamba, achieved results on the CHiME3 dataset that were comparable or even superior to several previous baseline models. Additionally, we found that increasing the number of microphones from 1 to 6 improved the speech enhancement performance of MC-SEMamba.<|reference_end|>
arxiv
@article{ting2024mc-semamba:, title={MC-SEMamba: A Simple Multi-channel Extension of SEMamba}, author={Wen-Yuan Ting, Wenze Ren, Rong Chao, Hsin-Yi Lin, Yu Tsao, Fan-Gang Zeng}, journal={arXiv preprint arXiv:2409.17898}, year={2024}, archivePrefix={arXiv}, eprint={2409.17898}, primaryClass={eess.AS cs.SD} }
ting2024mc-semamba:
arxiv-662362
2409.17899
Revisiting Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations
<|reference_start|>Revisiting Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations: Emotion recognition from speech and music shares similarities due to their acoustic overlap, which has led to interest in transferring knowledge between these domains. However, the shared acoustic cues between speech and music, particularly those encoded by Self-Supervised Learning (SSL) models, remain largely unexplored, given the fact that SSL models for speech and music have rarely been applied in cross-domain research. In this work, we revisit the acoustic similarity between emotional speech and music, starting with an analysis of the layerwise behavior of SSL models for Speech Emotion Recognition (SER) and Music Emotion Recognition (MER). Furthermore, we perform cross-domain adaptation by comparing several approaches in a two-stage fine-tuning process, examining effective ways to utilize music for SER and speech for MER. Lastly, we explore the acoustic similarities between emotional speech and music using Fréchet audio distance for individual emotions, uncovering the issue of emotion bias in both speech and music SSL models. Our findings reveal that while speech and music SSL models do capture shared acoustic features, their behaviors can vary depending on different emotions due to their training strategies and domain-specificities. Additionally, parameter-efficient fine-tuning can enhance SER and MER performance by leveraging knowledge from each other. This study provides new insights into the acoustic similarity between emotional speech and music, and highlights the potential for cross-domain generalization to improve SER and MER systems.<|reference_end|>
arxiv
@article{sun2024revisiting, title={Revisiting Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations}, author={Yujia Sun, Zeyu Zhao, Korin Richmond, Yuanchao Li}, journal={arXiv preprint arXiv:2409.17899}, year={2024}, archivePrefix={arXiv}, eprint={2409.17899}, primaryClass={eess.AS cs.AI cs.CL cs.MM cs.SD} }
sun2024revisiting
arxiv-662363
2409.17902
Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices
<|reference_start|>Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices: The rapid expansion of Internet of Things (IoT) devices demands robust and resource-efficient security solutions. Physically Unclonable Functions (PUFs), which generate unique cryptographic keys from inherent hardware variations, offer a promising approach. However, traditional PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) are susceptible to machine learning (ML) and reliability-based attacks. In this study, we investigate Component-Differentially Challenged XOR-PUFs (CDC-XPUFs), a less explored variant, to address these vulnerabilities. We propose an optimized CDC-XPUF design that incorporates a pre-selection strategy to enhance reliability and introduces a novel lightweight architecture to reduce hardware overhead. Rigorous testing demonstrates that our design significantly lowers resource consumption, maintains strong resistance to ML attacks, and improves reliability, effectively mitigating reliability-based attacks. These results highlight the potential of CDC-XPUFs as a secure and efficient candidate for widespread deployment in resource-constrained IoT systems.<|reference_end|>
arxiv
@article{li2024designing, title={Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices}, author={Gaoxiang Li, Yu Zhuang}, journal={arXiv preprint arXiv:2409.17902}, year={2024}, archivePrefix={arXiv}, eprint={2409.17902}, primaryClass={cs.CR cs.LG} }
li2024designing
arxiv-662364
2409.17904
Learning to Love Edge Cases in Formative Math Assessment: Using the AMMORE Dataset and Chain-of-Thought Prompting to Improve Grading Accuracy
<|reference_start|>Learning to Love Edge Cases in Formative Math Assessment: Using the AMMORE Dataset and Chain-of-Thought Prompting to Improve Grading Accuracy: This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori, a learning platform used by students in several African countries, and conducts two experiments to evaluate the use of large language models (LLM) for grading particularly challenging student answers. The AMMORE dataset enables various potential analyses and provides an important resource for researching student math acquisition in understudied, real-world, educational contexts. In experiment 1 we use a variety of LLM-driven approaches, including zero-shot, few-shot, and chain-of-thought prompting, to grade the 1% of student answers that a rule-based classifier fails to grade accurately. We find that the best-performing approach -- chain-of-thought prompting -- accurately scored 92% of these edge cases, effectively boosting the overall accuracy of the grading from 98.7% to 99.9%. In experiment 2, we aim to better understand the consequential validity of the improved grading accuracy, by passing grades generated by the best-performing LLM-based approach to a Bayesian Knowledge Tracing (BKT) model, which estimated student mastery of specific lessons. We find that relatively modest improvements in model accuracy at the individual question level can lead to significant changes in the estimation of student mastery. Where the rule-based classifier currently used to grade student answers misclassified the mastery status of 6.9% of students across their completed lessons, using the LLM chain-of-thought approach this misclassification rate was reduced to 2.6% of students. Taken together, these findings suggest that LLMs could be a valuable tool for grading open-response questions in K-12 mathematics education, potentially encouraging wider adoption of open-ended questions in formative assessment.<|reference_end|>
arxiv
@article{henkel2024learning, title={Learning to Love Edge Cases in Formative Math Assessment: Using the AMMORE Dataset and Chain-of-Thought Prompting to Improve Grading Accuracy}, author={Owen Henkel, Hannah Horne-Robinson, Maria Dyshel, Nabil Ch, Baptiste Moreau-Pernet, Ralph Abood}, journal={arXiv preprint arXiv:2409.17904}, year={2024}, archivePrefix={arXiv}, eprint={2409.17904}, primaryClass={cs.AI} }
henkel2024learning
arxiv-662365
2409.17905
Rotation distance using flows
<|reference_start|>Rotation distance using flows: Splay trees are a simple and efficient dynamic data structure, invented by Sleator and Tarjan. The basic primitive for transforming a binary tree in this scheme is a rotation. Sleator, Tarjan, and Thurston proved that the maximum rotation distance between trees with n internal nodes is exactly 2n-6 (for n larger than some constant). The proof of the upper bound is easy but the proof of the lower bound, remarkably, uses sophisticated arguments based on calculating hyperbolic volumes. We give an elementary proof of the same result. The main interest of the paper lies in the method, which is new. It basically relies on a potential function argument, similar to many amortized analyses. However, the potential of a tree is not defined explicitly, but by constructing an instance of a flow problem and using the max-flow min-cut theorem.<|reference_end|>
arxiv
@article{mathieu2024rotation, title={Rotation distance using flows}, author={Claire Mathieu and William Thurston}, journal={arXiv preprint arXiv:2409.17905}, year={2024}, archivePrefix={arXiv}, eprint={2409.17905}, primaryClass={cs.DM cs.DS} }
mathieu2024rotation
arxiv-662366
2409.17906
Graph Reasoning with Large Language Models via Pseudo-code Prompting
<|reference_start|>Graph Reasoning with Large Language Models via Pseudo-code Prompting: Large language models (LLMs) have recently achieved remarkable success in various reasoning tasks in the field of natural language processing. This success of LLMs has also motivated their use in graph-related tasks. Among others, recent work has explored whether LLMs can solve graph problems such as counting the number of connected components of a graph or computing the shortest path distance between two nodes. Although LLMs possess preliminary graph reasoning abilities, they might still struggle to solve some seemingly simple problems. In this paper, we investigate whether prompting via pseudo-code instructions can improve the performance of LLMs in solving graph problems. Our experiments demonstrate that using pseudo-code instructions generally improves the performance of all considered LLMs. The graphs, pseudo-code prompts, and evaluation code are publicly available.<|reference_end|>
arxiv
@article{skianis2024graph, title={Graph Reasoning with Large Language Models via Pseudo-code Prompting}, author={Konstantinos Skianis, Giannis Nikolentzos, Michalis Vazirgiannis}, journal={arXiv preprint arXiv:2409.17906}, year={2024}, archivePrefix={arXiv}, eprint={2409.17906}, primaryClass={cs.LG} }
skianis2024graph
arxiv-662367
2409.17907
PhantomLiDAR: Cross-modality Signal Injection Attacks against LiDAR
<|reference_start|>PhantomLiDAR: Cross-modality Signal Injection Attacks against LiDAR: LiDAR (Light Detection and Ranging) is a pivotal sensor for autonomous driving, offering precise 3D spatial information. Previous signal attacks against LiDAR systems mainly exploit laser signals. In this paper, we investigate the possibility of cross-modality signal injection attacks, i.e., injecting intentional electromagnetic interference (IEMI) to manipulate LiDAR output. Our insight is that the internal modules of a LiDAR, i.e., the laser receiving circuit, the monitoring sensors, and the beam-steering modules, even with strict electromagnetic compatibility (EMC) testing, can still couple with the IEMI attack signals and result in the malfunction of LiDAR systems. Based on the above attack surfaces, we propose the PhantomLiDAR attack, which manipulates LiDAR output in terms of Points Interference, Points Injection, Points Removal, and even LiDAR Power-Off. We evaluate and demonstrate the effectiveness of PhantomLiDAR with both simulated and real-world experiments on five COTS LiDAR systems. We also conduct feasibility experiments in real-world moving scenarios. We provide potential defense measures that can be implemented at both the sensor level and the vehicle system level to mitigate the risks associated with IEMI attacks. Video demonstrations can be viewed at https://sites.google.com/view/phantomlidar.<|reference_end|>
arxiv
@article{jin2024phantomlidar:, title={PhantomLiDAR: Cross-modality Signal Injection Attacks against LiDAR}, author={Zizhi Jin, Qinhong Jiang, Xuancun Lu, Chen Yan, Xiaoyu Ji, Wenyuan Xu}, journal={arXiv preprint arXiv:2409.17907}, year={2024}, doi={10.14722/ndss.2025.23997}, archivePrefix={arXiv}, eprint={2409.17907}, primaryClass={eess.SP cs.AI cs.ET cs.SY eess.SY} }
jin2024phantomlidar:
arxiv-662368
2409.17908
LKA-ReID:Vehicle Re-Identification with Large Kernel Attention
<|reference_start|>LKA-ReID:Vehicle Re-Identification with Large Kernel Attention: With the rapid development of intelligent transportation systems and the popularity of smart city infrastructure, Vehicle Re-ID technology has become an important research field. The vehicle Re-ID task faces an important challenge, which is the high similarity between different vehicles. Existing methods use additional detection or segmentation models to extract differentiated local features. However, these methods either rely on additional annotations or greatly increase the computational cost. Using attention mechanisms to capture global and local features is crucial to solve the challenge of high similarity between classes in vehicle Re-ID tasks. In this paper, we propose LKA-ReID with large kernel attention. Specifically, the large kernel attention (LKA) utilizes the advantages of self-attention and also benefits from the advantages of convolution, which can extract the global and local features of the vehicle more comprehensively. We also introduce hybrid channel attention (HCA), which combines channel attention with spatial information, so that the model can better focus on channels and feature regions, and ignore background and other disturbing information. Experiments on the VeRi-776 dataset demonstrate the effectiveness of LKA-ReID, with mAP reaching 86.65% and Rank-1 reaching 98.03%.<|reference_end|>
arxiv
@article{xiang2024lka-reid:vehicle, title={LKA-ReID:Vehicle Re-Identification with Large Kernel Attention}, author={Xuezhi Xiang, Zhushan Ma, Lei Zhang, Denis Ombati, Himaloy Himu, Xiantong Zhen}, journal={arXiv preprint arXiv:2409.17908}, year={2024}, archivePrefix={arXiv}, eprint={2409.17908}, primaryClass={cs.CV} }
xiang2024lka-reid:vehicle
arxiv-662369
2409.17909
Unveiling the Potential of Graph Neural Networks in SME Credit Risk Assessment
<|reference_start|>Unveiling the Potential of Graph Neural Networks in SME Credit Risk Assessment: This paper takes the graph neural network as the technical framework, integrates the intrinsic connections between enterprise financial indicators, and proposes a model for enterprise credit risk assessment. The main research work includes: Firstly, based on the experience of predecessors, we selected 29 enterprise financial data indicators, abstracted each indicator as a vertex, deeply analyzed the relationships between the indicators, constructed a similarity matrix of indicators, and used the maximum spanning tree algorithm to achieve the graph structure mapping of enterprises; secondly, in the representation learning phase of the mapped graph, a graph neural network model was built to obtain its embedded representation. The feature vector of each node was expanded to 32 dimensions, and three GraphSAGE operations were performed on the graph, with the results pooled using the Pool operation, and the final output of three feature vectors was averaged to obtain the graph's embedded representation; finally, a classifier was constructed using a two-layer fully connected network to complete the prediction task. Experimental results on real enterprise data show that the model proposed in this paper can effectively perform multi-level credit rating estimation for enterprises. Furthermore, the tree-structured graph mapping deeply portrays the intrinsic connections of the various indicator data of the company, and according to the ROC and other evaluation criteria, the model's classification performance is significant and shows good robustness.<|reference_end|>
arxiv
@article{liu2024unveiling, title={Unveiling the Potential of Graph Neural Networks in SME Credit Risk Assessment}, author={Bingyao Liu, Iris Li, Jianhua Yao, Yuan Chen, Guanming Huang, Jiajing Wang}, journal={arXiv preprint arXiv:2409.17909}, year={2024}, archivePrefix={arXiv}, eprint={2409.17909}, primaryClass={q-fin.RM cs.CL cs.LG} }
liu2024unveiling
arxiv-662370
2409.17912
Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect
<|reference_start|>Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect: We introduce Atlas-Chat, the first-ever collection of large language models specifically developed for dialectal Arabic. Focusing on Moroccan Arabic, also known as Darija, we construct our instruction dataset by consolidating existing Darija language resources, creating novel datasets both manually and synthetically, and translating English instructions with stringent quality control. Atlas-Chat-9B and 2B models, fine-tuned on the dataset, exhibit superior ability in following Darija instructions and performing standard NLP tasks. Notably, our models outperform both state-of-the-art and Arabic-specialized LLMs like LLaMa, Jais, and AceGPT, e.g., achieving a 13% performance boost over a larger 13B model on DarijaMMLU, in our newly introduced evaluation suite for Darija covering both discriminative and generative tasks. Furthermore, we perform an experimental analysis of various fine-tuning strategies and base model choices to determine optimal configurations. All our resources are publicly accessible, and we believe our work offers comprehensive design methodologies of instruction-tuning for low-resource language variants, which are often neglected in favor of data-rich languages by contemporary LLMs.<|reference_end|>
arxiv
@article{shang2024atlas-chat, title={Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect}, author={Guokan Shang, Hadi Abdine, Yousef Khoubrane, Amr Mohamed, Yassine Abbahaddou, Sofiane Ennadir, Imane Momayiz, Xuguang Ren, Eric Moulines, Preslav Nakov, Michalis Vazirgiannis, Eric Xing}, journal={arXiv preprint arXiv:2409.17912}, year={2024}, archivePrefix={arXiv}, eprint={2409.17912}, primaryClass={cs.CL} }
shang2024atlas-chat
arxiv-662371
2409.17916
Observer-Based Discontinuous Communication in the Secondary Control of AC Microgrids
<|reference_start|>Observer-Based Discontinuous Communication in the Secondary Control of AC Microgrids: This paper proposes an observer-based event-driven approach to decrease the overuse of communication networks. The suggested approach aims to estimate the required data for sharing between units in line with as much communication reduction as possible. In other words, the proposed approach effectively determines which state variables should be shared (observer concept) among the units during specific time intervals (event-triggered concept). This strategy significantly reduces the overall communication load. It is shown that the estimation error remains bounded and Zeno behavior, characterized by an endless number of transmissions occurring within a limited time frame, does not occur. The proposed methodology can be systematically applied to any communication-based secondary controller in alternating current (AC) microgrids. Simulation results demonstrate a high degree of precision in estimating the states under the proposed approach. Also, the secondary controller performance under the proposed method is evaluated in MATLAB/Simulink environment.<|reference_end|>
arxiv
@article{najafi2024observer-based, title={Observer-Based Discontinuous Communication in the Secondary Control of AC Microgrids}, author={Shahabeddin Najafi, Yazdan Batmani, Pouya Shafiee, and Charalambos Konstantinou}, journal={arXiv preprint arXiv:2409.17916}, year={2024}, archivePrefix={arXiv}, eprint={2409.17916}, primaryClass={eess.SY cs.SY} }
najafi2024observer-based
arxiv-662372
2409.17917
WaSt-3D: Wasserstein-2 Distance for Scene-to-Scene Stylization on 3D Gaussians
<|reference_start|>WaSt-3D: Wasserstein-2 Distance for Scene-to-Scene Stylization on 3D Gaussians: While style transfer techniques have been well-developed for 2D image stylization, the extension of these methods to 3D scenes remains relatively unexplored. Existing approaches demonstrate proficiency in transferring colors and textures but often struggle with replicating the geometry of the scenes. In our work, we leverage an explicit Gaussian Splatting (GS) representation and directly match the distributions of Gaussians between style and content scenes using the Earth Mover's Distance (EMD). By employing the entropy-regularized Wasserstein-2 distance, we ensure that the transformation maintains spatial smoothness. Additionally, we decompose the scene stylization problem into smaller chunks to enhance efficiency. This paradigm shift reframes stylization from a pure generative process driven by latent space losses to an explicit matching of distributions between two Gaussian representations. Our method achieves high-resolution 3D stylization by faithfully transferring details from 3D style scenes onto the content scene. Furthermore, WaSt-3D consistently delivers results across diverse content and style scenes without necessitating any training, as it relies solely on optimization-based techniques. See our project page for additional results and source code: $\href{https://compvis.github.io/wast3d/}{https://compvis.github.io/wast3d/}$.<|reference_end|>
arxiv
@article{kotovenko2024wast-3d, title={WaSt-3D: Wasserstein-2 Distance for Scene-to-Scene Stylization on 3D Gaussians}, author={Dmytro Kotovenko, Olga Grebenkova, Nikolaos Sarafianos, Avinash Paliwal, Pingchuan Ma, Omid Poursaeed, Sreyas Mohan, Yuchen Fan, Yilei Li, Rakesh Ranjan, Bj{\"o}rn Ommer}, journal={arXiv preprint arXiv:2409.17917}, year={2024}, archivePrefix={arXiv}, eprint={2409.17917}, primaryClass={cs.CV} }
kotovenko2024wast-3d
arxiv-662373
2409.17920
Resolving Multi-Condition Confusion for Finetuning-Free Personalized Image Generation
<|reference_start|>Resolving Multi-Condition Confusion for Finetuning-Free Personalized Image Generation: Personalized text-to-image generation methods can generate customized images based on the reference images, which have garnered wide research interest. Recent methods propose a finetuning-free approach with a decoupled cross-attention mechanism to generate personalized images requiring no test-time finetuning. However, when multiple reference images are provided, the current decoupled cross-attention mechanism encounters the object confusion problem and fails to map each reference image to its corresponding object, thereby seriously limiting its scope of application. To address the object confusion problem, in this work we investigate the relevance of different positions of the latent image features to the target object in diffusion model, and accordingly propose a weighted-merge method to merge multiple reference image features into the corresponding objects. Next, we integrate this weighted-merge method into existing pre-trained models and continue to train the model on a multi-object dataset constructed from the open-sourced SA-1B dataset. To mitigate object confusion and reduce training costs, we propose an object quality score to estimate the image quality for the selection of high-quality training samples. Furthermore, our weighted-merge training framework can be employed on single-object generation when a single object has multiple reference images. The experiments verify that our method achieves superior performance to the state-of-the-arts on the Concept101 dataset and DreamBooth dataset of multi-object personalized image generation, and remarkably improves the performance on single-object personalized image generation. Our code is available at https://github.com/hqhQAQ/MIP-Adapter.<|reference_end|>
arxiv
@article{huang2024resolving, title={Resolving Multi-Condition Confusion for Finetuning-Free Personalized Image Generation}, author={Qihan Huang, Siming Fu, Jinlong Liu, Hao Jiang, Yipeng Yu, Jie Song}, journal={arXiv preprint arXiv:2409.17920}, year={2024}, archivePrefix={arXiv}, eprint={2409.17920}, primaryClass={cs.CV} }
huang2024resolving
arxiv-662374
2409.17922
Navigation in a simplified Urban Flow through Deep Reinforcement Learning
<|reference_start|>Navigation in a simplified Urban Flow through Deep Reinforcement Learning: The increasing number of unmanned aerial vehicles (UAVs) in urban environments requires a strategy to minimize their environmental impact, both in terms of energy efficiency and noise reduction. In order to reduce these concerns, novel strategies for developing prediction models and optimization of flight planning, for instance through deep reinforcement learning (DRL), are needed. Our goal is to develop DRL algorithms capable of enabling the autonomous navigation of UAVs in urban environments, taking into account the presence of buildings and other UAVs, optimizing the trajectories in order to reduce both energy consumption and noise. This is achieved using fluid-flow simulations which represent the environment in which UAVs navigate and training the UAV as an agent interacting with an urban environment. In this work, we consider a domain represented by a two-dimensional flow field with obstacles, ideally representing buildings, extracted from a three-dimensional high-fidelity numerical simulation. The presented methodology, using PPO+LSTM cells, was validated by reproducing a simple but fundamental problem in navigation, namely Zermelo's problem, which deals with a vessel navigating in a turbulent flow, travelling from a starting point to a target location, optimizing the trajectory. The current method shows a significant improvement with respect to both a simple PPO and a TD3 algorithm, with a success rate (SR) of the PPO+LSTM trained policy of 98.7%, and a crash rate (CR) of 0.1%, outperforming both PPO (SR = 75.6%, CR=18.6%) and TD3 (SR=77.4% and CR=14.5%). This is the first step towards DRL strategies which will guide UAVs in a three-dimensional flow field using real-time signals, making the navigation efficient in terms of flight time and avoiding damages to the vehicle.<|reference_end|>
arxiv
@article{tonti2024navigation, title={Navigation in a simplified Urban Flow through Deep Reinforcement Learning}, author={Federica Tonti, Jean Rabault, Ricardo Vinuesa}, journal={arXiv preprint arXiv:2409.17922}, year={2024}, archivePrefix={arXiv}, eprint={2409.17922}, primaryClass={cs.AI} }
tonti2024navigation
arxiv-662375
2409.17924
Neural Light Spheres for Implicit Image Stitching and View Synthesis
<|reference_start|>Neural Light Spheres for Implicit Image Stitching and View Synthesis: Challenging to capture, and challenging to display on a cellphone screen, the panorama paradoxically remains both a staple and underused feature of modern mobile camera applications. In this work we address both of these challenges with a spherical neural light field model for implicit panoramic image stitching and re-rendering; able to accommodate for depth parallax, view-dependent lighting, and local scene motion and color changes during capture. Fit during test-time to an arbitrary path panoramic video capture -- vertical, horizontal, random-walk -- these neural light spheres jointly estimate the camera path and a high-resolution scene reconstruction to produce novel wide field-of-view projections of the environment. Our single-layer model avoids expensive volumetric sampling, and decomposes the scene into compact view-dependent ray offset and color components, with a total model size of 80 MB per scene, and real-time (50 FPS) rendering at 1080p resolution. We demonstrate improved reconstruction quality over traditional image stitching and radiance field methods, with significantly higher tolerance to scene motion and non-ideal capture settings.<|reference_end|>
arxiv
@article{chugunov2024neural, title={Neural Light Spheres for Implicit Image Stitching and View Synthesis}, author={Ilya Chugunov, Amogh Joshi, Kiran Murthy, Francois Bleibel, Felix Heide}, journal={arXiv preprint arXiv:2409.17924}, year={2024}, doi={10.1145/3680528.3687660}, archivePrefix={arXiv}, eprint={2409.17924}, primaryClass={cs.CV} }
chugunov2024neural
arxiv-662376
2409.17928
Pioneering Reliable Assessment in Text-to-Image Knowledge Editing: Leveraging a Fine-Grained Dataset and an Innovative Criterion
<|reference_start|>Pioneering Reliable Assessment in Text-to-Image Knowledge Editing: Leveraging a Fine-Grained Dataset and an Innovative Criterion: During pre-training, the Text-to-Image (T2I) diffusion models encode factual knowledge into their parameters. These parameterized facts enable realistic image generation, but they may become obsolete over time, thereby misrepresenting the current state of the world. Knowledge editing techniques aim to update model knowledge in a targeted way. However, facing the dual challenges posed by inadequate editing datasets and an unreliable evaluation criterion, the development of T2I knowledge editing encounters difficulties in effectively generalizing injected knowledge. In this work, we design a T2I knowledge editing framework comprehensively spanning three phases: First, we curate a dataset \textbf{CAKE}, comprising paraphrase and multi-object tests, to enable more fine-grained assessment of knowledge generalization. Second, we propose a novel criterion, \textbf{adaptive CLIP threshold}, to effectively filter out false successful images under the current criterion and achieve reliable editing evaluation. Finally, we introduce \textbf{MPE}, a simple but effective approach for T2I knowledge editing. Instead of tuning parameters, MPE precisely recognizes and edits the outdated part of the conditioning text-prompt to accommodate the up-to-date knowledge. A straightforward implementation of MPE (based on in-context learning) exhibits better overall performance than previous model editors. We hope these efforts can further promote faithful evaluation of T2I knowledge editing methods.<|reference_end|>
arxiv
@article{gu2024pioneering, title={Pioneering Reliable Assessment in Text-to-Image Knowledge Editing: Leveraging a Fine-Grained Dataset and an Innovative Criterion}, author={Hengrui Gu, Kaixiong Zhou, Yili Wang, Ruobing Wang, Xin Wang}, journal={arXiv preprint arXiv:2409.17928}, year={2024}, archivePrefix={arXiv}, eprint={2409.17928}, primaryClass={cs.CL cs.AI} }
gu2024pioneering
arxiv-662377
2409.17929
The Lou Dataset -- Exploring the Impact of Gender-Fair Language in German Text Classification
<|reference_start|>The Lou Dataset -- Exploring the Impact of Gender-Fair Language in German Text Classification: Gender-fair language, an evolving German linguistic variation, fosters inclusion by addressing all genders or using neutral forms. Nevertheless, there is a significant lack of resources to assess the impact of this linguistic shift on classification using language models (LMs), which are probably not trained on such variations. To address this gap, we present Lou, the first dataset featuring high-quality reformulations for German text classification covering seven tasks, like stance detection and toxicity classification. Evaluating 16 mono- and multi-lingual LMs on Lou shows that gender-fair language substantially impacts predictions by flipping labels, reducing certainty, and altering attention patterns. However, existing evaluations remain valid, as LM rankings of original and reformulated instances do not significantly differ. While we offer initial insights on the effect on German text classification, the findings likely apply to other languages, as consistent patterns were observed in multi-lingual and English LMs.<|reference_end|>
arxiv
@article{waldis2024the, title={The Lou Dataset -- Exploring the Impact of Gender-Fair Language in German Text Classification}, author={Andreas Waldis and Joel Birrer and Anne Lauscher and Iryna Gurevych}, journal={arXiv preprint arXiv:2409.17929}, year={2024}, archivePrefix={arXiv}, eprint={2409.17929}, primaryClass={cs.CL} }
waldis2024the
arxiv-662378
2409.17931
Intelligent Energy Management: Remaining Useful Life Prediction and Charging Automation System Comprised of Deep Learning and the Internet of Things
<|reference_start|>Intelligent Energy Management: Remaining Useful Life Prediction and Charging Automation System Comprised of Deep Learning and the Internet of Things: The Remaining Useful Life (RUL) of a battery is an important parameter for knowing the battery's remaining life and need for recharge. The goal of this research project is to develop machine learning-based models for the battery RUL dataset. Different ML models are developed to classify the RUL of the vehicle, and the IoT (Internet of Things) concept is simulated for automating the charging system and managing any faults that arise. The graphs plotted depict the relationship between various vehicle parameters using the Blynk IoT platform. Results show that the catboost, Multi-Layer Perceptron (MLP), Gated Recurrent Unit (GRU), and hybrid models developed could classify RUL into three classes with more than 99% accuracy. The data is fed using the tkinter GUI for simulating artificial intelligence (AI)-based charging, and with a pyserial backend, data can be entered into the Esp-32 microcontroller for making charge-discharge possible with the model's predictions. Also, with an IoT system, the charging can be disconnected, monitored, and analyzed for automation. The results show that an accuracy of 99% can be obtained with the MLP and catboost models, similar accuracy can be obtained with the GRU model, and finally relay-based triggering can be driven by the model's predictions for automating the charging and energy-saving mechanism. By showcasing an exemplary Blynk platform-based monitoring and automation setup, we further present innovative ways of monitoring parameters and automating the system.<|reference_end|>
arxiv
@article{paneru2024intelligent, title={Intelligent Energy Management: Remaining Useful Life Prediction and Charging Automation System Comprised of Deep Learning and the Internet of Things}, author={Biplov Paneru, Bishwash Paneru, DP Sharma Mainali}, journal={arXiv preprint arXiv:2409.17931}, year={2024}, archivePrefix={arXiv}, eprint={2409.17931}, primaryClass={cs.LG cs.AI cs.SY eess.SY} }
paneru2024intelligent
arxiv-662379
2409.17932
Sample compression unleashed: New generalization bounds for real valued losses
<|reference_start|>Sample compression unleashed: New generalization bounds for real valued losses: The sample compression theory provides generalization guarantees for predictors that can be fully defined using a subset of the training dataset and a (short) message string, generally defined as a binary sequence. Previous works provided generalization bounds for the zero-one loss, which is restrictive, notably when applied to deep learning approaches. In this paper, we present a general framework for deriving new sample compression bounds that hold for real-valued losses. We empirically demonstrate the tightness of the bounds and their versatility by evaluating them on different types of models, e.g., neural networks and decision forests, trained with the Pick-To-Learn (P2L) meta-algorithm, which transforms the training method of any machine-learning predictor to yield sample-compressed predictors. In contrast to existing P2L bounds, ours are valid in the non-consistent case.<|reference_end|>
arxiv
@article{bazinet2024sample, title={Sample Compression Unleashed: New Generalization Bounds for Real Valued Losses}, author={Mathieu Bazinet, Valentina Zantedeschi, Pascal Germain}, journal={arXiv preprint arXiv:2409.17932}, year={2024}, archivePrefix={arXiv}, eprint={2409.17932}, primaryClass={cs.LG} }
bazinet2024sample
arxiv-662380
2409.17937
Adaptive Stream Processing on Edge Devices through Active Inference
<|reference_start|>Adaptive Stream Processing on Edge Devices through Active Inference: The current scenario of IoT is witnessing a constant increase in the volume of data, which is generated in a constant stream, calling for novel architectural and logical solutions for processing it. Moving the data handling towards the edge of the computing spectrum guarantees better distribution of load and, in principle, lower latency and better privacy. However, managing such a structure is complex, especially when requirements, also referred to as Service Level Objectives (SLOs), specified by applications' owners and infrastructure managers need to be ensured. Despite the rich number of proposals of Machine Learning (ML) based management solutions, researchers and practitioners still struggle to guarantee long-term prediction and control, and accurate troubleshooting. Therefore, we present a novel ML paradigm based on Active Inference (AIF) -- a concept from neuroscience that describes how the brain constantly predicts and evaluates sensory information to decrease long-term surprise. We implement it and evaluate it in a heterogeneous real stream processing use case, where an AIF-based agent continuously optimizes the fulfillment of three SLOs for three autonomous driving services running on multiple devices. The agent used causal knowledge to gradually develop an understanding of how its actions are related to requirements fulfillment, and which configurations to favor. Through this approach, our agent requires up to thirty iterations to converge to the optimal solution, showing the capability of offering accurate results in a short amount of time. Furthermore, thanks to AIF and its causal structures, our method guarantees full transparency on the decision making, making the interpretation of the results and the troubleshooting effortless.<|reference_end|>
arxiv
@article{sedlak2024adaptive, title={Adaptive Stream Processing on Edge Devices through Active Inference}, author={Boris Sedlak, Victor Casamayor Pujol, Andrea Morichetta, Praveen Kumar Donta, and Schahram Dustdar}, journal={arXiv preprint arXiv:2409.17937}, year={2024}, archivePrefix={arXiv}, eprint={2409.17937}, primaryClass={cs.LG cs.DC} }
sedlak2024adaptive
arxiv-662381
2409.17938
Error bounds for Physics Informed Neural Networks in Nonlinear Schr\"odinger equations placed on unbounded domains
<|reference_start|>Error bounds for Physics Informed Neural Networks in Nonlinear Schr\"odinger equations placed on unbounded domains: We consider the subcritical nonlinear Schr\"odinger (NLS) equation in dimension one posed on the unbounded real line. Several previous works have considered the deep neural network approximation of NLS solutions from the numerical and theoretical point of view in the case of bounded domains. In this paper, we introduce a new PINNs method to treat the case of unbounded domains and provide rigorous bounds on the associated approximation error in terms of the energy and Strichartz norms, provided a reasonable integration scheme is available. Applications to traveling waves, breathers and solitons, as well as numerical experiments confirming the validity of the approximation, are also provided.<|reference_end|>
arxiv
@article{alejo2024error, title={Error bounds for Physics Informed Neural Networks in Nonlinear Schr\"odinger equations placed on unbounded domains}, author={Miguel {\'A}. Alejo, Lucrezia Cossetti, Luca Fanelli, Claudio Mu{\~n}oz and Nicol{\'a}s Valenzuela}, journal={arXiv preprint arXiv:2409.17938}, year={2024}, archivePrefix={arXiv}, eprint={2409.17938}, primaryClass={math.AP cs.NA math.NA} }
alejo2024error
arxiv-662382
2409.17939
Predicting Anchored Text from Translation Memories for Machine Translation Using Deep Learning Methods
<|reference_start|>Predicting Anchored Text from Translation Memories for Machine Translation Using Deep Learning Methods: Translation memories (TMs) are the backbone for professional translation tools called computer-aided translation (CAT) tools. In order to perform a translation using a CAT tool, a translator uses the TM to gather translations similar to the desired segment to translate (s'). Many CAT tools offer a fuzzy-match algorithm to locate segments (s) in the TM that are close in distance to s'. After locating two similar segments, the CAT tool will present parallel segments (s, t) that contain one segment in the source language along with its translation in the target language. Additionally, CAT tools contain fuzzy-match repair (FMR) techniques that will automatically use the parallel segments from the TM to create new TM entries containing a modified version of the original with the idea in mind that it will be the translation of s'. Most FMR techniques use machine translation as a way of "repairing" those words that have to be modified. In this article, we show that for a large part of those words which are anchored, we can use other techniques that are based on machine learning approaches such as Word2Vec, BERT, and even ChatGPT. Specifically, we show that for anchored words that follow the continuous bag-of-words (CBOW) paradigm, Word2Vec, BERT, and GPT-4 can be used to achieve similar and, for some cases, better results than neural machine translation for translating anchored words from French to English.<|reference_end|>
arxiv
@article{yue2024predicting, title={Predicting Anchored Text from Translation Memories for Machine Translation Using Deep Learning Methods}, author={Richard Yue, John E. Ortega}, journal={arXiv preprint arXiv:2409.17939}, year={2024}, archivePrefix={arXiv}, eprint={2409.17939}, primaryClass={cs.CL cs.AI cs.LG} }
yue2024predicting
arxiv-662383
2409.17941
Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense
<|reference_start|>Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense: Image manipulation detection and localization have received considerable attention from the research community given the blooming of Generative Models (GMs). Detection methods that follow a passive approach may overfit to specific GMs, limiting their application in real-world scenarios, due to the growing diversity of generative models. Recently, approaches based on a proactive framework have shown the possibility of dealing with this limitation. However, these methods suffer from two main limitations, which raises concerns about potential vulnerabilities: i) the manipulation detector is not robust to noise and hence can be easily fooled; ii) the fact that they rely on fixed perturbations for image protection offers a predictable exploit for malicious attackers, enabling them to reverse-engineer and evade detection. To overcome this issue we propose PADL, a new solution able to generate image-specific perturbations using a symmetric scheme of encoding and decoding based on cross-attention, which drastically reduces the possibility of reverse engineering, even when evaluated with adaptive attack [31]. Additionally, PADL is able to pinpoint manipulated areas, facilitating the identification of specific regions that have undergone alterations, and has more generalization power than prior art on held-out generative models. Indeed, although being trained only on an attribute manipulation GAN model [15], our method generalizes to a range of unseen models with diverse architectural designs, such as StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusionXL. Additionally, we introduce a novel evaluation protocol, which offers a fair evaluation of localisation performance in function of detection accuracy and better captures real-world scenarios.<|reference_end|>
arxiv
@article{bartolucci2024perturb, title={Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense}, author={Filippo Bartolucci, Iacopo Masi, Giuseppe Lisanti}, journal={arXiv preprint arXiv:2409.17941}, year={2024}, archivePrefix={arXiv}, eprint={2409.17941}, primaryClass={cs.CV} }
bartolucci2024perturb
arxiv-662384
2409.17943
On Translating Technical Terminology: A Translation Workflow for Machine-Translated Acronyms
<|reference_start|>On Translating Technical Terminology: A Translation Workflow for Machine-Translated Acronyms: The typical workflow for a professional translator to translate a document from its source language (SL) to a target language (TL) is not always focused on what many language models in natural language processing (NLP) do - predict the next word in a series of words. While high-resource languages like English and French are reported to achieve near human parity using common metrics for measurement such as BLEU and COMET, we find that an important step is being missed: the translation of technical terms, specifically acronyms. Some state-of-the art machine translation systems like Google Translate which are publicly available can be erroneous when dealing with acronyms - as much as 50% in our findings. This article addresses acronym disambiguation for MT systems by proposing an additional step to the SL-TL (FR-EN) translation workflow where we first offer a new acronym corpus for public consumption and then experiment with a search-based thresholding algorithm that achieves nearly 10% increase when compared to Google Translate and OpusMT.<|reference_end|>
arxiv
@article{yue2024on, title={On Translating Technical Terminology: A Translation Workflow for Machine-Translated Acronyms}, author={Richard Yue, John E. Ortega, Kenneth Ward Church}, journal={arXiv preprint arXiv:2409.17943}, year={2024}, archivePrefix={arXiv}, eprint={2409.17943}, primaryClass={cs.CL cs.AI cs.LG} }
yue2024on
arxiv-662385
2409.17945
Modular Autonomous Vehicle in Heterogeneous Traffic Flow: Modeling, Simulation, and Implication
<|reference_start|>Modular Autonomous Vehicle in Heterogeneous Traffic Flow: Modeling, Simulation, and Implication: Modular autonomous vehicles (MAVs) represent a groundbreaking concept that integrates modularity into the ongoing development of autonomous vehicles. This innovative design introduces unique features to traffic flow, allowing multiple modules to seamlessly join together and operate collectively. To understand the traffic flow characteristics involving these vehicles and their collective operations, this study established a modeling framework specifically designed to simulate their behavior within traffic flow. The mixed traffic flow, incorporating arbitrarily formed trains of various modular sizes, is modeled and studied. Simulations are conducted under varying levels of traffic demand and penetration rates to examine the traffic flow dynamics in the presence of these vehicles and their operations. The microscopic trajectories, MAV train compositions, and macroscopic fundamental diagrams of the mixed traffic flow are analyzed. The simulation findings indicate that integrating MAVs and their collective operations can substantially enhance capacity, with the extent of improvement depending on the penetration rate in mixed traffic flow. Notably, the capacity nearly doubles when the penetration rate exceeds 75%. Furthermore, their presence significantly influences and regulates the free-flow speed of the mixed traffic. Particularly, when variations in operational speed limits exist between the MAVs and the background traffic, the mixed traffic adjusts to the operating velocity of these vehicles. This study provides insights into potential future traffic flow systems incorporating emerging MAV technologies.<|reference_end|>
arxiv
@article{ye2024modular, title={Modular Autonomous Vehicle in Heterogeneous Traffic Flow: Modeling, Simulation, and Implication}, author={Lanhang Ye, Toshiyuki Yamamoto}, journal={arXiv preprint arXiv:2409.17945}, year={2024}, archivePrefix={arXiv}, eprint={2409.17945}, primaryClass={cs.MA cs.ET} }
ye2024modular
arxiv-662386
2409.17946
Backdoor Attacks for LLMs with Weak-To-Strong Knowledge Distillation
<|reference_start|>Backdoor Attacks for LLMs with Weak-To-Strong Knowledge Distillation: Despite being widely applied due to their exceptional capabilities, Large Language Models (LLMs) have been proven to be vulnerable to backdoor attacks. These attacks introduce targeted vulnerabilities into LLMs by poisoning training samples and full-parameter fine-tuning. However, this kind of backdoor attack is limited since they require significant computational resources, especially as the size of LLMs increases. Besides, parameter-efficient fine-tuning (PEFT) offers an alternative but the restricted parameter updating may impede the alignment of triggers with target labels. In this study, we first verify that backdoor attacks with PEFT may encounter challenges in achieving feasible performance. To address these issues and improve the effectiveness of backdoor attacks with PEFT, we propose a novel backdoor attack algorithm from weak to strong based on feature alignment-enhanced knowledge distillation (W2SAttack). Specifically, we poison small-scale language models through full-parameter fine-tuning to serve as the teacher model. The teacher model then covertly transfers the backdoor to the large-scale student model through feature alignment-enhanced knowledge distillation, which employs PEFT. Theoretical analysis reveals that W2SAttack has the potential to augment the effectiveness of backdoor attacks. We demonstrate the superior performance of W2SAttack on classification tasks across four language models, four backdoor attack algorithms, and two different architectures of teacher models. Experimental results indicate success rates close to 100% for backdoor attacks targeting PEFT.<|reference_end|>
arxiv
@article{zhao2024weak-to-strong, title={Weak-to-Strong Backdoor Attack for Large Language Models}, author={Shuai Zhao, Leilei Gan, Zhongliang Guo, Xiaobao Wu, Luwei Xiao, Xiaoyu Xu, Cong-Duy Nguyen, Luu Anh Tuan}, journal={arXiv preprint arXiv:2409.17946}, year={2024}, archivePrefix={arXiv}, eprint={2409.17946}, primaryClass={cs.CR cs.AI cs.CL} }
zhao2024weak-to-strong
arxiv-662387
2409.17950
An Achievable Rate-Distortion Region for Joint State and Message Communication over Multiple Access Channels
<|reference_start|>An Achievable Rate-Distortion Region for Joint State and Message Communication over Multiple Access Channels: This paper derives an achievable rate-distortion (R-D) region for the state-dependent discrete memoryless multiple access channel (SD-DMMAC), where the generalized feedback and causal side information are present at encoders, and the decoder performs the joint task of message decoding and state estimation. The Markov coding and backward-forward two-stage decoding schemes are adopted in the proof. This scenario is shown to be capable of modeling various integrated sensing and communication (ISAC) applications, including the monostatic-uplink system and multi-modal sensor networks, which are then studied as examples.<|reference_end|>
arxiv
@article{li2024an, title={An Achievable Rate-Distortion Region for Joint State and Message Communication over Multiple Access Channels}, author={Xinyang Li and Vlad C. Andrei and Ullrich J. M{\"o}nich and Holger Boche}, journal={arXiv preprint arXiv:2409.17950}, year={2024}, archivePrefix={arXiv}, eprint={2409.17950}, primaryClass={cs.IT math.IT} }
li2024an
arxiv-662388
2409.17951
Spatial Hierarchy and Temporal Attention Guided Cross Masking for Self-supervised Skeleton-based Action Recognition
<|reference_start|>Spatial Hierarchy and Temporal Attention Guided Cross Masking for Self-supervised Skeleton-based Action Recognition: In self-supervised skeleton-based action recognition, the mask reconstruction paradigm is gaining interest in enhancing model refinement and robustness through effective masking. However, previous works primarily relied on a single masking criterion, resulting in the model overfitting specific features and overlooking other effective information. In this paper, we introduce a hierarchy and attention guided cross-masking framework (HA-CM) that applies masking to skeleton sequences from both spatial and temporal perspectives. Specifically, in spatial graphs, we utilize hyperbolic space to maintain joint distinctions and effectively preserve the hierarchical structure of high-dimensional skeletons, employing joint hierarchy as the masking criterion. In temporal flows, we substitute traditional distance metrics with the global attention of joints for masking, addressing the convergence of distances in high-dimensional space and the lack of a global perspective. Additionally, we incorporate cross-contrast loss based on the cross-masking framework into the loss function to enhance the model's learning of instance-level features. HA-CM shows efficiency and universality on three public large-scale datasets, NTU-60, NTU-120, and PKU-MMD. The source code of our HA-CM is available at https://github.com/YinxPeng/HA-CM-main.<|reference_end|>
arxiv
@article{yin2024spatial, title={Spatial Hierarchy and Temporal Attention Guided Cross Masking for Self-supervised Skeleton-based Action Recognition}, author={Xinpeng Yin and Wenming Cao}, journal={arXiv preprint arXiv:2409.17951}, year={2024}, archivePrefix={arXiv}, eprint={2409.17951}, primaryClass={cs.CV} }
yin2024spatial
arxiv-662389
2409.17952
Participatory design: A systematic review and insights for future practice
<|reference_start|>Participatory design: A systematic review and insights for future practice: Participatory Design -- an iterative, flexible design process that uses the close involvement of stakeholders, most often end users -- is growing in use across design disciplines. As an increasing number of practitioners turn to Participatory Design (PD), it has become less rigidly defined, with stakeholders engaged to varying degrees through the use of disjointed techniques. This ambiguous understanding can be counterproductive when discussing PD processes. Our findings synthesize key decisions and approaches from design peers that can support others in engaging in PD practice. We investigated how scholars report the use of Participatory Design in the field through a systematic literature review. We found that a majority of PD literature examined specific case studies of PD (53 of 88 articles), with the design of intangible systems representing the most common design context (61 of 88 articles). Stakeholders most often participated throughout multiple stages of a design process (65 of 88 articles), recruited in a variety of ways and engaged in several of the 14 specific participatory techniques identified. This systematic review provides today's practitioners synthesized learnings from past Participatory Design processes to inform and improve future use of PD, attempting to remedy inequitable design by engaging directly with stakeholders and users.<|reference_end|>
arxiv
@article{wacnik2024participatory, title={Participatory design: A systematic review and insights for future practice}, author={Peter Wacnik and Shanna Daly and Aditi Verma}, journal={arXiv preprint arXiv:2409.17952}, year={2024}, archivePrefix={arXiv}, eprint={2409.17952}, primaryClass={cs.HC cs.CY physics.soc-ph} }
wacnik2024participatory
arxiv-662390
2409.17954
Enhancing elusive clues in knowledge learning by contrasting attention of language models
<|reference_start|>Enhancing elusive clues in knowledge learning by contrasting attention of language models: Causal language models acquire vast amounts of knowledge from general text corpora during pretraining, but the efficiency of knowledge learning is known to be unsatisfactory, especially when learning from knowledge-dense and small-sized corpora. The deficiency can come from long-distance dependencies which are hard to capture by language models, and overfitting to co-occurrence patterns and distracting clues in the training text. To address these issues, the paper proposes a method to enhance knowledge learning during language model pretraining, by enhancing elusive but important clues in text discovered by the language models themselves. We found that larger language models pay more attention to non-obvious but important clues, which are often overlooked by smaller language models. Therefore, we can identify these clues by contrasting the attention weights of large and small language models. We use the identified clues as a guide to perform token-dropout data augmentation on the training text, and observed a significant boost in both small and large models' performance in fact memorization. This shows that the behavior contrast between more and less-performant language models contains important clues for knowledge learning, and it can be ``amplified" for a straightforward improvement in knowledge learning efficiency.<|reference_end|>
arxiv
@article{gao2024enhancing, title={Enhancing elusive clues in knowledge learning by contrasting attention of language models}, author={Jian Gao and Xiao Zhang and Ji Wu and Miao Li}, journal={arXiv preprint arXiv:2409.17954}, year={2024}, archivePrefix={arXiv}, eprint={2409.17954}, primaryClass={cs.AI} }
gao2024enhancing
arxiv-662391
2409.17958
The Hard Positive Truth about Vision-Language Compositionality
<|reference_start|>The Hard Positive Truth about Vision-Language Compositionality: Several benchmarks have concluded that our best vision-language models (e.g., CLIP) are lacking in compositionality. Given an image, these benchmarks probe a model's ability to identify its associated caption amongst a set of compositional distractors. In response, a surge of recent proposals show improvements by finetuning CLIP with distractors as hard negatives. Our investigations reveal that these improvements have, in fact, been significantly overstated -- because existing benchmarks do not probe whether finetuned vision-language models remain invariant to hard positives. By curating an evaluation dataset with 112,382 hard negatives and hard positives, we uncover that including hard positives decreases CLIP's performance by 12.9%, while humans perform effortlessly at 99%. CLIP finetuned with hard negatives results in an even larger decrease, up to 38.7%. With this finding, we then produce a 1,775,259 image-text training set with both hard negative and hard positive captions. By training with both, we see improvements on existing benchmarks while simultaneously improving performance on hard positives, indicating a more robust improvement in compositionality. Our work suggests the need for future research to rigorously test and improve CLIP's understanding of semantic relationships between related "positive" concepts.<|reference_end|>
arxiv
@article{kamath2024the, title={The Hard Positive Truth about Vision-Language Compositionality}, author={Amita Kamath and Cheng-Yu Hsieh and Kai-Wei Chang and Ranjay Krishna}, journal={arXiv preprint arXiv:2409.17958}, year={2024}, archivePrefix={arXiv}, eprint={2409.17958}, primaryClass={cs.CL cs.CV} }
kamath2024the
arxiv-662392
2409.17959
A Policy Report Evaluating the National Assessment Program for Literacy and Numeracy (Naplan) Reform in Australia: The Impacts of High Stakes Assessment on Students
<|reference_start|>A Policy Report Evaluating the National Assessment Program for Literacy and Numeracy (Naplan) Reform in Australia: The Impacts of High Stakes Assessment on Students: The National Assessment Program for Literacy and Numeracy (NAPLAN) Reform in Australia, launched in 2008, has emerged as the country's most significant and contentious reform. However, due to its high-stakes nature and standardization, testing presents various challenges. These challenges include the combination of accountability with the 'My School' website, overlooking higher-order cognitive abilities, exacerbating students' anxiety and stress, and creating inequity for Language Background Other Than English (LBOTE) students. This report assesses the achievements and obstacles of the NAPLAN reform, proposing recommendations such as transitioning to online testing, enhancing content and platforms, increasing public assessment literacy, and investing more in LBOTE education. These suggestions aim to strike a balance between standardized testing and authentic educational pursuits, adapting to the evolving needs of students to create a fair, inclusive educational environment that addresses the demands of the 21st century.<|reference_end|>
arxiv
@article{zhang2024a, title={A Policy Report Evaluating the National Assessment Program for Literacy and Numeracy (Naplan) Reform in Australia: The Impacts of High Stakes Assessment on Students}, author={Wenya Zhang}, journal={Computer Science \& Information Technology (CS \& IT), ISSN: 2231-5403, Volume 14, Number 17, September 2024, https://airccse.org/csit/V14N17.html}, year={2024}, archivePrefix={arXiv}, eprint={2409.17959}, primaryClass={cs.CY} }
zhang2024a
arxiv-662393
2409.17961
SShaDe: scalable shape deformation via local representations
<|reference_start|>SShaDe: scalable shape deformation via local representations: With the increase in computational power for the available hardware, the demand for high-resolution data in computer graphics applications increases. Consequently, classical geometry processing techniques based on linear algebra solutions are starting to become obsolete. In this setting, we propose a novel approach for tackling mesh deformation tasks on high-resolution meshes. By reducing the input size with a fast remeshing technique and preserving a consistent representation of the original mesh with local reference frames, we provide a solution that is both scalable and robust in multiple applications, such as as-rigid-as-possible deformations, non-rigid isometric transformations, and pose transfer tasks. We extensively test our technique and compare it against state-of-the-art methods, proving that our approach can handle meshes with hundreds of thousands of vertices in tens of seconds while still achieving results comparable with the other solutions.<|reference_end|>
arxiv
@article{maggioli2024sshade:, title={SShaDe: scalable shape deformation via local representations}, author={Filippo Maggioli and Daniele Baieri and Zorah L{\"a}hner and Simone Melzi}, journal={arXiv preprint arXiv:2409.17961}, year={2024}, archivePrefix={arXiv}, eprint={2409.17961}, primaryClass={cs.GR} }
maggioli2024sshade:
arxiv-662394
2409.17963
CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors
<|reference_start|>CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors: Prior works on physical adversarial camouflage against vehicle detectors mainly focus on the effectiveness and robustness of the attack. The current most successful methods optimize 3D vehicle texture at a pixel level. However, this results in conspicuous and attention-grabbing patterns in the generated camouflage, which humans can easily identify. To address this issue, we propose a Customizable and Natural Camouflage Attack (CNCA) method by leveraging an off-the-shelf pre-trained diffusion model. By sampling the optimal texture image from the diffusion model with a user-specific text prompt, our method can generate natural and customizable adversarial camouflage while maintaining high attack performance. With extensive experiments on the digital and physical worlds and user studies, the results demonstrate that our proposed method can generate significantly more natural-looking camouflage than the state-of-the-art baselines while achieving competitive attack performance. Our code is available at \href{https://anonymous.4open.science/r/CNCA-1D54}{https://anonymous.4open.science/r/CNCA-1D54}<|reference_end|>
arxiv
@article{lyu2024cnca:, title={CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors}, author={Linye Lyu and Jiawei Zhou and Daojing He and Yu Li}, journal={arXiv preprint arXiv:2409.17963}, year={2024}, archivePrefix={arXiv}, eprint={2409.17963}, primaryClass={cs.CV} }
lyu2024cnca:
arxiv-662395
2409.17972
BEATS: Optimizing LLM Mathematical Capabilities with BackVerify and Adaptive Disambiguate based Efficient Tree Search
<|reference_start|>BEATS: Optimizing LLM Mathematical Capabilities with BackVerify and Adaptive Disambiguate based Efficient Tree Search: Large Language Models (LLMs) have exhibited exceptional performance across a broad range of tasks and domains. However, they still encounter difficulties in solving mathematical problems due to the rigorous and logical nature of mathematics. Previous studies have employed techniques such as supervised fine-tuning (SFT), prompt engineering, and search-based methods to improve the mathematical problem-solving abilities of LLMs. Despite these efforts, their performance remains suboptimal and demands substantial computational resources. To address this issue, we propose a novel approach, BEATS, to enhance mathematical problem-solving abilities. Our method leverages newly designed prompts that guide the model to iteratively rewrite, advance by one step, and generate answers based on previous steps. Additionally, we introduce a new back-verification technique that uses LLMs to validate the correctness of the generated answers. Furthermore, we employ a pruning tree search to optimize search time while achieving strong performance. Notably, our method improves Qwen2-7b-Instruct's score from 36.94 to 61.52, outperforming GPT4's 42.5 on the MATH benchmark.<|reference_end|>
arxiv
@article{sun2024beats:, title={BEATS: Optimizing LLM Mathematical Capabilities with BackVerify and Adaptive Disambiguate based Efficient Tree Search}, author={Linzhuang Sun and Hao Liang and Jingxuan Wei and Bihui Yu and Conghui He and Zenan Zhou and Wentao Zhang}, journal={arXiv preprint arXiv:2409.17972}, year={2024}, archivePrefix={arXiv}, eprint={2409.17972}, primaryClass={cs.CL cs.LG} }
sun2024beats:
arxiv-662396
2409.17977
Cross-Modality Attack Boosted by Gradient-Evolutionary Multiform Optimization
<|reference_start|>Cross-Modality Attack Boosted by Gradient-Evolutionary Multiform Optimization: In recent years, despite significant advancements in adversarial attack research, the security challenges in cross-modal scenarios, such as the transferability of adversarial attacks between infrared, thermal, and RGB images, have been overlooked. These heterogeneous image modalities collected by different hardware devices are widely prevalent in practical applications, and the substantial differences between modalities pose significant challenges to attack transferability. In this work, we explore a novel cross-modal adversarial attack strategy, termed multiform attack. We propose a dual-layer optimization framework based on gradient-evolution, facilitating efficient perturbation transfer between modalities. In the first layer of optimization, the framework utilizes image gradients to learn universal perturbations within each modality and employs evolutionary algorithms to search for shared perturbations with transferability across different modalities through secondary optimization. Through extensive testing on multiple heterogeneous datasets, we demonstrate the superiority and robustness of Multiform Attack compared to existing techniques. This work not only enhances the transferability of cross-modal adversarial attacks but also provides a new perspective for understanding security vulnerabilities in cross-modal systems.<|reference_end|>
arxiv
@article{gong2024cross-modality, title={Cross-Modality Attack Boosted by Gradient-Evolutionary Multiform Optimization}, author={Yunpeng Gong and Qingyuan Zeng and Dejun Xu and Zhenzhong Wang and Min Jiang}, journal={arXiv preprint arXiv:2409.17977}, year={2024}, archivePrefix={arXiv}, eprint={2409.17977}, primaryClass={cs.CV} }
gong2024cross-modality
arxiv-662397
2409.17978
HydraViT: Stacking Heads for a Scalable ViT
<|reference_start|>HydraViT: Stacking Heads for a Scalable ViT: The architecture of Vision Transformers (ViTs), particularly the Multi-head Attention (MHA) mechanism, imposes substantial hardware demands. Deploying ViTs on devices with varying constraints, such as mobile phones, requires multiple models of different sizes. However, this approach has limitations, such as training and storing each required model separately. This paper introduces HydraViT, a novel approach that addresses these limitations by stacking attention heads to achieve a scalable ViT. By repeatedly changing the size of the embedded dimensions throughout each layer and their corresponding number of attention heads in MHA during training, HydraViT induces multiple subnetworks. Thereby, HydraViT achieves adaptability across a wide spectrum of hardware environments while maintaining performance. Our experimental results demonstrate the efficacy of HydraViT in achieving a scalable ViT with up to 10 subnetworks, covering a wide range of resource constraints. HydraViT achieves up to 5 p.p. more accuracy with the same GMACs and up to 7 p.p. more accuracy with the same throughput on ImageNet-1K compared to the baselines, making it an effective solution for scenarios where hardware availability is diverse or varies over time. Source code available at https://github.com/ds-kiel/HydraViT.<|reference_end|>
arxiv
@article{haberer2024hydravit:, title={HydraViT: Stacking Heads for a Scalable ViT}, author={Janek Haberer and Ali Hojjat and Olaf Landsiedel}, journal={arXiv preprint arXiv:2409.17978}, year={2024}, archivePrefix={arXiv}, eprint={2409.17978}, primaryClass={cs.CV cs.AI cs.LG} }
haberer2024hydravit:
arxiv-662398
2409.17980
Formal verification of higher dimensional quantum protocols
<|reference_start|>Formal verification of higher dimensional quantum protocols: Formal methods have been a successful approach for modelling and verifying the correctness of complex technologies like microprocessor chip design, biological systems and others. This is the main motivation for developing quantum formal techniques, whose aim is to describe and analyse quantum information processing systems. Our previous work demonstrates the possibility of using a quantum process calculus called Communicating Quantum Processes (CQP) to model and describe higher dimensional quantum systems. By developing the theory to generalise the fundamental gates and Bell states, we have modelled quantum qudit protocols like teleportation and superdense coding in CQP. In this paper, we demonstrate the use of CQP to analyse higher dimensional quantum protocols. The main idea is to define two processes, one modelling the real protocol and the other expressing a specification, and prove that they are behaviourally equivalent. This is a work in progress and we present our preliminary results in extending the theory of behavioural equivalence in CQP to verify higher dimensional quantum protocols using qudits.<|reference_end|>
arxiv
@article{puthoor2024formal, title={Formal verification of higher dimensional quantum protocols}, author={Ittoop Vergheese Puthoor}, journal={arXiv preprint arXiv:2409.17980}, year={2024}, archivePrefix={arXiv}, eprint={2409.17980}, primaryClass={cs.FL quant-ph} }
puthoor2024formal
arxiv-662399
2409.17981
BlinkTrack: Feature Tracking over 100 FPS via Events and Images
<|reference_start|>BlinkTrack: Feature Tracking over 100 FPS via Events and Images: Feature tracking is crucial for structure from motion (SFM), simultaneous localization and mapping (SLAM), object tracking and various computer vision tasks. Event cameras, known for their high temporal resolution and ability to capture asynchronous changes, have gained significant attention for their potential in feature tracking, especially in challenging conditions. However, event cameras lack the fine-grained texture information that conventional cameras provide, leading to error accumulation in tracking. To address this, we propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking. Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches. This approach improves single-modality tracking, resolves ambiguities, and supports asynchronous data fusion. We also introduce new synthetic and augmented datasets to better evaluate our model. Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods, exceeding 100 FPS with preprocessed event data and 80 FPS with multi-modality data.<|reference_end|>
arxiv
@article{shen2024blinktrack:, title={BlinkTrack: Feature Tracking over 100 FPS via Events and Images}, author={Yichen Shen and Yijin Li and Shuo Chen and Guanglin Li and Zhaoyang Huang and Hujun Bao and Zhaopeng Cui and Guofeng Zhang}, journal={arXiv preprint arXiv:2409.17981}, year={2024}, archivePrefix={arXiv}, eprint={2409.17981}, primaryClass={cs.CV} }
shen2024blinktrack:
arxiv-662400
2409.17985
Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications
<|reference_start|>Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications: Semantic communications (SC) is an emerging communication paradigm in which wireless devices can send only relevant information from a source of data while relying on computing resources to regenerate missing data points. However, the design of a multi-user SC system becomes more challenging because of the computing and communication overhead required for coordination. Existing solutions for learning the semantic language and performing resource allocation often fail to capture the computing and communication tradeoffs involved in multiuser SC. To address this gap, a novel framework for decentralized computing and communication resource allocation in multiuser SC systems is proposed. The challenge of efficiently allocating communication and computing resources (for reasoning) in a decentralized manner to maximize the quality of task experience for the end users is addressed through the application of Stackelberg hyper game theory. Leveraging the concept of second-level hyper games, novel analytical formulations are developed to model misperceptions of the users about each other's communication and control strategies. Further, equilibrium analysis of the learned resource allocation protocols examines the convergence of the computing and communication strategies to a local Stackelberg equilibrium, considering misperceptions. Simulation results show that the proposed Stackelberg hyper game results in efficient usage of communication and computing resources while maintaining a high quality of experience for the users compared to the state of the art that does not account for the misperceptions.<|reference_end|>
arxiv
@article{thomas2024hypergame, title={Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications}, author={Christo Kurisummoottil Thomas and Walid Saad}, journal={arXiv preprint arXiv:2409.17985}, year={2024}, archivePrefix={arXiv}, eprint={2409.17985}, primaryClass={cs.IT cs.LG math.IT} }
thomas2024hypergame