Columns: corpus_id | paper_id | title | abstract | source | bibtex | citation_key
arxiv-663201
2409.19541
Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization
<|reference_start|>Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization: Language models frequently inherit societal biases from their training data. Numerous techniques have been proposed to mitigate these biases during both the pre-training and fine-tuning stages. However, fine-tuning a pre-trained debiased language model on a downstream task can reintroduce biases into the model. Additionally, existing debiasing methods for downstream tasks either (i) require labels of protected attributes (e.g., age, race, or political views) that are often not available or (ii) rely on indicators of bias such as gender-specific words, which restricts their applicability to gender debiasing. To address this, we introduce a novel debiasing regularization technique based on the class-wise variance of embeddings. Crucially, our method does not require attribute labels and targets any attribute, thus addressing the shortcomings of existing debiasing methods. Our experiments on encoder language models and three datasets demonstrate that our method outperforms existing strong debiasing baselines that rely on target attribute labels while maintaining performance on the target task.<|reference_end|>
arxiv
@article{masoudian2024unlabeled, title={Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization}, author={Shahed Masoudian, Markus Frohmann, Navid Rekabsaz, Markus Schedl}, journal={arXiv preprint arXiv:2409.19541}, year={2024}, archivePrefix={arXiv}, eprint={2409.19541}, primaryClass={cs.CL cs.AI} }
masoudian2024unlabeled
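The abstract above names the regularizer but not its exact form. As a minimal illustration of the idea, a class-wise low-variance penalty over task-class embeddings might look like the following PyTorch sketch; the function name, weighting, and hyperparameter are assumptions, not the paper's implementation:

```python
import torch

def class_wise_low_variance_loss(embeddings: torch.Tensor,
                                 labels: torch.Tensor) -> torch.Tensor:
    """Penalize the variance of encoder embeddings within each task class.

    embeddings: (batch, dim) encoder outputs
    labels:     (batch,) downstream task labels -- no protected-attribute
                labels are needed, which is the point of the method.
    """
    classes = labels.unique()
    loss = embeddings.new_zeros(())
    for c in classes:
        class_emb = embeddings[labels == c]
        if class_emb.shape[0] > 1:
            # mean per-dimension variance of this class's embeddings
            loss = loss + class_emb.var(dim=0, unbiased=False).mean()
    return loss / classes.numel()

# Training objective (lambda_reg is an assumed hyperparameter):
# total_loss = task_loss + lambda_reg * class_wise_low_variance_loss(emb, y)
```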
arxiv-663202
2409.19542
BiPC: Bidirectional Probability Calibration for Unsupervised Domain Adaption
<|reference_start|>BiPC: Bidirectional Probability Calibration for Unsupervised Domain Adaption: Unsupervised Domain Adaptation (UDA) leverages a labeled source domain to solve tasks in an unlabeled target domain. While Transformer-based methods have shown promise in UDA, their application is limited to plain Transformers, excluding Convolutional Neural Networks (CNNs) and hierarchical Transformers. To address these issues, we propose Bidirectional Probability Calibration (BiPC) from a probability space perspective. We demonstrate that the probability outputs from a pre-trained head, after extensive pre-training, are robust against domain gaps and can adjust the probability distribution of the task head. Moreover, the task head can enhance the pre-trained head during adaptation training, improving model performance through bidirectional complementation. Technically, we introduce Calibrated Probability Alignment (CPA) to adjust the pre-trained head's probabilities, such as those from an ImageNet-1k pre-trained classifier. Additionally, we design a Calibrated Gini Impurity (CGI) loss to refine the task head, with calibrated coefficients learned from the pre-trained classifier. BiPC is a simple yet effective method applicable to various networks, including CNNs and Transformers. Experimental results demonstrate its remarkable performance across multiple UDA tasks. Our code will be available at: https://github.com/Wenlve-Zhou/BiPC.<|reference_end|>
arxiv
@article{zhou2024bipc:, title={BiPC: Bidirectional Probability Calibration for Unsupervised Domain Adaption}, author={Wenlve Zhou, Zhiheng Zhou, Junyuan Shang, Chang Niu, Mingyue Zhang, Xiyuan Tao, Tianlei Wang}, journal={arXiv preprint arXiv:2409.19542}, year={2024}, doi={10.1016/j.eswa.2024.125460}, archivePrefix={arXiv}, eprint={2409.19542}, primaryClass={cs.CV} }
zhou2024bipc:
arxiv-663203
2409.19543
Multi-Query Shortest-Path Problem in Graphs of Convex Sets
<|reference_start|>Multi-Query Shortest-Path Problem in Graphs of Convex Sets: The Shortest-Path Problem in Graphs of Convex Sets (SPP in GCS) is a recently developed optimization framework that blends discrete and continuous decision making. Many relevant problems in robotics, such as collision-free motion planning, can be cast and solved as an SPP in GCS, yielding lower-cost solutions and faster runtimes than state-of-the-art algorithms. In this paper, we are motivated by motion planning of robot arms that must operate swiftly in static environments. We consider a multi-query extension of the SPP in GCS, where the goal is to efficiently precompute optimal paths between given sets of initial and target conditions. Our solution consists of two stages. Offline, we use semidefinite programming to compute a coarse lower bound on the problem's cost-to-go function. Then, online, this lower bound is used to incrementally generate feasible paths by solving short-horizon convex programs. For a robot arm with seven joints, our method designs higher quality trajectories up to two orders of magnitude faster than existing motion planners.<|reference_end|>
arxiv
@article{morozov2024multi-query, title={Multi-Query Shortest-Path Problem in Graphs of Convex Sets}, author={Savva Morozov, Tobia Marcucci, Alexandre Amice, Bernhard Paus Graesdal, Rohan Bosworth, Pablo A. Parrilo, Russ Tedrake}, journal={arXiv preprint arXiv:2409.19543}, year={2024}, archivePrefix={arXiv}, eprint={2409.19543}, primaryClass={cs.RO} }
morozov2024multi-query
arxiv-663204
2409.19545
Convergence-aware Clustered Federated Graph Learning Framework for Collaborative Inter-company Labor Market Forecasting
<|reference_start|>Convergence-aware Clustered Federated Graph Learning Framework for Collaborative Inter-company Labor Market Forecasting: Labor market forecasting on talent demand and supply is essential for business management and economic development. With accurate and timely forecasts, employers can adapt their recruitment strategies to align with the evolving labor market, and employees can proactively plan their career paths according to future demand and supply. However, previous studies ignore the interconnection between demand-supply sequences among different companies and positions when predicting variations. Moreover, companies are reluctant to share their private human resource data for global labor market analysis due to concerns over jeopardizing competitive advantage, security threats, and potential ethical or legal violations. To this end, in this paper, we formulate the Federated Labor Market Forecasting (FedLMF) problem and propose a Meta-personalized Convergence-aware Clustered Federated Learning (MPCAC-FL) framework to provide accurate and timely collaborative talent demand and supply prediction in a privacy-preserving way. First, we design a graph-based sequential model to capture the inherent correlation between demand and supply sequences and company-position pairs. Second, we adopt meta-learning techniques to learn effective initial model parameters that can be shared across companies, allowing personalized models to be optimized for forecasting company-specific demand and supply, even when companies have heterogeneous data. Third, we devise a Convergence-aware Clustering algorithm to dynamically divide companies into groups according to model similarity and apply federated aggregation in each group. This alleviates heterogeneity, leading to more stable convergence and better performance. Extensive experiments demonstrate that MPCAC-FL outperforms the compared baselines on three real-world datasets and achieves over 97% of the performance of the state-of-the-art model, i.e., DH-GEM, without exposing private company data.<|reference_end|>
arxiv
@article{guo2024convergence-aware, title={Convergence-aware Clustered Federated Graph Learning Framework for Collaborative Inter-company Labor Market Forecasting}, author={Zhuoning Guo, Hao Liu, Le Zhang, Qi Zhang, Hengshu Zhu, Hui Xiong}, journal={arXiv preprint arXiv:2409.19545}, year={2024}, archivePrefix={arXiv}, eprint={2409.19545}, primaryClass={cs.LG} }
guo2024convergence-aware
arxiv-663205
2409.19546
Almost Sure Convergence of Average Reward Temporal Difference Learning
<|reference_start|>Almost Sure Convergence of Average Reward Temporal Difference Learning: Tabular average reward Temporal Difference (TD) learning is perhaps the simplest and the most fundamental policy evaluation algorithm in average reward reinforcement learning. More than 25 years after its discovery, we are finally able to provide a long-awaited almost sure convergence analysis. Namely, we are the first to prove that, under very mild conditions, tabular average reward TD converges almost surely to a sample path dependent fixed point. Key to this success is a new general stochastic approximation result concerning nonexpansive mappings with Markovian and additive noise, built on recent advances in stochastic Krasnoselskii-Mann iterations.<|reference_end|>
arxiv
@article{blaser2024almost, title={Almost Sure Convergence of Average Reward Temporal Difference Learning}, author={Ethan Blaser, Shangtong Zhang}, journal={arXiv preprint arXiv:2409.19546}, year={2024}, archivePrefix={arXiv}, eprint={2409.19546}, primaryClass={cs.LG cs.AI math.OC stat.ML} }
blaser2024almost
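For readers unfamiliar with the algorithm whose convergence the paper establishes, tabular average-reward TD(0) follows a textbook update; a minimal sketch (the `env_step` helper is hypothetical) is:

```python
import numpy as np

def tabular_average_reward_td(env_step, num_states,
                              alpha=0.05, beta=0.01, num_steps=100_000):
    """Tabular average-reward TD(0). `env_step(s) -> (r, s_next)` is an
    assumed helper that samples one transition under the evaluated policy."""
    v = np.zeros(num_states)   # differential (relative) value estimates
    r_bar = 0.0                # running estimate of the average reward
    s = 0
    for _ in range(num_steps):
        r, s_next = env_step(s)
        delta = r - r_bar + v[s_next] - v[s]   # average-reward TD error
        v[s] += alpha * delta                  # value update
        r_bar += beta * delta                  # average-reward update
        s = s_next
    return v, r_bar
```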
arxiv-663206
2409.19548
Meta Learning to Rank for Sparsely Supervised Queries
<|reference_start|>Meta Learning to Rank for Sparsely Supervised Queries: Supervisory signals are a critical resource for training learning to rank models. In many real-world search and retrieval scenarios, these signals may not be readily available or could be costly to obtain for some queries. Examples include domains where labeling requires professional expertise, applications with strong privacy constraints, and user engagement information that is too scarce. We refer to these scenarios as sparsely supervised queries, which pose significant challenges to traditional learning to rank models. In this work, we address sparsely supervised queries by proposing a novel meta learning to rank framework which leverages the fast learning and adaptation capability of meta-learning. The proposed approach accounts for the fact that different queries have different optimal parameters for their rankers, in contrast to traditional learning to rank models which only learn a global ranking model applied to all queries. Consequently, the proposed method yields significant advantages, especially when new queries differ in characteristics from the training queries. Moreover, the proposed meta learning to rank framework is generic and flexible. We conduct a set of comprehensive experiments on both public datasets and a real-world e-commerce dataset. The results demonstrate that the proposed meta-learning approach can significantly enhance the performance of learning to rank models with sparsely labeled queries.<|reference_end|>
arxiv
@article{wu2024meta, title={Meta Learning to Rank for Sparsely Supervised Queries}, author={Xuyang Wu, Ajit Puthenputhussery, Hongwei Shang, Changsung Kang and Yi Fang}, journal={arXiv preprint arXiv:2409.19548}, year={2024}, doi={10.1145/3698876}, archivePrefix={arXiv}, eprint={2409.19548}, primaryClass={cs.IR} }
wu2024meta
arxiv-663207
2409.19550
Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion
<|reference_start|>Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion: The similarity matrix serves as a fundamental tool at the core of numerous downstream machine-learning tasks. However, missing data is inevitable and often results in an inaccurate similarity matrix. To address this issue, Similarity Matrix Completion (SMC) methods have been proposed, but they suffer from high computation complexity due to the Singular Value Decomposition (SVD) operation. To reduce the computation complexity, Matrix Factorization (MF) techniques are more explicit and frequently applied to provide a low-rank solution; however, the exact low-rank optimal solution cannot be guaranteed since the problem has a non-convex structure. In this paper, we introduce a novel SMC framework that offers a more reliable and efficient solution. Specifically, beyond simply utilizing the unique Positive Semi-definiteness (PSD) property to guide the completion process, our approach further complements a carefully designed rank-minimization regularizer, aiming to achieve an optimal and low-rank solution. Based on the key insight that the underlying PSD and low-rank properties improve SMC performance, we present two novel, scalable, and effective algorithms, SMCNN and SMCNmF, which investigate the PSD property to guide the estimation process and incorporate a nonconvex low-rank regularizer to ensure a low-rank solution. Theoretical analysis ensures better estimation performance and convergence speed. Empirical results on real-world datasets demonstrate the superiority and efficiency of our proposed methods compared to various baseline methods.<|reference_end|>
arxiv
@article{ma2024tailed, title={Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion}, author={Changyi Ma, Runsheng Yu, Xiao Chen, Youzhi Zhang}, journal={arXiv preprint arXiv:2409.19550}, year={2024}, archivePrefix={arXiv}, eprint={2409.19550}, primaryClass={cs.LG} }
ma2024tailed
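The SMCNN/SMCNmF algorithms themselves are not specified in the abstract; the NumPy sketch below only illustrates the core idea it highlights, namely that factorizing S ~= U @ U.T enforces positive semi-definiteness while capping the rank. The learning rate, rank, and plain masked Frobenius loss are illustrative assumptions (the paper adds a nonconvex low-rank regularizer on top):

```python
import numpy as np

def psd_low_rank_complete(s_obs, mask, rank=10, lr=1e-3, num_iters=2000):
    """Complete a similarity matrix under the model S ~= U @ U.T.

    s_obs: (n, n) symmetric matrix with observed similarities
    mask:  (n, n) binary matrix, 1 where s_obs is observed
    The factorization makes the completed matrix PSD by construction,
    and its rank is at most `rank`.
    """
    n = s_obs.shape[0]
    rng = np.random.default_rng(0)
    u = 0.1 * rng.standard_normal((n, rank))
    for _ in range(num_iters):
        e = mask * (u @ u.T - s_obs)   # residual on observed entries only
        grad = 2.0 * (e + e.T) @ u     # gradient of ||mask * (UU^T - S)||_F^2
        u -= lr * grad
    return u @ u.T                     # PSD, rank-limited completion
```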
arxiv-663208
2409.19552
A Universal Deep Learning Framework for Materials X-ray Absorption Spectra
<|reference_start|>A Universal Deep Learning Framework for Materials X-ray Absorption Spectra: X-ray absorption spectroscopy (XAS) is a powerful characterization technique for probing the local chemical environment of absorbing atoms. However, analyzing XAS data presents significant challenges, often requiring extensive, computationally intensive simulations, as well as significant domain expertise. These limitations hinder the development of fast, robust XAS analysis pipelines that are essential in high-throughput studies and for autonomous experimentation. We address these challenges with a suite of transfer learning approaches for XAS prediction, each uniquely contributing to improved accuracy and efficiency, as demonstrated on a K-edge spectra database covering eight 3d transition metals (Ti-Cu). Our framework is built upon three distinct strategies. First, we use M3GNet to derive latent representations of the local chemical environment of absorption sites as input for XAS prediction, achieving up to order-of-magnitude improvements over conventional featurization techniques. Second, we employ a hierarchical transfer learning strategy, training a universal multi-task model across elements before fine-tuning for element-specific predictions. After element-wise fine-tuning, this cascaded approach yields models that outperform element-specific models by up to 31\%. Third, we implement cross-fidelity transfer learning, adapting a universal model to predict spectra generated by a simulation of different fidelity with a much higher computational cost. This approach improves prediction accuracy by up to 24\% over models trained on the target fidelity alone. Our approach is extendable to XAS prediction for a broader range of elements and offers a generalizable transfer learning framework to enhance other deep-learning models in materials science.<|reference_end|>
arxiv
@article{kharel2024a, title={A Universal Deep Learning Framework for Materials X-ray Absorption Spectra}, author={Shubha R. Kharel, Fanchen Meng, Xiaohui Qu, Matthew R. Carbone, Deyu Lu}, journal={arXiv preprint arXiv:2409.19552}, year={2024}, archivePrefix={arXiv}, eprint={2409.19552}, primaryClass={cond-mat.mtrl-sci cs.AI cs.LG} }
kharel2024a
arxiv-663209
2409.19554
Tri-Cam: Practical Eye Gaze Tracking via Camera Network
<|reference_start|>Tri-Cam: Practical Eye Gaze Tracking via Camera Network: Human eyes serve as conduits of rich information, unveiling emotions, intentions, and even aspects of an individual's health and overall well-being; gaze tracking thus enables various human-computer interaction applications, as well as insights in psychological and medical research. However, existing gaze tracking solutions fall short in handling free user movement and require laborious user effort in system calibration. We introduce Tri-Cam, a practical deep learning-based gaze tracking system using three affordable RGB webcams. It features a split network structure for efficient training, as well as designated network designs to handle the separated gaze tracking tasks. Tri-Cam is also equipped with an implicit calibration module, which makes use of mouse click opportunities to reduce calibration overhead on the user's end. We evaluate Tri-Cam against Tobii, the state-of-the-art commercial eye tracker, achieving comparable accuracy while supporting a wider free movement area. In conclusion, Tri-Cam provides a user-friendly, affordable, and robust gaze tracking solution that could practically enable various applications.<|reference_end|>
arxiv
@article{yang2024tri-cam:, title={Tri-Cam: Practical Eye Gaze Tracking via Camera Network}, author={Sikai Yang, Wan Du}, journal={arXiv preprint arXiv:2409.19554}, year={2024}, archivePrefix={arXiv}, eprint={2409.19554}, primaryClass={cs.CV eess.IV} }
yang2024tri-cam:
arxiv-663210
2409.19560
Fast-Convergent and Communication-Alleviated Heterogeneous Hierarchical Federated Learning in Autonomous Driving
<|reference_start|>Fast-Convergent and Communication-Alleviated Heterogeneous Hierarchical Federated Learning in Autonomous Driving: Street Scene Semantic Understanding (denoted as TriSU) is a complex task for autonomous driving (AD). However, an inference model trained on data from a particular geographical region generalizes poorly when applied in other regions due to inter-city data domain-shift. Hierarchical Federated Learning (HFL) offers a potential solution for improving TriSU model generalization through collaborative privacy-preserving training over distributed datasets from different cities. Unfortunately, it suffers from slow convergence because data from different cities have disparate statistical properties. Going beyond existing HFL methods, we propose a Gaussian heterogeneous HFL algorithm (FedGau) to address inter-city data heterogeneity so that convergence can be accelerated. In the proposed FedGau algorithm, both single RGB images and RGB datasets are modelled as Gaussian distributions for aggregation weight design. This approach not only differentiates each RGB image by its respective statistical distribution, but also exploits the statistics of the dataset from each city in addition to the conventionally considered data volume. With the proposed approach, convergence is accelerated by 35.5\%-40.6\% compared to existing state-of-the-art (SOTA) HFL methods. On the other hand, to reduce the communication resources involved, we further introduce a novel performance-aware adaptive resource scheduling (AdapRS) policy. Unlike the traditional static resource scheduling policy that exchanges a fixed number of models between two adjacent aggregations, AdapRS adjusts the number of model aggregations at different levels of HFL so that unnecessary communications are minimized. Extensive experiments demonstrate that AdapRS saves 29.65\% communication overhead compared to the conventional static resource scheduling policy while maintaining almost the same performance.<|reference_end|>
arxiv
@article{kou2024fast-convergent, title={Fast-Convergent and Communication-Alleviated Heterogeneous Hierarchical Federated Learning in Autonomous Driving}, author={Wei-Bin Kou, Qingfeng Lin, Ming Tang, Rongguang Ye, Shuai Wang, Guangxu Zhu, Yik-Chung Wu}, journal={arXiv preprint arXiv:2409.19560}, year={2024}, archivePrefix={arXiv}, eprint={2409.19560}, primaryClass={cs.LG cs.RO} }
kou2024fast-convergent
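The abstract says RGB data are modelled as Gaussians for aggregation weight design but does not give the weight formula. The sketch below is illustrative only: it weights clients by the closeness of their fitted diagonal Gaussian to the global one, using the closed-form 2-Wasserstein distance. The actual FedGau weight design may differ:

```python
import numpy as np

def gaussian_aggregation_weights(client_stats, global_stats):
    """Illustrative aggregation weights from per-city Gaussian statistics.

    client_stats: list of (mu, var) arrays, one per city's RGB data
    global_stats: (mu, var) of the pooled data
    Closer city (in 2-Wasserstein distance between diagonal Gaussians)
    -> larger weight. Not the paper's exact formula.
    """
    mu_g, var_g = global_stats
    d2 = np.array([
        np.sum((mu - mu_g) ** 2 + (np.sqrt(var) - np.sqrt(var_g)) ** 2)
        for mu, var in client_stats
    ])
    sims = np.exp(-d2 / (d2.mean() + 1e-12))
    return sims / sims.sum()
```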
arxiv-663211
2409.19561
Unifying back-propagation and forward-forward algorithms through model predictive control
<|reference_start|>Unifying back-propagation and forward-forward algorithms through model predictive control: We introduce a Model Predictive Control (MPC) framework for training deep neural networks, systematically unifying the Back-Propagation (BP) and Forward-Forward (FF) algorithms. At the same time, it gives rise to a range of intermediate training algorithms with varying look-forward horizons, leading to a performance-efficiency trade-off. We perform a precise analysis of this trade-off on a deep linear network, where the qualitative conclusions carry over to general networks. Based on our analysis, we propose a principled method to choose the optimization horizon based on given objectives and model specifications. Numerical results on various models and tasks demonstrate the versatility of our method.<|reference_end|>
arxiv
@article{ren2024unifying, title={Unifying back-propagation and forward-forward algorithms through model predictive control}, author={Lianhai Ren, Qianxiao Li}, journal={arXiv preprint arXiv:2409.19561}, year={2024}, archivePrefix={arXiv}, eprint={2409.19561}, primaryClass={cs.LG math.OC} }
ren2024unifying
arxiv-663212
2409.19563
CLIP-based Camera-Agnostic Feature Learning for Intra-camera Person Re-Identification
<|reference_start|>CLIP-based Camera-Agnostic Feature Learning for Intra-camera Person Re-Identification: The Contrastive Language-Image Pre-Training (CLIP) model excels in traditional person re-identification (ReID) tasks due to its inherent advantage in generating textual descriptions for pedestrian images. However, applying CLIP directly to intra-camera supervised person re-identification (ICS ReID) presents challenges. ICS ReID requires independent identity labeling within each camera, without associations across cameras. This limits the effectiveness of text-based enhancements. To address this, we propose a novel framework called CLIP-based Camera-Agnostic Feature Learning (CCAFL) for ICS ReID. Accordingly, two custom modules are designed to guide the model to actively learn camera-agnostic pedestrian features: Intra-Camera Discriminative Learning (ICDL) and Inter-Camera Adversarial Learning (ICAL). Specifically, we first establish learnable textual prompts for intra-camera pedestrian images to obtain crucial semantic supervision signals for subsequent intra- and inter-camera learning. Then, we design ICDL to increase inter-class variation by considering the hard positive and hard negative samples within each camera, thereby learning intra-camera finer-grained pedestrian features. Additionally, we propose ICAL to reduce inter-camera pedestrian feature discrepancies by penalizing the model's ability to predict the camera from which a pedestrian image originates, thus enhancing the model's capability to recognize pedestrians from different viewpoints. Extensive experiments on popular ReID datasets demonstrate the effectiveness of our approach. Especially on the challenging MSMT17 dataset, we achieve 58.9\% mAP, surpassing state-of-the-art methods by 7.6\%. Code will be available at: https://github.com/Trangle12/CCAFL.<|reference_end|>
arxiv
@article{tan2024clip-based, title={CLIP-based Camera-Agnostic Feature Learning for Intra-camera Person Re-Identification}, author={Xuan Tan, Xun Gong, Yang Xiang}, journal={arXiv preprint arXiv:2409.19563}, year={2024}, archivePrefix={arXiv}, eprint={2409.19563}, primaryClass={cs.CV cs.AI} }
tan2024clip-based
arxiv-663213
2409.19564
Hamster: A Fast Synchronous Byzantine Fault Tolerance Protocol
<|reference_start|>Hamster: A Fast Synchronous Byzantine Fault Tolerance Protocol: This paper introduces Hamster, a novel synchronous Byzantine Fault Tolerance protocol that achieves better performance and has a weaker dependency on synchrony. Specifically, Hamster employs coding techniques to significantly decrease communication complexity and addresses coding-related security issues. Consequently, Hamster achieves a throughput gain that increases linearly with the number of nodes, compared to Sync HotStuff. By adjusting the block size, Hamster outperforms Sync HotStuff in terms of both throughput and latency. Moreover, with minor modifications, Hamster can also function effectively in mobile sluggish environments, further reducing its dependency on strict synchrony. We implement Hamster and the experimental results demonstrate its performance advantages. Specifically, Hamster's throughput in a network of $9$ nodes is $2.5\times$ that of Sync HotStuff, and this gain increases to $10\times$ as the network scales to $65$ nodes.<|reference_end|>
arxiv
@article{fu2024hamster:, title={Hamster: A Fast Synchronous Byzantine Fault Tolerance Protocol}, author={Ximing Fu, Mo Li, Qingming Zeng, Tianyang Li, Shenghao Yang, Yonghui Guan, Chuanyi Liu}, journal={arXiv preprint arXiv:2409.19564}, year={2024}, archivePrefix={arXiv}, eprint={2409.19564}, primaryClass={cs.DC} }
fu2024hamster:
arxiv-663214
2409.19566
Abstractive Summarization of Low resourced Nepali language using Multilingual Transformers
<|reference_start|>Abstractive Summarization of Low resourced Nepali language using Multilingual Transformers: Automatic text summarization in the Nepali language is an unexplored area in natural language processing (NLP). Although considerable research has been dedicated to extractive summarization, the area of abstractive summarization, especially for low-resource languages such as Nepali, remains largely unexplored. This study explores the use of multilingual transformer models, specifically mBART and mT5, for generating headlines for Nepali news articles through abstractive summarization. The research addresses key challenges associated with summarizing texts in Nepali by first creating a summarization dataset through web scraping from various Nepali news portals. These multilingual models were then fine-tuned using different strategies. The performance of the fine-tuned models was then assessed using ROUGE scores and human evaluation to ensure the generated summaries were coherent and conveyed the original meaning. During the human evaluation, the participants were asked to select the best summary among those generated by the models, based on criteria such as relevance, fluency, conciseness, informativeness, factual accuracy, and coverage. In the evaluation with ROUGE scores, the 4-bit quantized mBART with LoRA model was found to be effective in generating better Nepali news headlines compared to the other models; it was also selected 34.05% of the time during the human evaluation, outperforming all other fine-tuned models created for Nepali news headline generation.<|reference_end|>
arxiv
@article{dhakal2024abstractive, title={Abstractive Summarization of Low resourced Nepali language using Multilingual Transformers}, author={Prakash Dhakal, Daya Sagar Baral}, journal={arXiv preprint arXiv:2409.19566}, year={2024}, archivePrefix={arXiv}, eprint={2409.19566}, primaryClass={cs.CL cs.AI} }
dhakal2024abstractive
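As a rough illustration of the best-performing setup named above (4-bit quantized mBART with LoRA), a standard transformers/peft configuration could look like the following sketch. The base checkpoint, target modules, and all hyperparameters are assumptions, not the paper's reported settings:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load an mBART checkpoint in 4-bit, then attach LoRA adapters.
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/mbart-large-50",          # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,   # illustrative values
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Fine-tune with a standard Seq2Seq trainer on (article, headline) pairs.
```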
arxiv-663215
2409.19567
Variance-Reduced Gradient Estimator for Nonconvex Zeroth-Order Distributed Optimization
<|reference_start|>Variance-Reduced Gradient Estimator for Nonconvex Zeroth-Order Distributed Optimization: This paper investigates distributed zeroth-order optimization for smooth nonconvex problems. We propose a novel variance-reduced gradient estimator, which randomly renovates one orthogonal direction of the true gradient in each iteration while leveraging historical snapshots for variance correction. By integrating this estimator with gradient tracking mechanism, we address the trade-off between convergence rate and sampling cost per zeroth-order gradient estimation that exists in current zeroth-order distributed optimization algorithms, which rely on either the 2-point or $2d$-point gradient estimators. We derive a convergence rate of $\mathcal{O}(d^{\frac{5}{2}}/m)$ for smooth nonconvex functions in terms of sampling number $m$ and problem dimension $d$. Numerical simulations comparing our algorithm with existing methods confirm the effectiveness and efficiency of the proposed gradient estimator.<|reference_end|>
arxiv
@article{mu2024variance-reduced, title={Variance-Reduced Gradient Estimator for Nonconvex Zeroth-Order Distributed Optimization}, author={Huaiyi Mu, Yujie Tang, Zhongkui Li}, journal={arXiv preprint arXiv:2409.19567}, year={2024}, archivePrefix={arXiv}, eprint={2409.19567}, primaryClass={math.OC cs.MA cs.SY eess.SY} }
mu2024variance-reduced
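For context on the trade-off the abstract above describes, the classic 2-point zeroth-order estimator (one of the baselines the paper's variance-reduced estimator improves upon) is easy to state; a minimal NumPy sketch:

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-4, rng=None):
    """Classic 2-point zeroth-order gradient estimator.

    Samples a uniform unit direction u and returns
    d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u,
    an estimate of the gradient from only two function evaluations.
    """
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                         # uniform on the sphere
    fd = (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta)
    return d * fd * u

# Example: estimate the gradient of f(x) = ||x||^2 at x = [1, 2]
# g = two_point_gradient_estimate(lambda z: np.dot(z, z), np.array([1., 2.]))
```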
arxiv-663216
2409.19568
Methods for Mitigating Uncertainty in Real-Time Operations of a Connected Microgrid
<|reference_start|>Methods for Mitigating Uncertainty in Real-Time Operations of a Connected Microgrid: In this paper, we compare the effectiveness of a two-stage control strategy for the energy management system (EMS) of a grid-connected microgrid under uncertain solar irradiance and load demand, using a real-world dataset from an island in Southeast Asia (SEA). The first stage computes a day-ahead commitment for the power profile exchanged with the main grid, while the second stage focuses on real-time controls to minimize the system operating cost. Given the challenges in accurately forecasting solar irradiance over a long time horizon, scenario-based stochastic programming (SP) is considered for the first stage. For the second stage, as the most recent weather conditions can be used, several methodologies to handle the uncertainties are investigated, including: (1) the rule-based method historically deployed on the EMS, (2) a model predictive controller (MPC) using either an explicit forecast or a scenario-based stochastic forecast, and (3) Deep Reinforcement Learning (DRL) computing its own implicit forecast through a distribution of costs. The performance of these methodologies is compared, in terms of precision, with a reference control assuming a perfect forecast -- i.e., representing the minimal operating cost achievable in theory. The obtained results show that MPC with a stochastic forecast outperforms MPC with a simple deterministic prediction. This suggests that using an explicit forecast, even within a short time window, is challenging. Using weather conditions can, however, be more efficient, as demonstrated by DRL (with implicit forecast), which outperforms MPC with a stochastic forecast by 1.3\%.<|reference_end|>
arxiv
@article{panda2024methods, title={Methods for Mitigating Uncertainty in Real-Time Operations of a Connected Microgrid}, author={Subrat Prasad Panda, Blaise Genest, Arvind Easwaran, R'emy Rigo-Mariani and PengFeng Lin}, journal={Sustainable Energy, Grids and Networks, 38, 101334 (2024)}, year={2024}, doi={10.1016/j.segan.2024.101334}, archivePrefix={arXiv}, eprint={2409.19568}, primaryClass={eess.SY cs.SY} }
panda2024methods
arxiv-663217
2409.19569
Fully Aligned Network for Referring Image Segmentation
<|reference_start|>Fully Aligned Network for Referring Image Segmentation: This paper focuses on the Referring Image Segmentation (RIS) task, which aims to segment objects from an image based on a given language description. The critical problem of RIS is achieving fine-grained alignment between different modalities to recognize and segment the target object. Recent advances using the attention mechanism for cross-modal interaction have achieved excellent progress. However, current methods tend to lack explicit principles of interaction design as guidelines, leading to inadequate cross-modal comprehension. Additionally, most previous works use a single-modal mask decoder for prediction, losing the advantage of full cross-modal alignment. To address these challenges, we present a Fully Aligned Network (FAN) that follows four cross-modal interaction principles. Under the guidance of reasonable rules, our FAN achieves state-of-the-art performance on the prevalent RIS benchmarks (RefCOCO, RefCOCO+, G-Ref) with a simple architecture.<|reference_end|>
arxiv
@article{liu2024fully, title={Fully Aligned Network for Referring Image Segmentation}, author={Yong Liu, Ruihao Xu, Yansong Tang}, journal={arXiv preprint arXiv:2409.19569}, year={2024}, archivePrefix={arXiv}, eprint={2409.19569}, primaryClass={cs.CV} }
liu2024fully
arxiv-663218
2409.19572
Mitigating the Negative Impact of Over-association for Conversational Query Production
<|reference_start|>Mitigating the Negative Impact of Over-association for Conversational Query Production: Conversational query generation aims at producing search queries from dialogue histories, which are then used to retrieve relevant knowledge from a search engine to help knowledge-based dialogue systems. Trained to maximize the likelihood of gold queries, previous models suffer from the data hunger issue, and they tend to both drop important concepts from dialogue histories and generate irrelevant concepts at inference time. We attribute these issues to the over-association phenomenon where a large number of gold queries are indirectly related to the dialogue topics, because annotators may unconsciously perform reasoning with their background knowledge when generating these gold queries. We carefully analyze the negative effects of this phenomenon on pretrained Seq2seq query producers and then propose effective instance-level weighting strategies for training to mitigate these issues from multiple perspectives. Experiments on two benchmarks, Wizard-of-Internet and DuSinc, show that our strategies effectively alleviate the negative effects and lead to significant performance gains (2%-5% across automatic metrics and human evaluation). Further analysis shows that our model selects better concepts from dialogue histories and is 10 times more data efficient than the baseline. The code is available at https://github.com/DeepLearnXMU/QG-OverAsso.<|reference_end|>
arxiv
@article{wang2024mitigating, title={Mitigating the Negative Impact of Over-association for Conversational Query Production}, author={Ante Wang, Linfeng Song, Zijun Min, Ge Xu, Xiaoli Wang, Junfeng Yao and Jinsong Su}, journal={arXiv preprint arXiv:2409.19572}, year={2024}, archivePrefix={arXiv}, eprint={2409.19572}, primaryClass={cs.CL cs.AI} }
wang2024mitigating
arxiv-663219
2409.19573
See then Tell: Enhancing Key Information Extraction with Vision Grounding
<|reference_start|>See then Tell: Enhancing Key Information Extraction with Vision Grounding: In the digital era, the ability to understand visually rich documents that integrate text, complex layouts, and imagery is critical. Traditional Key Information Extraction (KIE) methods primarily rely on Optical Character Recognition (OCR), which often introduces significant latency, computational overhead, and errors. Current advanced image-to-text approaches, which bypass OCR, typically yield plain text outputs without corresponding vision grounding. In this paper, we introduce STNet (See then Tell Net), a novel end-to-end model designed to deliver precise answers with relevant vision grounding. Distinctively, STNet utilizes a unique <see> token to observe pertinent image areas, aided by a decoder that interprets physical coordinates linked to this token. Positioned at the outset of the answer text, the <see> token allows the model to first see--observing the regions of the image related to the input question--and then tell--providing articulated textual responses. To enhance the model's seeing capabilities, we collect extensive structured table recognition datasets. Leveraging the advanced text processing prowess of GPT-4, we develop the TVG (TableQA with Vision Grounding) dataset, which not only provides text-based Question Answering (QA) pairs but also incorporates precise vision grounding for these pairs. Our approach demonstrates substantial advancements in KIE performance, achieving state-of-the-art results on publicly available datasets such as CORD, SROIE, and DocVQA. The code will also be made publicly available.<|reference_end|>
arxiv
@article{liu2024see, title={See then Tell: Enhancing Key Information Extraction with Vision Grounding}, author={Shuhang Liu, Zhenrong Zhang, Pengfei Hu, Jiefeng Ma, Jun Du, Qing Wang, Jianshu Zhang, Chenyu Liu}, journal={arXiv preprint arXiv:2409.19573}, year={2024}, archivePrefix={arXiv}, eprint={2409.19573}, primaryClass={cs.CV cs.AI} }
liu2024see
arxiv-663220
2409.19574
The Devil is in the Sources! Knowledge Enhanced Cross-Domain Recommendation in an Information Bottleneck Perspective
<|reference_start|>The Devil is in the Sources! Knowledge Enhanced Cross-Domain Recommendation in an Information Bottleneck Perspective: Cross-domain Recommendation (CDR) aims to alleviate the data sparsity and cold-start problems in traditional recommender systems by leveraging knowledge from an informative source domain. However, previously proposed CDR models rest on the imprudent assumption that all information from the source domain contributes equally to the target domain, neglecting the evil part that is completely irrelevant to users' intrinsic interest. To address this concern, in this paper, we propose a novel knowledge-enhanced cross-domain recommendation framework named CoTrans, which remolds the core procedures of CDR models with: Compression of the knowledge from the source domain and Transfer of the purity to the target domain. Specifically, following the theory of Graph Information Bottleneck, CoTrans first compresses the source behaviors with the perception of information from the target domain. Then, to preserve all the important information for the CDR task, the feedback signals from both domains are utilized to promote the effectiveness of the transfer procedure. Additionally, a knowledge-enhanced encoder is employed to narrow gaps caused by non-overlapping items across separate domains. Comprehensive experiments on three widely used cross-domain datasets demonstrate that CoTrans significantly outperforms both single-domain and state-of-the-art cross-domain recommendation approaches.<|reference_end|>
arxiv
@article{hu2024the, title={The Devil is in the Sources! Knowledge Enhanced Cross-Domain Recommendation in an Information Bottleneck Perspective}, author={Binbin Hu, Weifan Wang, Hanshu Wang, Ziqi Liu, Bin Shen, Yong He, Jiawei Chen}, journal={arXiv preprint arXiv:2409.19574}, year={2024}, archivePrefix={arXiv}, eprint={2409.19574}, primaryClass={cs.IR} }
hu2024the
arxiv-663221
2409.19575
Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective
<|reference_start|>Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective: In the field of spoken language processing, audio-visual speech processing is receiving increasing research attention. Key components of this research include tasks such as lip reading, audio-visual speech recognition, and visual-to-speech synthesis. Although significant success has been achieved, theoretical analysis is still insufficient for audio-visual tasks. This paper presents a quantitative analysis based on information theory, focusing on information intersection between different modalities. Our results show that this analysis is valuable for understanding the difficulties of audio-visual processing tasks as well as the benefits that could be obtained by modality integration.<|reference_end|>
arxiv
@article{chen2024quantitative, title={Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective}, author={Chen Chen, Xiaolou Li, Zehua Liu, Lantian Li, Dong Wang}, journal={arXiv preprint arXiv:2409.19575}, year={2024}, archivePrefix={arXiv}, eprint={2409.19575}, primaryClass={cs.SD cs.CL cs.MM eess.AS} }
chen2024quantitative
arxiv-663222
2409.19576
Online and Offline Algorithms for Counting Distinct Closed Factors via Sliding Suffix Trees
<|reference_start|>Online and Offline Algorithms for Counting Distinct Closed Factors via Sliding Suffix Trees: A string is said to be closed if its length is one, or if it has a non-empty factor that occurs both as a prefix and as a suffix of the string, but does not occur elsewhere. The notion of closed words was introduced by [Fici, WORDS 2011]. Recently, the maximum number of distinct closed factors occurring in a string was investigated by [Parshina and Puzynina, Theor. Comput. Sci. 2024], and an asymptotic tight bound was proved. In this paper, we propose two algorithms to count the distinct closed factors in a string $T$ of length $n$ over an alphabet of size $\sigma$. The first algorithm runs in $O(n \log \sigma)$ time using $O(n)$ space for string $T$ given in an online manner. The second algorithm runs in $O(n)$ time using $O(n)$ space for string $T$ given in an offline manner. Both algorithms utilize suffix trees for sliding windows.<|reference_end|>
arxiv
@article{mieno2024online, title={Online and Offline Algorithms for Counting Distinct Closed Factors via Sliding Suffix Trees}, author={Takuya Mieno, Shun Takahashi, Kazuhisa Seto, and Takashi Horiyama}, journal={arXiv preprint arXiv:2409.19576}, year={2024}, archivePrefix={arXiv}, eprint={2409.19576}, primaryClass={cs.DS} }
mieno2024online
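The definition of a closed string in the abstract above translates directly into a brute-force reference implementation, useful for checking the quantity the paper's suffix-tree algorithms compute efficiently; a short Python sketch:

```python
def is_closed(w: str) -> bool:
    """A string is closed if |w| == 1 or some non-empty border of w
    (a factor that is both a prefix and a suffix) occurs nowhere else in w."""
    n = len(w)
    if n == 1:
        return True
    for k in range(n - 1, 0, -1):                 # candidate border lengths
        if w[:k] == w[n - k:]:
            occ = [i for i in range(n - k + 1) if w.startswith(w[:k], i)]
            if occ == [0, n - k]:                 # only as prefix and suffix
                return True
    return False

def count_distinct_closed_factors(t: str) -> int:
    """Brute-force reference for the quantity the paper computes in
    O(n log sigma) online / O(n) offline time with sliding suffix trees."""
    factors = {t[i:j] for i in range(len(t)) for j in range(i + 1, len(t) + 1)}
    return sum(is_closed(w) for w in factors)

# count_distinct_closed_factors("abaab")  # -> 6: a, b, aa, aba, baab, abaab
```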
arxiv-663223
2409.19579
Leveraging Surgical Activity Grammar for Primary Intention Prediction in Laparoscopy Procedures
<|reference_start|>Leveraging Surgical Activity Grammar for Primary Intention Prediction in Laparoscopy Procedures: Surgical procedures are inherently complex and dynamic, with intricate dependencies and various execution paths. Accurate identification of the intentions behind critical actions, referred to as Primary Intentions (PIs), is crucial to understanding and planning the procedure. This paper presents a novel framework that advances PI recognition in instructional videos by combining top-down grammatical structure with bottom-up visual cues. The grammatical structure is based on a rich corpus of surgical procedures, offering a hierarchical perspective on surgical activities. A grammar parser, utilizing the surgical activity grammar, processes visual data obtained from laparoscopic images through surgical action detectors, ensuring a more precise interpretation of the visual information. Experimental results on the benchmark dataset demonstrate that our method outperforms existing surgical activity detectors that rely solely on visual features. Our research provides a promising foundation for developing advanced robotic surgical systems with enhanced planning and automation capabilities.<|reference_end|>
arxiv
@article{zhang2024leveraging, title={Leveraging Surgical Activity Grammar for Primary Intention Prediction in Laparoscopy Procedures}, author={Jie Zhang, Song Zhou, Yiwei Wang, Chidan Wan, Huan Zhao, Xiong Cai and Han Ding}, journal={arXiv preprint arXiv:2409.19579}, year={2024}, archivePrefix={arXiv}, eprint={2409.19579}, primaryClass={cs.RO} }
zhang2024leveraging
arxiv-663224
2409.19580
High Quality Human Image Animation using Regional Supervision and Motion Blur Condition
<|reference_start|>High Quality Human Image Animation using Regional Supervision and Motion Blur Condition: Recent advances in video diffusion models have enabled realistic and controllable human image animation with temporal coherence. Although existing methods generate reasonable results, they often overlook the need for regional supervision in crucial areas such as the face and hands, and neglect explicit modeling of motion blur, leading to unrealistic low-quality synthesis. To address these limitations, we first leverage regional supervision for detailed regions to enhance face and hand faithfulness. Second, we model motion blur explicitly to further improve appearance quality. Third, we explore novel training strategies for high-resolution human animation to improve overall fidelity. Experimental results demonstrate that our proposed method outperforms state-of-the-art approaches, achieving significant improvements over the strongest baseline of more than 21.0% and 57.4% in terms of reconstruction precision (L1) and perceptual quality (FVD) on the HumanDance dataset. Code and model will be made available.<|reference_end|>
arxiv
@article{xu2024high, title={High Quality Human Image Animation using Regional Supervision and Motion Blur Condition}, author={Zhongcong Xu, Chaoyue Song, Guoxian Song, Jianfeng Zhang, Jun Hao Liew, Hongyi Xu, You Xie, Linjie Luo, Guosheng Lin, Jiashi Feng, Mike Zheng Shou}, journal={arXiv preprint arXiv:2409.19580}, year={2024}, archivePrefix={arXiv}, eprint={2409.19580}, primaryClass={cs.CV} }
xu2024high
arxiv-663225
2409.19581
DiMB-RE: Mining the Scientific Literature for Diet-Microbiome Associations
<|reference_start|>DiMB-RE: Mining the Scientific Literature for Diet-Microbiome Associations: Motivation: The gut microbiota has recently emerged as a key factor that underpins certain connections between diet and human health. A tremendous amount of knowledge has been amassed from experimental studies on diet, human metabolism and microbiome. However, this evidence remains mostly buried in scientific publications, and biomedical literature mining in this domain remains scarce. We developed DiMB-RE, a comprehensive corpus annotated with 15 entity types (e.g., Nutrient, Microorganism) and 13 relation types (e.g., increases, improves) capturing diet-microbiome associations. We also trained and evaluated state-of-the-art natural language processing (NLP) models for named entity, trigger, and relation extraction as well as factuality detection using DiMB-RE. Results: DiMB-RE consists of 14,450 entities and 4,206 relationships from 165 articles. While NLP models performed reasonably well for named entity recognition (0.760 F$_{1}$), end-to-end relation extraction performance was modest (0.356 F$_{1}$), partly due to missed entities and triggers as well as cross-sentence relations. Conclusions: To our knowledge, DiMB-RE is the largest and most diverse dataset focusing on diet-microbiome interactions. It can serve as a benchmark corpus for biomedical literature mining. Availability: DiMB-RE and the NLP models are available at https://github.com/ScienceNLP-Lab/DiMB-RE.<|reference_end|>
arxiv
@article{hong2024dimb-re:, title={DiMB-RE: Mining the Scientific Literature for Diet-Microbiome Associations}, author={Gibong Hong, Veronica Hindle, Nadine M. Veasley, Hannah D. Holscher, Halil Kilicoglu}, journal={arXiv preprint arXiv:2409.19581}, year={2024}, archivePrefix={arXiv}, eprint={2409.19581}, primaryClass={cs.CL} }
hong2024dimb-re:
arxiv-663226
2409.19582
Self-supervised Auxiliary Learning for Texture and Model-based Hybrid Robust and Fair Featuring in Face Analysis
<|reference_start|>Self-supervised Auxiliary Learning for Texture and Model-based Hybrid Robust and Fair Featuring in Face Analysis: In this work, we explore Self-supervised Learning (SSL) as an auxiliary task to blend texture-based local descriptors into feature modelling for efficient face analysis. Combining a primary task and a self-supervised auxiliary task is beneficial for robust representation. Therefore, we use the SSL task of masked auto-encoding (MAE) as an auxiliary task to reconstruct texture features such as local patterns, along with the primary task, for robust and unbiased face analysis. We tested our hypothesis on three major paradigms of face analysis: face attribute analysis, face-based emotion analysis, and deepfake detection. Our experimental results show that a better feature representation can be gleaned from our proposed model for fair and unbiased face analysis.<|reference_end|>
arxiv
@article{reddy2024self-supervised, title={Self-supervised Auxiliary Learning for Texture and Model-based Hybrid Robust and Fair Featuring in Face Analysis}, author={Shukesh Reddy, Nishit Poddar, Srijan Das, and Abhijit Das}, journal={arXiv preprint arXiv:2409.19582}, year={2024}, archivePrefix={arXiv}, eprint={2409.19582}, primaryClass={cs.CV} }
reddy2024self-supervised
arxiv-663227
2409.19583
Brain Tumor Classification on MRI in Light of Molecular Markers
<|reference_start|>Brain Tumor Classification on MRI in Light of Molecular Markers: In research findings, co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas. The ability to predict 1p/19q status is critical for treatment planning and patient follow-up. This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection. Although public networks such as ResNet and AlexNet can effectively diagnose brain cancers using transfer learning, such models include quite a few weights that have nothing to do with medical images. As a result, the diagnostic results of the transfer learning model are unreliable. To deal with the problem of trustworthiness, we create the model from the ground up, rather than depending on a pre-trained model. To enable flexibility, we combine convolution stacking with a dropout and fully connected operation, which improved performance by reducing overfitting. During model training, we also augment the given dataset and inject Gaussian noise. We use three-fold cross-validation to select the best model. Compared with fine-tuned pre-trained models such as InceptionV3, VGG16, and MobileNetV2, our model produces better results. On a validation set of 125 codeletion vs. 31 not codeletion images, the proposed network achieves a 96.37\% F1-score, 97.46\% precision, and 96.34\% recall when classifying 1p/19q codeletion and not codeletion images.<|reference_end|>
arxiv
@article{liu2024brain, title={Brain Tumor Classification on MRI in Light of Molecular Markers}, author={Jun Liu, Geng Yuan, Weihao Zeng, Hao Tang, Wenbin Zhang, Xue Lin, XiaoLin Xu, Dong Huang and Yanzhi Wang}, journal={Springer Nature - Book Series: Transactions on Computational Science & Computational Intelligence, 2022}, year={2024}, archivePrefix={arXiv}, eprint={2409.19583}, primaryClass={eess.IV cs.CV cs.LG q-bio.QM} }
liu2024brain
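The architecture described above (from-scratch convolution stacking, dropout, a fully connected head, and Gaussian-noise injection during training) maps onto a compact PyTorch sketch; layer sizes and the noise scale are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SmallMRIClassifier(nn.Module):
    """From-scratch CNN in the spirit described: stacked convolutions,
    dropout, and a fully connected head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5), nn.Linear(128, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gaussian-noise injection during training, as the abstract describes
        if self.training:
            x = x + 0.01 * torch.randn_like(x)
        return self.head(self.features(x))
```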
arxiv-663228
2409.19584
Adaptive sampling accelerates the hybrid deviational particle simulations
<|reference_start|>Adaptive sampling accelerates the hybrid deviational particle simulations: To avoid ineffective collisions between the equilibrium states, the hybrid method with deviational particles (HDP) has been proposed to integrate the Fokker-Planck-Landau system, while leaving open the issue of sampling deviational particles from the high-dimensional source term. In this paper, we present an adaptive sampling (AS) strategy that first adaptively reconstructs a piecewise constant approximation of the source term based on sequential clustering via discrepancy estimation, and then samples deviational particles directly from the resulting adaptive piecewise constant function without rejection. The mixture discrepancy, which can be easily calculated thanks to its explicit analytical expression, is employed as a measure of uniformity instead of the star discrepancy, whose calculation is NP-hard. The resulting method, dubbed the HDP-AS method, runs approximately ten times faster than the HDP method while keeping the same accuracy on the Landau damping, two-stream instability, bump-on-tail, and Rosenbluth's test problems.<|reference_end|>
arxiv
@article{lei2024adaptive, title={Adaptive sampling accelerates the hybrid deviational particle simulations}, author={Zhengyang Lei, Sihong Shao}, journal={arXiv preprint arXiv:2409.19584}, year={2024}, archivePrefix={arXiv}, eprint={2409.19584}, primaryClass={physics.comp-ph cs.NA math.NA} }
lei2024adaptive
arxiv-663229
2409.19585
Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions
<|reference_start|>Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions: Developing a robust speech emotion recognition (SER) system in noisy conditions faces challenges posed by different noise properties. Most previous studies have not considered the impact of human speech noise, thus limiting the application scope of SER. In this paper, we propose a novel two-stage framework for the problem by cascading a target speaker extraction (TSE) method and SER. We first train a TSE model to extract the speech of the target speaker from a mixture. Then, in the second stage, we utilize the extracted speech for SER training. Additionally, we explore joint training of the TSE and SER models in the second stage. Our developed system achieves a 14.33% improvement in unweighted accuracy (UA) compared to a baseline without the TSE method, demonstrating the effectiveness of our framework in mitigating the impact of human speech noise. Moreover, we conduct experiments considering speaker gender, showing that our framework performs particularly well on different-gender mixtures.<|reference_end|>
arxiv
@article{mi2024two-stage, title={Two-stage Framework for Robust Speech Emotion Recognition Using Target Speaker Extraction in Human Speech Noise Conditions}, author={Jinyi Mi, Xiaohan Shi, Ding Ma, Jiajun He, Takuya Fujimura and Tomoki Toda}, journal={arXiv preprint arXiv:2409.19585}, year={2024}, archivePrefix={arXiv}, eprint={2409.19585}, primaryClass={cs.SD cs.CL eess.AS} }
mi2024two-stage
arxiv-663230
2409.19587
Efficient Quality Control of Whole Slide Pathology Images with Human-in-the-loop Training
<|reference_start|>Efficient Quality Control of Whole Slide Pathology Images with Human-in-the-loop Training: Histopathology whole slide images (WSIs) are being widely used to develop deep learning-based diagnostic solutions, especially for precision oncology. Most of these diagnostic tools are vulnerable to biases and impurities in the training and test data, which can lead to inaccurate diagnoses. For instance, WSIs contain multiple types of tissue regions, at least some of which might not be relevant to the diagnosis. We introduce HistoROI, a robust yet lightweight deep learning-based classifier to segregate WSIs into six broad tissue regions -- epithelium, stroma, lymphocytes, adipose, artifacts, and miscellaneous. HistoROI is trained using a novel human-in-the-loop and active learning paradigm that ensures variation in the training data for labeling-efficient generalization. HistoROI consistently performs well across multiple organs, despite being trained on only a single dataset, demonstrating strong generalization. Further, we have examined the utility of HistoROI in improving the performance of downstream deep learning-based tasks using the CAMELYON breast cancer lymph node and TCGA lung cancer datasets. For the former dataset, the area under the receiver operating characteristic curve (AUC) for metastasis versus normal tissue of a neural network trained using weakly supervised learning increased from 0.88 to 0.92 by filtering the data using HistoROI. Similarly, the AUC increased from 0.88 to 0.93 for the classification between adenocarcinoma and squamous cell carcinoma on the lung cancer dataset. We also found that HistoROI improves upon HistoQC for artifact detection on a test dataset of 93 annotated WSIs. The limitations of the proposed model are analyzed, and potential extensions are also discussed.<|reference_end|>
arxiv
@article{patil2024efficient, title={Efficient Quality Control of Whole Slide Pathology Images with Human-in-the-loop Training}, author={Abhijeet Patil, Harsh Diwakar, Jay Sawant, Nikhil Cherian Kurian, Subhash Yadav, Swapnil Rane, Tripti Bameta, Amit Sethi}, journal={Journal of Pathology Informatics, 2023}, year={2024}, doi={10.1016/j.jpi.2023.100306}, archivePrefix={arXiv}, eprint={2409.19587}, primaryClass={eess.IV cs.CV} }
patil2024efficient
arxiv-663231
2409.19589
Effective Diffusion Transformer Architecture for Image Super-Resolution
<|reference_start|>Effective Diffusion Transformer Architecture for Image Super-Resolution: Recent advances indicate that diffusion models hold great promise in image super-resolution. While the latest methods are primarily based on latent diffusion models with convolutional neural networks, there are few attempts to explore transformers, which have demonstrated remarkable performance in image generation. In this work, we design an effective diffusion transformer for image super-resolution (DiT-SR) that achieves the visual quality of prior-based methods, but through a training-from-scratch manner. In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks across different stages. The former facilitates multi-scale hierarchical feature extraction, while the latter reallocates the computational resources to critical layers to further enhance performance. Moreover, we thoroughly analyze the limitation of the widely used AdaLN, and present a frequency-adaptive time-step conditioning module, enhancing the model's capacity to process distinct frequency information at different time steps. Extensive experiments demonstrate that DiT-SR outperforms the existing training-from-scratch diffusion-based SR methods significantly, and even beats some of the prior-based methods on pretrained Stable Diffusion, proving the superiority of diffusion transformer in image super-resolution.<|reference_end|>
arxiv
@article{cheng2024effective, title={Effective Diffusion Transformer Architecture for Image Super-Resolution}, author={Kun Cheng, Lei Yu, Zhijun Tu, Xiao He, Liyu Chen, Yong Guo, Mingrui Zhu, Nannan Wang, Xinbo Gao, Jie Hu}, journal={arXiv preprint arXiv:2409.19589}, year={2024}, archivePrefix={arXiv}, eprint={2409.19589}, primaryClass={cs.CV} }
cheng2024effective
arxiv-663232
2409.19590
RoboNurse-VLA: Robotic Scrub Nurse System based on Vision-Language-Action Model
<|reference_start|>RoboNurse-VLA: Robotic Scrub Nurse System based on Vision-Language-Action Model: In modern healthcare, the demand for autonomous robotic assistants has grown significantly, particularly in the operating room, where surgical tasks require precision and reliability. Robotic scrub nurses have emerged as a promising solution to improve efficiency and reduce human error during surgery. However, challenges remain in terms of accurately grasping and handing over surgical instruments, especially when dealing with complex or difficult objects in dynamic environments. In this work, we introduce a novel robotic scrub nurse system, RoboNurse-VLA, built on a Vision-Language-Action (VLA) model by integrating the Segment Anything Model 2 (SAM 2) and the Llama 2 language model. The proposed RoboNurse-VLA system enables highly precise grasping and handover of surgical instruments in real-time based on voice commands from the surgeon. Leveraging state-of-the-art vision and language models, the system can address key challenges for object detection, pose optimization, and the handling of complex and difficult-to-grasp instruments. Through extensive evaluations, RoboNurse-VLA demonstrates superior performance compared to existing models, achieving high success rates in surgical instrument handovers, even with unseen tools and challenging items. This work presents a significant step forward in autonomous surgical assistance, showcasing the potential of integrating VLA models for real-world medical applications. More details can be found at https://robonurse-vla.github.io.<|reference_end|>
arxiv
@article{li2024robonurse-vla:, title={RoboNurse-VLA: Robotic Scrub Nurse System based on Vision-Language-Action Model}, author={Shunlei Li, Jin Wang, Rui Dai, Wanyu Ma, Wing Yin Ng, Yingbai Hu, and Zheng Li}, journal={arXiv preprint arXiv:2409.19590}, year={2024}, archivePrefix={arXiv}, eprint={2409.19590}, primaryClass={cs.RO} }
li2024robonurse-vla:
arxiv-663233
2409.19592
DiffCP: Ultra-Low Bit Collaborative Perception via Diffusion Model
<|reference_start|>DiffCP: Ultra-Low Bit Collaborative Perception via Diffusion Model: Collaborative perception (CP) is emerging as a promising solution to the inherent limitations of stand-alone intelligence. However, current wireless communication systems are unable to support feature-level and raw-level collaborative algorithms due to their enormous bandwidth demands. In this paper, we propose DiffCP, a novel CP paradigm that utilizes a specialized diffusion model to efficiently compress the sensing information of collaborators. By incorporating both geometric and semantic conditions into the generative model, DiffCP enables feature-level collaboration with an ultra-low communication cost, advancing the practical implementation of CP systems. This paradigm can be seamlessly integrated into existing CP algorithms to enhance a wide range of downstream tasks. Through extensive experimentation, we investigate the trade-offs between communication, computation, and performance. Numerical results demonstrate that DiffCP can significantly reduce communication costs by 14.5-fold while maintaining the same performance as the state-of-the-art algorithm.<|reference_end|>
arxiv
@article{mao2024diffcp:, title={DiffCP: Ultra-Low Bit Collaborative Perception via Diffusion Model}, author={Ruiqing Mao, Haotian Wu, Yukuan Jia, Zhaojun Nan, Yuxuan Sun, Sheng Zhou, Deniz G\"und\"uz and Zhisheng Niu}, journal={arXiv preprint arXiv:2409.19592}, year={2024}, archivePrefix={arXiv}, eprint={2409.19592}, primaryClass={cs.CV cs.LG cs.MA} }
mao2024diffcp:
arxiv-663234
2409.19594
MASKDROID: Robust Android Malware Detection with Masked Graph Representations
<|reference_start|>MASKDROID: Robust Android Malware Detection with Masked Graph Representations: Android malware attacks have posed a severe threat to mobile users, creating a significant demand for automated detection systems. Among the various tools employed in malware detection, graph representations (e.g., function call graphs) have played a pivotal role in characterizing the behaviors of Android apps. However, despite achieving impressive performance in malware detection, current state-of-the-art graph-based malware detectors are vulnerable to adversarial examples. These adversarial examples are meticulously crafted by introducing specific perturbations to normal malicious inputs. To defend against adversarial attacks, existing defensive mechanisms are typically supplementary additions to detectors and exhibit significant limitations, often relying on prior knowledge of adversarial examples and failing to defend against unseen types of attacks effectively. In this paper, we propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware and remarkable robustness against adversarial attacks. Specifically, we introduce a masking mechanism into the Graph Neural Network (GNN) based framework, forcing MASKDROID to recover the whole input graph using a small portion (e.g., 20%) of randomly selected nodes. This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks. While capturing stable malicious semantics in the form of dependencies inside the graph structures, we further employ a contrastive module to encourage MASKDROID to learn more compact representations for both the benign and malicious classes to boost its discriminative power in detecting malware from benign apps and adversarial examples.<|reference_end|>
arxiv
@article{zheng2024maskdroid:, title={MASKDROID: Robust Android Malware Detection with Masked Graph Representations}, author={Jingnan Zheng, Jiaohao Liu, An Zhang, Jun Zeng, Ziqi Yang, Zhenkai Liang, Tat-Seng Chua}, journal={IEEE/ACM Automated Software Engineering Conference 2024}, year={2024}, doi={10.1145/3691620.3695008}, archivePrefix={arXiv}, eprint={2409.19594}, primaryClass={cs.CR cs.AI cs.SE} }
zheng2024maskdroid:
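A minimal sketch of the masking-and-recovery objective described above, assuming a torch-style GNN encoder/decoder that takes node features and an adjacency matrix; the 20% visible ratio follows the abstract, while shapes and the loss form are illustrative only.

import torch

def masked_recovery_loss(x, adj, encoder, decoder, visible_ratio=0.2):
    # x: [N, D] node features of a function call graph; adj: [N, N]
    visible = torch.rand(x.size(0), device=x.device) < visible_ratio  # keep ~20%
    x_in = x.clone()
    x_in[~visible] = 0.0                    # mask out the remaining nodes
    z = encoder(x_in, adj)                  # node embeddings
    x_rec = decoder(z, adj)                 # recover the full feature matrix
    # reconstruction is scored only on the masked nodes
    return ((x_rec[~visible] - x[~visible]) ** 2).mean()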
arxiv-663235
2409.19595
Solution for Temporal Sound Localisation Task of ECCV Second Perception Test Challenge 2024
<|reference_start|>Solution for Temporal Sound Localisation Task of ECCV Second Perception Test Challenge 2024: This report proposes an improved method for the Temporal Sound Localisation (TSL) task, which localizes and classifies the sound events occurring in the video according to a predefined set of sound classes. The champion solution from last year's first competition explored TSL by fusing audio and video modalities with equal weights. Considering that the TSL task aims to localize sound events, we conducted experiments that demonstrated the superiority of sound features (Section 3). Based on our findings, to enhance audio modality features, we employ various models to extract audio features, such as the InterVideo, CaVMAE, and VideoMAE models. Our approach ranks first in the final test with a score of 0.4925.<|reference_end|>
arxiv
@article{gu2024solution, title={Solution for Temporal Sound Localisation Task of ECCV Second Perception Test Challenge 2024}, author={Haowei Gu, Weihao Zhu, Yang Yang}, journal={arXiv preprint arXiv:2409.19595}, year={2024}, archivePrefix={arXiv}, eprint={2409.19595}, primaryClass={cs.SD cs.LG eess.AS} }
gu2024solution
arxiv-663236
2409.19597
CELLmap: Enhancing LiDAR SLAM through Elastic and Lightweight Spherical Map Representation
<|reference_start|>CELLmap: Enhancing LiDAR SLAM through Elastic and Lightweight Spherical Map Representation: SLAM is a fundamental capability of unmanned systems, with LiDAR-based SLAM gaining widespread adoption due to its high precision. Current SLAM systems can achieve centimeter-level accuracy within a short period. However, several challenges remain in large-scale mapping tasks, including significant storage requirements and the difficulty of reusing the constructed maps. To address this, we first design an elastic and lightweight map representation called CELLmap, composed of several CELLs, each representing the local map at the corresponding location. Then, we design a general backend including a CELL-based bidirectional registration module and a loop closure detection module to improve global map consistency. Our experiments have demonstrated that CELLmap can represent the precise geometric structure of large-scale maps of the KITTI dataset using only about 60 MB. Additionally, our general backend achieves up to a 26.88% improvement over various LiDAR odometry methods.<|reference_end|>
arxiv
@article{duan2024cellmap:, title={CELLmap: Enhancing LiDAR SLAM through Elastic and Lightweight Spherical Map Representation}, author={Yifan Duan, Xinran Zhang, Yao Li, Guoliang You, Xiaomeng Chu, Jianmin Ji, and Yanyong Zhang}, journal={arXiv preprint arXiv:2409.19597}, year={2024}, archivePrefix={arXiv}, eprint={2409.19597}, primaryClass={cs.RO} }
duan2024cellmap:
arxiv-663237
2409.19598
Upper-body musculoskeletal pain and eye strain among language professionals: a descriptive, cross-sectional study
<|reference_start|>Upper-body musculoskeletal pain and eye strain among language professionals: a descriptive, cross-sectional study: Language professionals spend long hours at the computer, which may have an impact on their short- and long-term physical health. In 2023, I ran a survey to investigate workstation ergonomics, eye and upper-body problems, and self-reported strategies that alleviate those problems among language professionals who work sitting or standing at a desk. Of the 791 respondents, about one third reported eye problems and over two-thirds reported upper-body aches or pains in the past 12 months, with significantly higher upper-body pain prevalence among females than males, and also among younger respondents than older ones. While the pain prevalence rate in the survey was similar to figures published in the literature, as was the sex risk factor, the association of higher pain prevalence among younger people contrasted with other studies that have found increasing age to be a risk factor for pain. In this article I share the survey results in detail and discuss possible explanations for the findings.<|reference_end|>
arxiv
@article{goldsmith2024upper-body, title={Upper-body musculoskeletal pain and eye strain among language professionals: a descriptive, cross-sectional study}, author={Emma Goldsmith}, journal={arXiv preprint arXiv:2409.19598}, year={2024}, archivePrefix={arXiv}, eprint={2409.19598}, primaryClass={cs.HC} }
goldsmith2024upper-body
arxiv-663238
2409.19599
Gradient is All You Need: Gradient-Based Attention Fusion for Infrared Small Target Detection
<|reference_start|>Gradient is All You Need: Gradient-Based Attention Fusion for Infrared Small Target Detection: Infrared small target detection (IRSTD) is widely used in civilian and military applications. However, IRSTD encounters several challenges, including the tendency for small and dim targets to be obscured by complex backgrounds. To address this issue, we propose the Gradient Network (GaNet), which aims to extract and preserve edge and gradient information of small targets. GaNet employs the Gradient Transformer (GradFormer) module, simulating central difference convolutions (CDC) to extract and integrate gradient features with deeper features. Furthermore, we propose a global feature extraction model (GFEM) that offers a comprehensive perspective to prevent the network from focusing solely on details while neglecting the background information. We compare the network with state-of-the-art (SOTA) approaches, and the results demonstrate that our method performs effectively. Our source code is available at https://github.com/greekinRoma/Gradient-Transformer.<|reference_end|>
arxiv
@article{hu2024gradient, title={Gradient is All You Need: Gradient-Based Attention Fusion for Infrared Small Target Detection}, author={Chen Hu, Yian Huang, Kexuan Li, Luping Zhang, Yiming Zhu, Yufei Peng, Tian Pu, and Zhenming Peng}, journal={arXiv preprint arXiv:2409.19599}, year={2024}, archivePrefix={arXiv}, eprint={2409.19599}, primaryClass={cs.CV} }
hu2024gradient
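Since GradFormer is described as simulating central difference convolutions (CDC), here is the standard CDC formulation as a hedged reference point; the paper's module may differ in detail. The key identity is that convolving over (neighbor minus center) differences equals a vanilla convolution minus a 1x1 convolution whose kernel is the spatial sum of the weights.

import torch
import torch.nn.functional as F

def central_difference_conv(x, weight, theta=0.7):
    # x: [B, C_in, H, W]; weight: [C_out, C_in, 3, 3]; theta blends vanilla
    # convolution (theta=0) with pure central differences (theta=1)
    out = F.conv2d(x, weight, padding=1)
    w_sum = weight.sum(dim=(2, 3), keepdim=True)   # [C_out, C_in, 1, 1]
    return out - theta * F.conv2d(x, w_sum)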
arxiv-663239
2409.19600
An Unbiased Risk Estimator for Partial Label Learning with Augmented Classes
<|reference_start|>An Unbiased Risk Estimator for Partial Label Learning with Augmented Classes: Partial Label Learning (PLL) is a typical weakly supervised learning task, which assumes each training instance is annotated with a set of candidate labels containing the ground-truth label. Recent PLL methods adopt identification-based disambiguation to alleviate the influence of false positive labels and achieve promising performance. However, they require all classes in the test set to have appeared in the training set, ignoring the fact that new classes will keep emerging in real applications. To address this issue, in this paper, we focus on the problem of Partial Label Learning with Augmented Class (PLLAC), where one or more augmented classes are not visible in the training stage but appear in the inference stage. Specifically, we propose an unbiased risk estimator with theoretical guarantees for PLLAC, which estimates the distribution of augmented classes by differentiating the distribution of known classes from unlabeled data and can be equipped with arbitrary PLL loss functions. Besides, we provide a theoretical analysis of the estimation error bound of the estimator, which guarantees the convergence of the empirical risk minimizer to the true risk minimizer as the number of training data tends to infinity. Furthermore, we add a risk-penalty regularization term in the optimization objective to alleviate the influence of the over-fitting issue caused by negative empirical risk. Extensive experiments on benchmark, UCI and real-world datasets demonstrate the effectiveness of the proposed approach.<|reference_end|>
arxiv
@article{hu2024an, title={An Unbiased Risk Estimator for Partial Label Learning with Augmented Classes}, author={Jiayu Hu, Senlin Shu, Beibei Li, Tao Xiang, and Zhongshi He}, journal={arXiv preprint arXiv:2409.19600}, year={2024}, archivePrefix={arXiv}, eprint={2409.19600}, primaryClass={cs.LG cs.AI stat.ML} }
hu2024an
arxiv-663240
2409.19601
Infighting in the Dark: Multi-Labels Backdoor Attack in Federated Learning
<|reference_start|>Infighting in the Dark: Multi-Labels Backdoor Attack in Federated Learning: Federated Learning (FL) has been demonstrated to be vulnerable to backdoor attacks. As a decentralized machine learning framework, most research focuses on the Single-Label Backdoor Attack (SBA), where adversaries share the same target but neglect the fact that adversaries may be unaware of each other's existence and hold different targets, i.e., the Multi-Label Backdoor Attack (MBA). Unfortunately, directly applying prior work to the MBA is not only ineffective, but the attacks may also mitigate one another. In this paper, we first investigate the limitations of applying previous work to the MBA. Subsequently, we propose M2M, a novel multi-label backdoor attack in federated learning (FL), which adversarially adapts the backdoor trigger to ensure that the backdoored sample is processed as a clean target sample in the global model. Our key intuition is to establish a connection between the trigger pattern and the target class distribution, allowing different triggers to activate backdoors along clean activation paths of the target class without concerns about potential mitigation. Extensive evaluations comprehensively demonstrate that M2M outperforms various state-of-the-art attack methods. This work aims to alert researchers and developers to this potential threat and to inspire the design of effective detection methods. Our code will be made available later.<|reference_end|>
arxiv
@article{li2024infighting, title={Infighting in the Dark: Multi-Labels Backdoor Attack in Federated Learning}, author={Ye Li, Yanchao Zhao, Chengcheng Zhu and Jiale Zhang}, journal={arXiv preprint arXiv:2409.19601}, year={2024}, archivePrefix={arXiv}, eprint={2409.19601}, primaryClass={cs.CR} }
li2024infighting
arxiv-663241
2409.19603
One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos
<|reference_start|>One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos: We introduce VideoLISA, a video-based multimodal large language model designed to tackle the problem of language-instructed reasoning segmentation in videos. Leveraging the reasoning capabilities and world knowledge of large language models, and augmented by the Segment Anything Model, VideoLISA generates temporally consistent segmentation masks in videos based on language instructions. Existing image-based methods, such as LISA, struggle with video tasks due to the additional temporal dimension, which requires temporal dynamic understanding and consistent segmentation across frames. VideoLISA addresses these challenges by integrating a Sparse Dense Sampling strategy into the video-LLM, which balances temporal context and spatial detail within computational constraints. Additionally, we propose a One-Token-Seg-All approach using a specially designed <TRK> token, enabling the model to segment and track objects across multiple frames. Extensive evaluations on diverse benchmarks, including our newly introduced ReasonVOS benchmark, demonstrate VideoLISA's superior performance in video object segmentation tasks involving complex reasoning, temporal understanding, and object tracking. While optimized for videos, VideoLISA also shows promising generalization to image segmentation, revealing its potential as a unified foundation model for language-instructed object segmentation. Code and model will be available at: https://github.com/showlab/VideoLISA.<|reference_end|>
arxiv
@article{bai2024one, title={One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos}, author={Zechen Bai, Tong He, Haiyang Mei, Pichao Wang, Ziteng Gao, Joya Chen, Lei Liu, Zheng Zhang, Mike Zheng Shou}, journal={arXiv preprint arXiv:2409.19603}, year={2024}, archivePrefix={arXiv}, eprint={2409.19603}, primaryClass={cs.CV cs.AI} }
bai2024one
arxiv-663242
2409.19605
The Crucial Role of Samplers in Online Direct Preference Optimization
<|reference_start|>The Crucial Role of Samplers in Online Direct Preference Optimization: Direct Preference Optimization (DPO) has emerged as a stable, scalable, and efficient solution for language model alignment. Despite its empirical success, the $\textit{optimization}$ properties, particularly the impact of samplers on its convergence rates, remain underexplored. In this paper, we provide a rigorous analysis of DPO's $\textit{convergence rates}$ with different sampling strategies under the exact gradient setting, revealing a surprising separation: uniform sampling achieves $\textit{linear}$ convergence, while our proposed online sampler achieves $\textit{quadratic}$ convergence. We further adapt the sampler to practical settings by incorporating posterior distributions and $\textit{logit mixing}$, demonstrating significant improvements over previous approaches. On Safe-RLHF dataset, our method exhibits a $4.5$% improvement over vanilla DPO and a $3.0$% improvement over on-policy DPO; on Iterative-Prompt, our approach outperforms vanilla DPO, on-policy DPO, and Hybrid GSHF by over $4.2$%. Our results not only offer insights into the theoretical standing of DPO but also pave the way for potential algorithm designs in the future.<|reference_end|>
arxiv
@article{shi2024the, title={The Crucial Role of Samplers in Online Direct Preference Optimization}, author={Ruizhe Shi, Runlong Zhou, Simon S. Du}, journal={arXiv preprint arXiv:2409.19605}, year={2024}, archivePrefix={arXiv}, eprint={2409.19605}, primaryClass={cs.LG cs.CL} }
shi2024the
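As a rough illustration of the logit-mixing idea mentioned above, one decoding step could mix current-policy and reference logits before sampling; the mixing rule below is a guess at the general mechanism, not the paper's exact sampler.

import torch

def logit_mixing_step(logits_policy, logits_ref, alpha=0.5, temperature=1.0):
    # logits_*: [B, V] next-token logits from the two models
    mixed = alpha * logits_policy + (1.0 - alpha) * logits_ref
    probs = torch.softmax(mixed / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)   # [B, 1] sampled tokens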
arxiv-663243
2409.19606
Hyper-Connections
<|reference_start|>Hyper-Connections: We present hyper-connections, a simple yet effective method that can serve as an alternative to residual connections. This approach specifically addresses common drawbacks observed in residual connection variants, such as the seesaw effect between gradient vanishing and representation collapse. Theoretically, hyper-connections allow the network to adjust the strength of connections between features at different depths and dynamically rearrange layers. We conduct experiments focusing on the pre-training of large language models, including dense and sparse models, where hyper-connections show significant performance improvements over residual connections. Additional experiments conducted on vision tasks also demonstrate similar improvements. We anticipate that this method will be broadly applicable and beneficial across a wide range of AI problems.<|reference_end|>
arxiv
@article{zhu2024hyper-connections, title={Hyper-Connections}, author={Defa Zhu, Hongzhi Huang, Zihao Huang, Yutao Zeng, Yunyao Mao, Banggu Wu, Qiyang Min, Xun Zhou}, journal={arXiv preprint arXiv:2409.19606}, year={2024}, archivePrefix={arXiv}, eprint={2409.19606}, primaryClass={cs.LG cs.CL cs.CV cs.NE} }
zhu2024hyper-connections
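A minimal static sketch of the hyper-connection idea: keep n parallel residual streams, let each layer read a learned mix of them, and write its output back with learned strengths. The paper also describes dynamic (input-dependent) variants; all shapes and initializations below are illustrative assumptions.

import torch
import torch.nn as nn

class HyperConnection(nn.Module):
    def __init__(self, layer, n=4):
        super().__init__()
        self.layer = layer
        self.read = nn.Parameter(torch.full((n,), 1.0 / n))  # streams -> layer input
        self.write = nn.Parameter(torch.ones(n))             # layer output -> streams
        self.mix = nn.Parameter(torch.eye(n))                # stream-to-stream weights

    def forward(self, streams):                              # streams: [n, B, T, D]
        x = torch.einsum("n,nbtd->btd", self.read, streams)  # read a learned mix
        y = self.layer(x)
        streams = torch.einsum("nm,mbtd->nbtd", self.mix, streams)
        return streams + self.write[:, None, None, None] * y # write back per stream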
arxiv-663244
2409.19608
Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model
<|reference_start|>Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model: Spatio-temporal (ST) prediction has garnered significant attention in the earth sciences, in tasks such as meteorological prediction and human mobility perception. However, the scarcity of data coupled with the high expenses involved in sensor deployment results in notable data imbalances. Furthermore, models that are excessively customized and devoid of causal connections further undermine generalizability and interpretability. To this end, we establish a causal framework for ST predictions, termed CaPaint, which aims to identify causal regions in the data and endow the model with causal reasoning ability in a two-stage process. Going beyond this process, we utilize back-door adjustment to specifically address the sub-regions identified as non-causal in the upstream phase. Specifically, we employ a novel image inpainting technique. By using a fine-tuned unconditional Diffusion Probabilistic Model (DDPM) as the generative prior, we in-fill the masks defined as environmental parts, offering the possibility of reliable extrapolation for potential data distributions. CaPaint overcomes the high-complexity dilemma of optimal ST causal discovery models by reducing the data generation complexity from exponential to quasi-linear levels. Extensive experiments conducted on five real-world ST benchmarks demonstrate that integrating the CaPaint concept allows models to achieve improvements ranging from 4.3% to 77.3%. Moreover, compared to traditional mainstream ST augmenters, CaPaint underscores the potential of diffusion models in ST enhancement, offering a novel paradigm for this field. Our project is available at https://anonymous.4open.science/r/12345-DFCC.<|reference_end|>
arxiv
@article{duan2024causal, title={Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model}, author={Yifan Duan, Jian Zhao, Pengcheng, Junyuan Mao, Hao Wu, Jingyu Xu, Shilong Wang, Caoyuan Ma, Kai Wang, Kun Wang, Xuelong Li}, journal={arXiv preprint arXiv:2409.19608}, year={2024}, archivePrefix={arXiv}, eprint={2409.19608}, primaryClass={cs.CV} }
duan2024causal
arxiv-663245
2409.19609
An Enhanced Semidefinite Relaxation Model Combined with Clique Graph Merging Strategy for Efficient AC Optimal Power Flow Solution
<|reference_start|>An Enhanced Semidefinite Relaxation Model Combined with Clique Graph Merging Strategy for Efficient AC Optimal Power Flow Solution: Semidefinite programming (SDP) is widely acknowledged as one of the most effective methods for deriving the tightest lower bounds of optimal power flow (OPF) problems. In this paper, an enhanced semidefinite relaxation model that integrates a tighter $\lambda$-based quadratic convex relaxation, valid inequalities, and optimality-based bound tightening algorithms derived in accordance with the branch thermal limit boundary surface into the SDP framework is presented to further tighten the lower bounds of the feasible region of OPF problems, effectively combining the advantages of these recent advancements. Additionally, the utilization of chordal decomposition in the complex matrix formulation of SDP can significantly accelerate the solution time. Notably, for the same SDP problem, different chordal decompositions can result in varying solution times. To address this problem, this paper proposes a clique graph merging strategy within the complex matrix SDP framework, which assesses clique sizes and the computational burden on interior-point solvers, reducing the need for hyperparameter tuning and further enhancing the solution efficiency. Finally, the proposed hybrid relaxation model is evaluated using MATPOWER and PGLib-OPF test cases, demonstrating its effectiveness in reducing the optimality gap and validating its computational performance on test cases with up to 13,659 nodes.<|reference_end|>
arxiv
@article{ruan2024an, title={An Enhanced Semidefinite Relaxation Model Combined with Clique Graph Merging Strategy for Efficient AC Optimal Power Flow Solution}, author={Zhaojun Ruan, and Libao Shi}, journal={arXiv preprint arXiv:2409.19609}, year={2024}, archivePrefix={arXiv}, eprint={2409.19609}, primaryClass={eess.SY cs.SY math.OC} }
ruan2024an
arxiv-663246
2409.19610
Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method
<|reference_start|>Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method: Integrating pretrained vision-language foundation models like CLIP into federated learning has attracted significant attention for enhancing generalization across diverse tasks. Typically, federated learning of vision-language models employs prompt learning to reduce communication and computational costs, i.e., prompt-based federated learning. However, there is limited theoretical analysis to understand the performance of prompt-based federated learning. In this work, we construct a theoretical analysis framework for prompt-based federated learning via feature learning theory. Specifically, we monitor the evolution of signal learning and noise memorization in prompt-based federated learning, demonstrating that performance can be assessed by the ratio of task-relevant to task-irrelevant coefficients. Furthermore, we draw an analogy between income and risk in portfolio optimization and the task-relevant and task-irrelevant terms in feature learning. Leveraging the insight from portfolio optimization that combining two independent assets maintains income while reducing risk, we introduce two prompts, a global prompt and a local prompt, to construct a prompt portfolio that balances generalization and personalization. Consequently, we show the performance advantage of the prompt portfolio and derive the optimal mixing coefficient. These theoretical claims are further supported by empirical experiments.<|reference_end|>
arxiv
@article{pan2024federated, title={Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method}, author={Bikang Pan, Wei Huang, Ye Shi}, journal={arXiv preprint arXiv:2409.19610}, year={2024}, archivePrefix={arXiv}, eprint={2409.19610}, primaryClass={cs.LG cs.CL cs.CV} }
pan2024federated
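The portfolio construction itself can be sketched as a convex combination of a shared global prompt and a client-local prompt prepended to the token embeddings; here alpha stands in for the optimal mixing coefficient the paper derives, treated simply as a hyperparameter.

import torch

def portfolio_prompt(global_prompt, local_prompt, alpha=0.5):
    # prompts: [P, D] learnable embeddings; alpha balances generalization
    # (global) against personalization (local)
    return alpha * global_prompt + (1.0 - alpha) * local_prompt

def prepend_prompt(prompt, token_embeds):
    # prompt: [P, D]; token_embeds: [B, T, D] -> [B, P+T, D]
    B = token_embeds.size(0)
    return torch.cat([prompt.unsqueeze(0).expand(B, -1, -1), token_embeds], dim=1)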
arxiv-663247
2409.19611
Learning Attentional Mixture of LoRAs for Language Model Continual Learning
<|reference_start|>Learning Attentional Mixture of LoRAs for Language Model Continual Learning: Fine-tuning large language models (LLMs) with Low-Rank Adaptation (LoRA) is widely acknowledged as an effective approach for continual learning on new tasks. However, it often suffers from catastrophic forgetting when dealing with multiple tasks sequentially. To this end, we propose Attentional Mixture of LoRAs (AM-LoRA), a continual learning approach tailored for LLMs. Specifically, AM-LoRA learns a sequence of LoRAs for a series of tasks to continually learn knowledge from different tasks. The key to our approach is that we devise an attention mechanism as a knowledge mixture module to adaptively integrate information from each LoRA. With the attention mechanism, AM-LoRA can efficiently leverage the distinctive contributions of each LoRA, while mitigating the risk of mutually negative interactions among them that may lead to catastrophic forgetting. Moreover, we introduce an $L_1$ norm in the learning process to make the attention vector sparser. The sparsity constraint enables the model to lean towards selecting a few highly relevant LoRAs, rather than aggregating and weighting all LoRAs collectively, which further reduces the impact of mutual interference. Experimental results on continual learning benchmarks indicate the superiority of our proposed method.<|reference_end|>
arxiv
@article{liu2024learning, title={Learning Attentional Mixture of LoRAs for Language Model Continual Learning}, author={Jialin Liu, Jianhua Wu, Jie Liu, Yutai Duan}, journal={arXiv preprint arXiv:2409.19611}, year={2024}, archivePrefix={arXiv}, eprint={2409.19611}, primaryClass={cs.CL} }
liu2024learning
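A hedged sketch of mixing per-task LoRA branches with a sparse gate. The paper's mixture module is an attention mechanism with an $L_1$ term on the attention vector; a simple learned sigmoid gate stands in for that mechanism here, and all shapes are illustrative.

import torch
import torch.nn as nn

class LoRAMixture(nn.Module):
    def __init__(self, base_linear, num_tasks, rank=8):
        super().__init__()
        d_out, d_in = base_linear.weight.shape
        self.base = base_linear                        # frozen pretrained layer
        self.A = nn.ParameterList([nn.Parameter(0.01 * torch.randn(rank, d_in))
                                   for _ in range(num_tasks)])
        self.B = nn.ParameterList([nn.Parameter(torch.zeros(d_out, rank))
                                   for _ in range(num_tasks)])
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, x):                              # x: [batch, d_in]
        gates = torch.sigmoid(self.gate_logits)
        out = self.base(x)
        for g, A, B in zip(gates, self.A, self.B):
            out = out + g * (x @ A.t() @ B.t())        # gated low-rank task update
        return out

    def l1_penalty(self):
        # sparsity pressure so only a few relevant LoRAs stay active
        return torch.sigmoid(self.gate_logits).sum()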
arxiv-663248
2409.19613
Hybrid Mamba for Few-Shot Segmentation
<|reference_start|>Hybrid Mamba for Few-Shot Segmentation: Many few-shot segmentation (FSS) methods use cross attention to fuse support foreground (FG) into query features, despite its quadratic complexity. A recent advance, Mamba, can also capture intra-sequence dependencies well, yet with only linear complexity. Hence, we aim to devise a cross (attention-like) Mamba to capture inter-sequence dependencies for FSS. A simple idea is to scan on support features to selectively compress them into the hidden state, which is then used as the initial hidden state to sequentially scan query features. Nevertheless, it suffers from (1) the support forgetting issue: query features will also gradually be compressed when scanning on them, so the support features in the hidden state keep reducing, and many query pixels cannot fuse sufficient support features; (2) the intra-class gap issue: query FG is essentially more similar to itself than to support FG, i.e., query pixels may prefer to fuse their own features from the hidden state rather than support features, yet the success of FSS relies on the effective use of support information. To tackle them, we design a hybrid Mamba network (HMNet), including (1) a support recapped Mamba to periodically recap the support features when scanning the query, so the hidden state always contains rich support information; (2) a query intercepted Mamba to forbid mutual interactions among query pixels and encourage them to fuse more support features from the hidden state. Consequently, the support information is better utilized, leading to better performance. Extensive experiments have been conducted on two public benchmarks, showing the superiority of HMNet. The code is available at https://github.com/Sam1224/HMNet.<|reference_end|>
arxiv
@article{xu2024hybrid, title={Hybrid Mamba for Few-Shot Segmentation}, author={Qianxiong Xu, Xuanyi Liu, Lanyun Zhu, Guosheng Lin, Cheng Long, Ziyue Li, Rui Zhao}, journal={arXiv preprint arXiv:2409.19613}, year={2024}, archivePrefix={arXiv}, eprint={2409.19613}, primaryClass={cs.CV} }
xu2024hybrid
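A toy scan illustrating only the recap schedule of the support recapped Mamba: the hidden state compresses query features but is periodically refreshed with the compressed support state so support information never fades. Real Mamba uses input-dependent (selective) state updates; the fixed decay below is a simplifying stand-in.

import torch

def support_recapped_scan(query, support_state, recap_every=16, decay=0.9):
    # query: [T, D] query-pixel sequence; support_state: [D] compressed support
    h = support_state.clone()
    outs = []
    for t in range(query.size(0)):
        if t > 0 and t % recap_every == 0:
            h = h + support_state                 # recap support information
        h = decay * h + (1.0 - decay) * query[t]  # compress the query stream
        outs.append(h)
    return torch.stack(outs)                      # [T, D] fused features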
arxiv-663249
2409.19614
Improved Architecture for High-resolution Piano Transcription to Efficiently Capture Acoustic Characteristics of Music Signals
<|reference_start|>Improved Architecture for High-resolution Piano Transcription to Efficiently Capture Acoustic Characteristics of Music Signals: Automatic music transcription (AMT), aiming to convert musical signals into musical notation, is one of the important tasks in music information retrieval. Recently, works have applied high-resolution labels, i.e., the continuous onset and offset times of piano notes, as training targets, achieving substantial improvements in transcription performance. However, there still remain some issues to be addressed, e.g., the harmonics of notes are sometimes recognized as false positive notes, and the size of AMT models tends to grow in pursuit of better transcription performance. To address these issues, we propose an improved high-resolution piano transcription model to better capture specific acoustic characteristics of music signals. First, we employ the Constant-Q Transform as the input representation to better adapt to musical signals. Moreover, we have designed two architectures: the first is based on a convolutional recurrent neural network (CRNN) with dilated convolution, and the second is an encoder-decoder architecture that combines a CRNN with a non-autoregressive Transformer decoder. We conduct systematic experiments on our models. Compared to the high-resolution AMT system used as a baseline, our models achieve 1) consistent improvement in note-level metrics, and 2) a significantly smaller model size, which sheds light on future work.<|reference_end|>
arxiv
@article{mi2024improved, title={Improved Architecture for High-resolution Piano Transcription to Efficiently Capture Acoustic Characteristics of Music Signals}, author={Jinyi Mi, Sehun Kim and Tomoki Toda}, journal={arXiv preprint arXiv:2409.19614}, year={2024}, archivePrefix={arXiv}, eprint={2409.19614}, primaryClass={cs.SD eess.AS} }
mi2024improved
arxiv-663250
2409.19616
DuoGNN: Topology-aware Graph Neural Network with Homophily and Heterophily Interaction-Decoupling
<|reference_start|>DuoGNN: Topology-aware Graph Neural Network with Homophily and Heterophily Interaction-Decoupling: Graph Neural Networks (GNNs) have proven effective in various medical imaging applications, such as automated disease diagnosis. However, due to the local neighborhood aggregation paradigm in message passing which characterizes these models, they inherently suffer from two fundamental limitations: first, indistinguishable node embeddings due to heterophilic node aggregation (known as over-smoothing), and second, impaired message passing due to aggregation through graph bottlenecks (known as over-squashing). These challenges hinder the model expressiveness and prevent us from using deeper models to capture long-range node dependencies within the graph. Popular solutions in the literature are either too expensive to process large graphs due to high time complexity or do not generalize across all graph topologies. To address these limitations, we propose DuoGNN, a scalable and generalizable architecture which leverages topology to decouple homophilic and heterophilic edges and capture both short-range and long-range interactions. Our three core contributions introduce (i) a topological edge-filtering algorithm which extracts homophilic interactions and enables the model to generalize well for any graph topology, (ii) a heterophilic graph condensation technique which extracts heterophilic interactions and ensures scalability, and (iii) a dual homophilic and heterophilic aggregation pipeline which prevents over-smoothing and over-squashing during the message passing. We benchmark our model on medical and non-medical node classification datasets and compare it with its variants, showing consistent improvements across all tasks. Our DuoGNN code is available at https://github.com/basiralab/DuoGNN.<|reference_end|>
arxiv
@article{mancini2024duognn:, title={DuoGNN: Topology-aware Graph Neural Network with Homophily and Heterophily Interaction-Decoupling}, author={K. Mancini and I. Rekik}, journal={arXiv preprint arXiv:2409.19616}, year={2024}, archivePrefix={arXiv}, eprint={2409.19616}, primaryClass={cs.LG cs.SI} }
mancini2024duognn:
arxiv-663251
2409.19617
LiRA: Light-Robust Adversary for Model-based Reinforcement Learning in Real World
<|reference_start|>LiRA: Light-Robust Adversary for Model-based Reinforcement Learning in Real World: Model-based reinforcement learning has attracted much attention due to its high sample efficiency and is expected to be applied to real-world robotic applications. In the real world, as unobservable disturbances can lead to unexpected situations, robot policies should improve not only control performance but also robustness. Adversarial learning is an effective way to improve robustness, but an excessive adversary would increase the risk of malfunction and make the control performance too conservative. Therefore, this study addresses a new adversarial learning framework that makes reinforcement learning moderately robust without becoming too conservative. To this end, adversarial learning is first rederived with variational inference. In addition, light robustness, which allows for maximizing robustness within an acceptable performance degradation, is utilized as a constraint. As a result, the proposed framework, called LiRA, can automatically adjust the adversary level, balancing robustness and conservativeness. The expected behaviors of LiRA are confirmed in numerical simulations. In addition, LiRA succeeds in learning a force-reactive gait control of a quadrupedal robot using only real-world data collected in less than two hours.<|reference_end|>
arxiv
@article{kobayashi2024lira:, title={LiRA: Light-Robust Adversary for Model-based Reinforcement Learning in Real World}, author={Taisuke Kobayashi}, journal={arXiv preprint arXiv:2409.19617}, year={2024}, archivePrefix={arXiv}, eprint={2409.19617}, primaryClass={cs.RO} }
kobayashi2024lira:
arxiv-663252
2409.19619
Discerning the Chaos: Detecting Adversarial Perturbations while Disentangling Intentional from Unintentional Noises
<|reference_start|>Discerning the Chaos: Detecting Adversarial Perturbations while Disentangling Intentional from Unintentional Noises: Deep learning models, such as those used for face recognition and attribute prediction, are susceptible to manipulations like adversarial noise and unintentional noise, including Gaussian and impulse noise. This paper introduces CIAI, a Class-Independent Adversarial Intent detection network built on a modified vision transformer with detection layers. CIAI employs a novel loss function that combines Maximum Mean Discrepancy and Center Loss to detect both intentional (adversarial attacks) and unintentional noise, regardless of the image class. It is trained in a multi-step fashion. We also introduce the aspect of intent during detection that can act as an added layer of security. We further showcase the performance of our proposed detector on CelebA, CelebA-HQ, LFW, AgeDB, and CIFAR-10 datasets. Our detector is able to detect both intentional (like FGSM, PGD, and DeepFool) and unintentional (like Gaussian and Salt & Pepper noises) perturbations.<|reference_end|>
arxiv
@article{jain2024discerning, title={Discerning the Chaos: Detecting Adversarial Perturbations while Disentangling Intentional from Unintentional Noises}, author={Anubhooti Jain, Susim Roy, Kwanit Gupta, Mayank Vatsa, and Richa Singh}, journal={arXiv preprint arXiv:2409.19619}, year={2024}, archivePrefix={arXiv}, eprint={2409.19619}, primaryClass={cs.CV cs.AI} }
jain2024discerning
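A sketch of a loss in the spirit described above, combining Maximum Mean Discrepancy (to push clean and perturbed features apart) with a center loss (to keep clean features compact). The signs and weighting are assumptions; the paper's exact combination and multi-step training may differ.

import torch

def rbf_mmd2(x, y, sigma=1.0):
    # squared MMD with an RBF kernel between feature batches x, y: [N, D]
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def center_loss(feats, labels, centers):
    # pull features toward their class centers; centers: [num_classes, D]
    return ((feats - centers[labels]) ** 2).sum(dim=1).mean()

def detection_loss(clean, perturbed, labels, centers, lam=0.1):
    # maximize clean/perturbed discrepancy, keep clean classes compact
    return -rbf_mmd2(clean, perturbed) + lam * center_loss(clean, labels, centers)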
arxiv-663253
2409.19620
DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks
<|reference_start|>DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks: The paper discusses signed graphs, which model friendly or antagonistic relationships using edges marked with positive or negative signs, focusing on the task of link sign prediction. While Signed Graph Neural Networks (SGNNs) have advanced, they face challenges like graph sparsity and unbalanced triangles. The authors propose using data augmentation (DA) techniques to address these issues, although many existing methods are not suitable for signed graphs due to a lack of side information. They highlight that the random DropEdge method, a rare DA approach applicable to signed graphs, does not enhance link sign prediction performance. In response, they introduce the Signed Graph Augmentation (SGA) framework, which includes a structure augmentation module to identify candidate edges and a strategy for selecting beneficial candidates, ultimately improving SGNN training. Experimental results show that SGA significantly boosts the performance of SGNN models, with a notable 32.3% improvement in F1-micro for SGCN on the Slashdot dataset.<|reference_end|>
arxiv
@article{zhang2024dropedge, title={DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks}, author={Zeyu Zhang, Lu Li, Shuyan Wan, Sijie Wang, Zhiyi Wang, Zhiyuan Lu, Dong Hao, Wanli Li}, journal={arXiv preprint arXiv:2409.19620}, year={2024}, archivePrefix={arXiv}, eprint={2409.19620}, primaryClass={cs.LG cs.AI} }
zhang2024dropedge
arxiv-663254
2409.19621
LDPC Codes for Quantitative Group Testing with a Non-Binary Alphabet
<|reference_start|>LDPC Codes for Quantitative Group Testing with a Non-Binary Alphabet: We propose and analyze a novel scheme based on LDPC codes for quantitative group testing. The key underlying idea is to augment the bipartite graph by introducing hidden non-binary variables to strengthen the message-passing decoder. This is achieved by grouping items into bundles of size q within the test matrix, while keeping the testing procedure unaffected. The decoder, inspired by some works on counter braids, passes lower and upper bounds on the bundle values along the edges of the graph, with the gap between the two shrinking with the decoder iterations. Through a density evolution analysis and finite length simulations, we show that the proposed scheme significantly outperforms its binary counterpart with limited increase in complexity.<|reference_end|>
arxiv
@article{mashauri2024ldpc, title={LDPC Codes for Quantitative Group Testing with a Non-Binary Alphabet}, author={Mgeni Makambi Mashauri, Alexandre Graell i Amat and Michael Lentmaier}, journal={arXiv preprint arXiv:2409.19621}, year={2024}, archivePrefix={arXiv}, eprint={2409.19621}, primaryClass={cs.IT math.IT} }
mashauri2024ldpc
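The lower/upper-bound message passing can be illustrated with a single check-node update in the counter-braids style the abstract mentions: a test observes the exact sum s of its bundles, so each bundle's interval is tightened using the others' current bounds. Pure-Python sketch; degrees, scheduling, and the bipartite graph are omitted.

def tighten_bounds(s, lb, ub):
    # s: observed test result (sum of bundle values); lb, ub: per-bundle bounds
    tot_lb, tot_ub = sum(lb), sum(ub)
    new_lb = [max(lb[i], s - (tot_ub - ub[i])) for i in range(len(lb))]
    new_ub = [min(ub[i], s - (tot_lb - lb[i])) for i in range(len(ub))]
    return new_lb, new_ub

# e.g. three bundles of size q=4 with values in [0, 4] and test result 1:
# tighten_bounds(1, [0, 0, 0], [4, 4, 4]) -> ([0, 0, 0], [1, 1, 1])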
arxiv-663255
2409.19622
Programming on Bitcoin: A Survey of Layer 1 and Layer 2 Technologies in Bitcoin Ecosystem
<|reference_start|>Programming on Bitcoin: A Survey of Layer 1 and Layer 2 Technologies in Bitcoin Ecosystem: This paper surveys innovative protocols that enhance the programming functionality of the Bitcoin blockchain, a key part of the "Bitcoin Ecosystem." Bitcoin utilizes the Unspent Transaction Output (UTXO) model and a stack-based script language for efficient peer-to-peer payments, but it faces limitations in programming capability and throughput. The 2021 Taproot upgrade introduced the Schnorr signature algorithm and P2TR transaction type, significantly improving Bitcoin's privacy and programming capabilities. This upgrade has led to the development of protocols like Ordinals, Atomicals, and BitVM, which enhance Bitcoin's programming functionality and enrich its ecosystem. We explore the technical aspects of the Taproot upgrade and examine Bitcoin Layer 1 protocols that leverage Taproot's features to program non-fungible tokens (NFTs) into transactions, including Ordinals and Atomicals, along with the fungible token standards BRC-20 and ARC-20. Additionally, we categorize certain Bitcoin ecosystem protocols as Layer 2 solutions similar to Ethereum's, analyzing their impact on Bitcoin's performance. By analyzing data from the Bitcoin blockchain, we gather metrics on block capacity, miner fees, and the growth of Taproot transactions. Our findings confirm the positive effects of these protocols on Bitcoin's mainnet, bridging gaps in the literature regarding Bitcoin's programming capabilities and ecosystem protocols and providing valuable insights for practitioners and researchers.<|reference_end|>
arxiv
@article{liao2024programming, title={Programming on Bitcoin: A Survey of Layer 1 and Layer 2 Technologies in Bitcoin Ecosystem}, author={Guofu Liao, Taotao Wang, Qing Yang, Yihan Xia, Long Shi, Xiang Zhao, Xiaoxiao Wu, Shengli Zhang, Anthony Chan, and Richard Yuen}, journal={arXiv preprint arXiv:2409.19622}, year={2024}, archivePrefix={arXiv}, eprint={2409.19622}, primaryClass={cs.CR} }
liao2024programming
arxiv-663256
2409.19623
MCDDPM: Multichannel Conditional Denoising Diffusion Model for Unsupervised Anomaly Detection in Brain MRI
<|reference_start|>MCDDPM: Multichannel Conditional Denoising Diffusion Model for Unsupervised Anomaly Detection in Brain MRI: Detecting anomalies in brain MRI scans using supervised deep learning methods presents challenges due to anatomical diversity and the labor-intensive requirement of pixel-level annotations. Generative models like the Denoising Diffusion Probabilistic Model (DDPM) and its variants pDDPM, mDDPM and cDDPM have recently emerged as powerful alternatives for unsupervised anomaly detection in brain MRI scans. These methods leverage frame-level labels of healthy brains to generate healthy tissues in brain MRI scans. During inference, when an anomalous (or unhealthy) scan image is presented as an input, these models generate a healthy scan image corresponding to the input anomalous scan, and the difference map between the generated healthy scan image and the original anomalous scan image provides the necessary pixel-level identification of abnormal tissues. The generated healthy images from the DDPM, pDDPM and mDDPM models, however, suffer from fidelity issues and contain artifacts that do not have medical significance. While cDDPM achieves slightly better fidelity and artifact suppression, it requires a huge memory footprint and is more computationally expensive than the other DDPM-based models. In this work, we propose an improved version of DDPM called the Multichannel Conditional Denoising Diffusion Probabilistic Model (MCDDPM) for unsupervised anomaly detection in brain MRI scans. Our proposed model achieves high fidelity by making use of additional information from the healthy images during the training process, enriching the representation power of DDPM models, with a computational cost and memory requirements on par with the DDPM, pDDPM and mDDPM models. Experimental results on multiple datasets (e.g. BraTS20, BraTS21) demonstrate the promising performance of the proposed method. The code is available at https://github.com/vivekkumartri/MCDDPM.<|reference_end|>
arxiv
@article{trivedi2024mcddpm:, title={MCDDPM: Multichannel Conditional Denoising Diffusion Model for Unsupervised Anomaly Detection in Brain MRI}, author={Vivek Kumar Trivedi, Bheeshm Sharma, P. Balamurugan}, journal={arXiv preprint arXiv:2409.19623}, year={2024}, archivePrefix={arXiv}, eprint={2409.19623}, primaryClass={eess.IV cs.AI cs.CV} }
trivedi2024mcddpm:
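A generic sketch of the DDPM-style anomaly scoring the abstract describes: partially noise the scan, denoise it back toward the healthy distribution the model was trained on, and read anomalies off the difference map. The crude forward noising and the one-step reverse function denoise(x_t, t) are stand-ins, not the MCDDPM architecture itself.

import torch

@torch.no_grad()
def anomaly_map(x, denoise, t_start=200):
    # x: [B, 1, H, W] input scan; denoise: assumed reverse-process step
    x_t = x + 0.5 * torch.randn_like(x)   # crude partial forward noising
    for t in reversed(range(t_start)):
        x_t = denoise(x_t, t)             # walk back toward a "healthy" image
    return (x - x_t).abs()                # pixel-level anomaly map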
arxiv-663257
2409.19624
Storynizor: Consistent Story Generation via Inter-Frame Synchronized and Shuffled ID Injection
<|reference_start|>Storynizor: Consistent Story Generation via Inter-Frame Synchronized and Shuffled ID Injection: Recent advances in text-to-image diffusion models have spurred significant interest in continuous story image generation. In this paper, we introduce Storynizor, a model capable of generating coherent stories with strong inter-frame character consistency, effective foreground-background separation, and diverse pose variation. The core innovation of Storynizor lies in its key modules: the ID-Synchronizer and the ID-Injector. The ID-Synchronizer employs an auto-mask self-attention module and a mask perceptual loss across inter-frame images to improve the consistency of character generation, vividly representing their postures and backgrounds. The ID-Injector utilizes a Shuffling Reference Strategy (SRS) to integrate ID features into specific locations, enhancing ID-based consistent character generation. Additionally, to facilitate the training of Storynizor, we have curated a novel dataset called StoryDB comprising 100,000 images. This dataset contains single- and multiple-character sets in diverse environments, layouts, and gestures with detailed descriptions. Experimental results indicate that Storynizor demonstrates superior coherent story generation with high-fidelity character consistency, flexible postures, and vivid backgrounds compared to other character-specific methods.<|reference_end|>
arxiv
@article{ma2024storynizor:, title={Storynizor: Consistent Story Generation via Inter-Frame Synchronized and Shuffled ID Injection}, author={Yuhang Ma, Wenting Xu, Chaoyi Zhao, Keqiang Sun, Qinfeng Jin, Zeng Zhao, Changjie Fan, Zhipeng Hu}, journal={arXiv preprint arXiv:2409.19624}, year={2024}, archivePrefix={arXiv}, eprint={2409.19624}, primaryClass={cs.CV cs.AI} }
ma2024storynizor:
arxiv-663258
2409.19625
An action language-based formalisation of an abstract argumentation framework
<|reference_start|>An action language-based formalisation of an abstract argumentation framework: An abstract argumentation framework is a commonly used formalism to provide a static representation of a dialogue. However, the order of enunciation of the arguments in an argumentative dialogue is very important and can affect the outcome of this dialogue. In this paper, we propose a new framework for modelling abstract argumentation graphs, a model that incorporates the order of enunciation of arguments. By taking this order into account, we have the means to deduce a unique outcome for each dialogue, called an extension. We also establish several properties, such as termination and correctness, and discuss two notions of completeness. In particular, we propose a modification of the previous transformation based on a "last enunciated last updated" strategy, which verifies the second form of completeness.<|reference_end|>
arxiv
@article{munro2024an, title={An action language-based formalisation of an abstract argumentation framework}, author={Yann Munro, Camilo Sarmiento, Isabelle Bloch, Gauvain Bourgne, Catherine Pelachaud, Marie-Jeanne Lesot}, journal={arXiv preprint arXiv:2409.19625}, year={2024}, archivePrefix={arXiv}, eprint={2409.19625}, primaryClass={cs.AI cs.LO} }
munro2024an
arxiv-663259
2409.19627
IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding
<|reference_start|>IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding: The audio watermarking technique embeds messages into audio and accurately extracts messages from the watermarked audio. Traditional methods develop algorithms based on expert experience to embed watermarks into the time-domain or transform-domain of signals. With the development of deep neural networks, deep learning-based neural audio watermarking has emerged. Compared to traditional algorithms, neural audio watermarking achieves better robustness by considering various attacks during training. However, current neural watermarking methods suffer from low capacity and unsatisfactory imperceptibility. Additionally, the issue of watermark locating, which is extremely important and even more pronounced in neural audio watermarking, has not been adequately studied. In this paper, we design a dual-embedding watermarking model for efficient locating. We also consider the impact of the attack layer on the invertible neural network in robustness training, improving the model to enhance both its reasonableness and stability. Experiments show that the proposed model, IDEAW, can withstand various attacks with higher capacity and more efficient locating ability compared to existing methods.<|reference_end|>
arxiv
@article{li2024ideaw:, title={IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding}, author={Pengcheng Li, Xulong Zhang, Jing Xiao, Jianzong Wang}, journal={arXiv preprint arXiv:2409.19627}, year={2024}, archivePrefix={arXiv}, eprint={2409.19627}, primaryClass={cs.MM cs.CR cs.SD eess.AS} }
li2024ideaw:
arxiv-663260
2409.19629
A Survey on Graph Neural Networks for Remaining Useful Life Prediction: Methodologies, Evaluation and Future Trends
<|reference_start|>A Survey on Graph Neural Networks for Remaining Useful Life Prediction: Methodologies, Evaluation and Future Trends: Remaining Useful Life (RUL) prediction is a critical aspect of Prognostics and Health Management (PHM), aimed at predicting the future state of a system to enable timely maintenance and prevent unexpected failures. While existing deep learning methods have shown promise, they often struggle to fully leverage the spatial information inherent in complex systems, limiting their effectiveness in RUL prediction. To address this challenge, recent research has explored the use of Graph Neural Networks (GNNs) to model spatial information for more accurate RUL prediction. This paper presents a comprehensive review of GNN techniques applied to RUL prediction, summarizing existing methods and offering guidance for future research. We first propose a novel taxonomy based on the stages of adapting GNNs to RUL prediction, systematically categorizing approaches into four key stages: graph construction, graph modeling, graph information processing, and graph readout. By organizing the field in this way, we highlight the unique challenges and considerations at each stage of the GNN pipeline. Additionally, we conduct a thorough evaluation of various state-of-the-art (SOTA) GNN methods, ensuring consistent experimental settings for fair comparisons. This rigorous analysis yields valuable insights into the strengths and weaknesses of different approaches, serving as an experimental guide for researchers and practitioners working in this area. Finally, we identify and discuss several promising research directions that could further advance the field, emphasizing the potential for GNNs to revolutionize RUL prediction and enhance the effectiveness of PHM strategies. The benchmarking code is available on GitHub: https://github.com/Frank-Wang-oss/GNN_RUL_Benchmarking.<|reference_end|>
arxiv
@article{wang2024a, title={A Survey on Graph Neural Networks for Remaining Useful Life Prediction: Methodologies, Evaluation and Future Trends}, author={Yucheng Wang, Min Wu, Xiaoli Li, Lihua Xie, Zhenghua Chen}, journal={arXiv preprint arXiv:2409.19629}, year={2024}, archivePrefix={arXiv}, eprint={2409.19629}, primaryClass={cs.LG cs.AI} }
wang2024a
arxiv-663261
2409.19635
Temporal Source Recovery for Time-Series Source-Free Unsupervised Domain Adaptation
<|reference_start|>Temporal Source Recovery for Time-Series Source-Free Unsupervised Domain Adaptation: Source-Free Unsupervised Domain Adaptation (SFUDA) has gained popularity for its ability to adapt pretrained models to target domains without accessing source domains, ensuring source data privacy. While SFUDA is well-developed in visual tasks, its application to Time-Series SFUDA (TS-SFUDA) remains limited due to the challenge of transferring crucial temporal dependencies across domains. Although a few researchers have begun to explore this area, they rely on specific source domain designs, which are impractical as source data owners cannot be expected to follow particular pretraining protocols. To solve this, we propose Temporal Source Recovery (TemSR), a framework that transfers temporal dependencies for effective TS-SFUDA without requiring source-specific designs. TemSR features a recovery process that leverages masking, recovery, and optimization to generate a source-like distribution with recovered source temporal dependencies. To ensure effective recovery, we further design segment-based regularization to restore local dependencies and anchor-based recovery diversity maximization to enhance the diversity of the source-like distribution. The source-like distribution is then adapted to the target domain using traditional UDA techniques. Extensive experiments across multiple TS tasks demonstrate the effectiveness of TemSR, even surpassing existing TS-SFUDA methods that require source domain designs. Code is available at https://github.com/Frank-Wang-oss/TemSR.<|reference_end|>
arxiv
@article{wang2024temporal, title={Temporal Source Recovery for Time-Series Source-Free Unsupervised Domain Adaptation}, author={Yucheng Wang, Peiliang Gong, Min Wu, Felix Ott, Xiaoli Li, Lihua Xie, Zhenghua Chen}, journal={arXiv preprint arXiv:2409.19635}, year={2024}, archivePrefix={arXiv}, eprint={2409.19635}, primaryClass={cs.LG cs.CV} }
wang2024temporal
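A hedged sketch of the masking-and-recovery idea behind TemSR: hide random segments of an unlabeled series and train a recovery network so that reconstructing the hidden parts forces it to expose temporal dependencies. The toy GRU, mask ratio, and plain MSE objective are illustrative stand-ins for the paper's segment-based regularization and diversity-maximization terms.

import torch
import torch.nn as nn

def mask_segments(x: torch.Tensor, seg_len: int = 8, n_seg: int = 3) -> torch.Tensor:
    """x: (B, T, C). Zero out n_seg random contiguous segments per sample."""
    x = x.clone()
    for b in range(x.size(0)):
        for _ in range(n_seg):
            s = torch.randint(0, x.size(1) - seg_len, (1,)).item()
            x[b, s:s + seg_len] = 0.0
    return x

recoverer = nn.GRU(input_size=9, hidden_size=9, batch_first=True)  # toy model
opt = torch.optim.Adam(recoverer.parameters(), lr=1e-3)

x = torch.randn(16, 128, 9)                   # unlabeled target-domain batch
recovered, _ = recoverer(mask_segments(x))
loss = nn.functional.mse_loss(recovered, x)   # recover the hidden segments
loss.backward(); opt.step()
print(float(loss))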
arxiv-663262
2409.19637
A Globalized Inexact Semismooth Newton Method for Nonsmooth Fixed-point Equations involving Variational Inequalities
<|reference_start|>A Globalized Inexact Semismooth Newton Method for Nonsmooth Fixed-point Equations involving Variational Inequalities: We develop a semismooth Newton framework for the numerical solution of fixed-point equations that are posed in Banach spaces. The framework is motivated by applications in the field of obstacle-type quasi-variational inequalities and implicit obstacle problems. It is discussed in a general functional analytic setting and allows for inexact function evaluations and Newton steps. Moreover, if a certain contraction assumption holds, we show that it is possible to globalize the algorithm by means of the Banach fixed-point theorem and to ensure $q$-superlinear convergence to the problem solution for arbitrary starting values. By means of a localization technique, our Newton method can also be used to determine solutions of fixed-point equations that are only locally contractive and not uniquely solvable. We apply our algorithm to a quasi-variational inequality which arises in thermoforming and which not only involves the obstacle problem as a source of nonsmoothness but also a semilinear PDE containing a nondifferentiable Nemytskii operator. Our analysis is accompanied by numerical experiments that illustrate the mesh-independence and $q$-superlinear convergence of the developed solution algorithm.<|reference_end|>
arxiv
@article{alphonse2024a, title={A Globalized Inexact Semismooth Newton Method for Nonsmooth Fixed-point Equations involving Variational Inequalities}, author={Amal Alphonse, Constantin Christof, Michael Hinterm\"uller, Ioannis P. A. Papadopoulos}, journal={arXiv preprint arXiv:2409.19637}, year={2024}, archivePrefix={arXiv}, eprint={2409.19637}, primaryClass={math.NA cs.NA math.AP} }
alphonse2024a
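The globalization idea translates directly into a finite-dimensional toy: take Newton trial steps on F(x) = x - G(x) and fall back to the plain contraction iteration x <- G(x) whenever the trial step fails to reduce the residual, so convergence from arbitrary starting values follows from the Banach fixed-point theorem. The smooth test map and finite-difference Jacobian below are stand-ins for the paper's semismooth, function-space setting.

import numpy as np

def G(x):                                  # a contraction on R^2
    return 0.5 * np.tanh(x) + np.array([0.3, -0.1])

def solve(x, tol=1e-12, h=1e-7):
    for it in range(100):
        F = x - G(x)
        if np.linalg.norm(F) < tol:
            return x, it
        # finite-difference Jacobian of F: J = I - G'(x)
        J = np.eye(len(x)) - np.column_stack(
            [(G(x + h * e) - G(x)) / h for e in np.eye(len(x))])
        cand = x - np.linalg.solve(J, F)   # Newton trial step
        if np.linalg.norm(cand - G(cand)) < np.linalg.norm(F):
            x = cand                       # accept fast local step
        else:
            x = G(x)                       # globalizing contraction step
    return x, it

x_star, iters = solve(np.zeros(2))
print(x_star, iters)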
arxiv-663263
2409.19638
BadHMP: Backdoor Attack against Human Motion Prediction
<|reference_start|>BadHMP: Backdoor Attack against Human Motion Prediction: Precise future human motion prediction over subsecond horizons from past observations is crucial for various safety-critical applications. To date, only one study has examined the vulnerability of human motion prediction to evasion attacks. In this paper, we propose BadHMP, the first backdoor attack that targets specifically human motion prediction. Our approach involves generating poisoned training samples by embedding a localized backdoor trigger in one arm of the skeleton, causing selected joints to remain relatively still or follow predefined motion in historical time steps. Subsequently, the future sequences are globally modified to the target sequences, and the entire training dataset is traversed to select the most suitable samples for poisoning. Our carefully designed backdoor triggers and targets guarantee the smoothness and naturalness of the poisoned samples, making them stealthy enough to evade detection by the model trainer while keeping the poisoned model unobtrusive in terms of prediction fidelity to untainted sequences. The target sequences can be successfully activated by the designed input sequences even with a low poisoned sample injection ratio. Experimental results on two datasets (Human3.6M and CMU-Mocap) and two network architectures (LTD and HRI) demonstrate the high-fidelity, effectiveness, and stealthiness of BadHMP. Robustness of our attack against fine-tuning defense is also verified.<|reference_end|>
arxiv
@article{xu2024badhmp:, title={BadHMP: Backdoor Attack against Human Motion Prediction}, author={Chaohui Xu, Si Wang and Chip-Hong Chang}, journal={arXiv preprint arXiv:2409.19638}, year={2024}, archivePrefix={arXiv}, eprint={2409.19638}, primaryClass={cs.CV cs.AI} }
xu2024badhmp:
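An illustrative poisoning routine in the spirit of BadHMP: plant a localized trigger by holding selected "arm" joints still over the observed frames and replace the future frames with an attacker-chosen target sequence. The joint indices and the stillness trigger are hypothetical; the paper additionally enforces smoothness and naturalness of the poisoned samples, which is omitted here.

import numpy as np

ARM_JOINTS = [14, 15, 16]            # hypothetical right-arm joint indices

def poison(sample: np.ndarray, t_hist: int, target_future: np.ndarray) -> np.ndarray:
    """sample: (T, J, 3) skeleton sequence; first t_hist frames are history."""
    s = sample.copy()
    s[:t_hist, ARM_JOINTS] = s[0, ARM_JOINTS]      # trigger: arm held still
    s[t_hist:] = target_future                     # backdoor target motion
    return s

rng = np.random.default_rng(0)
clean = rng.normal(size=(60, 22, 3))               # 60 frames, 22 joints
poisoned = poison(clean, t_hist=50, target_future=np.zeros((10, 22, 3)))
print(np.allclose(poisoned[:50, 14], poisoned[0, 14]))  # True: trigger active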
arxiv-663264
2409.19641
fCOP: Focal Length Estimation from Category-level Object Priors
<|reference_start|>fCOP: Focal Length Estimation from Category-level Object Priors: In the realm of computer vision, the perception and reconstruction of the 3D world through vision signals heavily rely on camera intrinsic parameters, which have long been a subject of intense research within the community. In practical applications, without a strong scene geometry prior like the Manhattan World assumption or special artificial calibration patterns, monocular focal length estimation becomes a challenging task. In this paper, we propose a method for monocular focal length estimation using category-level object priors. Based on two well-studied existing tasks: monocular depth estimation and category-level object canonical representation learning, our focal solver takes depth priors and object shape priors from images containing objects and estimates the focal length from triplets of correspondences in closed form. Our experiments on simulated and real world data demonstrate that the proposed method outperforms the current state-of-the-art, offering a promising solution to the long-standing monocular focal length estimation problem.<|reference_end|>
arxiv
@article{zhang2024fcop:, title={fCOP: Focal Length Estimation from Category-level Object Priors}, author={Xinyue Zhang, Jiaqi Yang, Xiangting Meng, Abdelrahman Mohamed, Laurent Kneip}, journal={arXiv preprint arXiv:2409.19641}, year={2024}, archivePrefix={arXiv}, eprint={2409.19641}, primaryClass={cs.CV} }
zhang2024fcop:
arxiv-663265
2409.19642
Solving Fredholm Integral Equations of the Second Kind via Wasserstein Gradient Flows
<|reference_start|>Solving Fredholm Integral Equations of the Second Kind via Wasserstein Gradient Flows: Motivated by a recent method for approximate solution of Fredholm equations of the first kind, we develop a corresponding method for a class of Fredholm equations of the second kind. In particular, we consider the class of equations for which the solution is a probability measure. The approach centres around specifying a functional whose gradient flow admits a minimizer corresponding to a regularized version of the solution of the underlying equation and using a mean-field particle system to approximately simulate that flow. Theoretical support for the method is presented, along with some illustrative numerical results.<|reference_end|>
arxiv
@article{crucinio2024solving, title={Solving Fredholm Integral Equations of the Second Kind via Wasserstein Gradient Flows}, author={Francesca R. Crucinio, Adam M. Johansen}, journal={arXiv preprint arXiv:2409.19642}, year={2024}, archivePrefix={arXiv}, eprint={2409.19642}, primaryClass={stat.CO cs.NA math.NA math.OC stat.ME} }
crucinio2024solving
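A minimal mean-field particle scheme for a Wasserstein gradient flow of the generic energy E(mu) = \int V dmu + 1/2 \iint W d(mu x mu), discretized by explicit Euler: each particle drifts along -grad V minus the averaged interaction gradient. The quadratic potentials are illustrative stand-ins for the regularized functional the paper builds from the Fredholm equation.

import numpy as np

def grad_V(x):            # confining potential gradient, V(x) = |x - 1|^2 / 2
    return x - 1.0

def grad_W(d):            # interaction gradient, W(d) = |d|^2 / 2
    return d

rng = np.random.default_rng(0)
X = rng.normal(size=200)                     # particle approximation of mu_0
dt = 0.05
for _ in range(200):
    interaction = X[:, None] - X[None, :]    # pairwise differences
    drift = -grad_V(X) - grad_W(interaction).mean(axis=1)
    X = X + dt * drift                       # Euler step of the flow

print(X.mean(), X.std())   # particles settle near the energy minimizer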
arxiv-663266
2409.19647
Fine-Tuning Hybrid Physics-Informed Neural Networks for Vehicle Dynamics Model Estimation
<|reference_start|>Fine-Tuning Hybrid Physics-Informed Neural Networks for Vehicle Dynamics Model Estimation: Accurate dynamic modeling is critical for autonomous racing vehicles, especially during high-speed and agile maneuvers where precise motion prediction is essential for safety. Traditional parameter estimation methods face limitations such as reliance on initial guesses, labor-intensive fitting procedures, and complex testing setups. On the other hand, purely data-driven machine learning methods struggle to capture inherent physical constraints and typically require large datasets for optimal performance. To address these challenges, this paper introduces the Fine-Tuning Hybrid Dynamics (FTHD) method, which integrates supervised and unsupervised Physics-Informed Neural Networks (PINNs), combining physics-based modeling with data-driven techniques. FTHD fine-tunes a pre-trained Deep Dynamics Model (DDM) using a smaller training dataset, delivering superior performance compared to state-of-the-art methods such as the Deep Pacejka Model (DPM) and outperforming the original DDM. Furthermore, an Extended Kalman Filter (EKF) is embedded within FTHD (EKF-FTHD) to effectively manage noisy real-world data, ensuring accurate denoising while preserving the vehicle's essential physical characteristics. The proposed FTHD framework is validated through scaled simulations using the BayesRace Physics-based Simulator and full-scale real-world experiments from the Indy Autonomous Challenge. Results demonstrate that the hybrid approach significantly improves parameter estimation accuracy, even with reduced data, and outperforms existing models. EKF-FTHD enhances robustness by denoising real-world data while maintaining physical insights, representing a notable advancement in vehicle dynamics modeling for high-speed autonomous racing.<|reference_end|>
arxiv
@article{fang2024fine-tuning, title={Fine-Tuning Hybrid Physics-Informed Neural Networks for Vehicle Dynamics Model Estimation}, author={Shiming Fang and Kaiyan Yu}, journal={arXiv preprint arXiv:2409.19647}, year={2024}, archivePrefix={arXiv}, eprint={2409.19647}, primaryClass={cs.RO cs.AI cs.SY eess.SY} }
fang2024fine-tuning
arxiv-663267
2409.19648
OrientedFormer: An End-to-End Transformer-Based Oriented Object Detector in Remote Sensing Images
<|reference_start|>OrientedFormer: An End-to-End Transformer-Based Oriented Object Detector in Remote Sensing Images: Oriented object detection in remote sensing images is a challenging task due to objects being distributed in multiple orientations. Recently, end-to-end transformer-based methods have achieved success by eliminating the need for post-processing operators compared to traditional CNN-based methods. However, directly extending transformers to oriented object detection presents three main issues: 1) objects rotate arbitrarily, necessitating the encoding of angles along with position and size; 2) the geometric relations of oriented objects are lacking in self-attention, due to the absence of interaction between content and positional queries; and 3) oriented objects cause misalignment, mainly between values and positional queries in cross-attention, making accurate classification and localization difficult. In this paper, we propose an end-to-end transformer-based oriented object detector, consisting of three dedicated modules to address these issues. First, Gaussian positional encoding is proposed to encode the angle, position, and size of oriented boxes using Gaussian distributions. Second, Wasserstein self-attention is proposed to introduce geometric relations and facilitate interaction between content and positional queries by utilizing Gaussian Wasserstein distance scores. Third, oriented cross-attention is proposed to align values and positional queries by rotating sampling points around the positional query according to their angles. Experiments on six datasets (DIOR-R, a series of DOTA, HRSC2016, and ICDAR2015) show the effectiveness of our approach. Compared with previous end-to-end detectors, the OrientedFormer gains 1.16 and 1.21 AP$_{50}$ on DIOR-R and DOTA-v1.0 respectively, while reducing training epochs from 3$\times$ to 1$\times$. The codes are available at https://github.com/wokaikaixinxin/OrientedFormer.<|reference_end|>
arxiv
@article{zhao2024orientedformer:, title={OrientedFormer: An End-to-End Transformer-Based Oriented Object Detector in Remote Sensing Images}, author={Jiaqi Zhao, Zeyu Ding, Yong Zhou, Hancheng Zhu, Wen-Liang Du, Rui Yao, and Abdulmotaleb El Saddik}, journal={arXiv preprint arXiv:2409.19648}, year={2024}, archivePrefix={arXiv}, eprint={2409.19648}, primaryClass={cs.CV} }
zhao2024orientedformer:
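A hedged sketch of the Gaussian modeling used for oriented boxes: each box (cx, cy, w, h, angle) becomes a 2-D Gaussian, and the squared Wasserstein-2 distance between two such Gaussians gives a geometry-aware score. The mapping from box extent to covariance is one common convention, and the normalization that would turn this score into an attention weight is omitted.

import numpy as np
from scipy.linalg import sqrtm

def box_to_gaussian(cx, cy, w, h, angle):
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    S = np.diag([(w / 2.0) ** 2, (h / 2.0) ** 2])
    return np.array([cx, cy]), R @ S @ R.T        # mean, covariance

def gaussian_w2_sq(m1, S1, m2, S2):
    # W2^2 = |m1 - m2|^2 + Tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})
    root = sqrtm(sqrtm(S1) @ S2 @ sqrtm(S1)).real
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * root))

m1, S1 = box_to_gaussian(0, 0, 4, 2, 0.0)
m2, S2 = box_to_gaussian(1, 0, 4, 2, np.pi / 6)   # shifted and rotated box
print(gaussian_w2_sq(m1, S1, m2, S2))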
arxiv-663268
2409.19650
Grounding 3D Scene Affordance From Egocentric Interactions
<|reference_start|>Grounding 3D Scene Affordance From Egocentric Interactions: Grounding 3D scene affordance aims to locate interactive regions in 3D environments, which is crucial for embodied agents to interact intelligently with their surroundings. Most existing approaches achieve this by mapping semantics to 3D instances based on static geometric structure and visual appearance. This passive strategy limits the agent's ability to actively perceive and engage with the environment, making it reliant on predefined semantic instructions. In contrast, humans develop complex interaction skills by observing and imitating how others interact with their surroundings. To empower the model with such abilities, we introduce a novel task: grounding 3D scene affordance from egocentric interactions, where the goal is to identify the corresponding affordance regions in a 3D scene based on an egocentric video of an interaction. This task faces the challenges of spatial complexity and alignment complexity across multiple sources. To address these challenges, we propose the Egocentric Interaction-driven 3D Scene Affordance Grounding (Ego-SAG) framework, which utilizes interaction intent to guide the model in focusing on interaction-relevant sub-regions and aligns affordance features from different sources through a bidirectional query decoder mechanism. Furthermore, we introduce the Egocentric Video-3D Scene Affordance Dataset (VSAD), covering a wide range of common interaction types and diverse 3D environments to support this task. Extensive experiments on VSAD validate both the feasibility of the proposed task and the effectiveness of our approach.<|reference_end|>
arxiv
@article{liu2024grounding, title={Grounding 3D Scene Affordance From Egocentric Interactions}, author={Cuiyu Liu, Wei Zhai, Yuhang Yang, Hongchen Luo, Sen Liang, Yang Cao, and Zheng-Jun Zha}, journal={arXiv preprint arXiv:2409.19650}, year={2024}, archivePrefix={arXiv}, eprint={2409.19650}, primaryClass={cs.CV cs.AI} }
liu2024grounding
arxiv-663269
2409.19653
Data-Centric Design: Introducing An Informatics Domain Model And Core Data Ontology For Computational Systems
<|reference_start|>Data-Centric Design: Introducing An Informatics Domain Model And Core Data Ontology For Computational Systems: The Core Data Ontology (CDO) and the Informatics Domain Model represent a transformative approach to computational systems, shifting from traditional node-centric designs to a data-centric paradigm. This paper introduces a framework where data is categorized into four modalities: objects, events, concepts, and actions. This quadrimodal structure enhances data security, semantic interoperability, and scalability across distributed data ecosystems. The CDO offers a comprehensive ontology that supports AI development, role-based access control, and multimodal data management. By focusing on the intrinsic value of data, the Informatics Domain Model redefines system architectures to prioritize data security, provenance, and auditability, addressing vulnerabilities in current models. The paper outlines the methodology for developing the CDO, explores its practical applications in fields such as AI, robotics, and legal compliance, and discusses future directions for scalable, decentralized, and interoperable data ecosystems.<|reference_end|>
arxiv
@article{knowles2024data-centric, title={Data-Centric Design: Introducing An Informatics Domain Model And Core Data Ontology For Computational Systems}, author={Paul Knowles and Bart Gajderowicz and Keith Dugas}, journal={arXiv preprint arXiv:2409.19653}, year={2024}, doi={10.5121/csit.2024.141720}, archivePrefix={arXiv}, eprint={2409.19653}, primaryClass={cs.DC} }
knowles2024data-centric
arxiv-663270
2409.19655
Assessment and manipulation of latent constructs in pre-trained language models using psychometric scales
<|reference_start|>Assessment and manipulation of latent constructs in pre-trained language models using psychometric scales: Human-like personality traits have recently been discovered in large language models, raising the hypothesis that their (known and as yet undiscovered) biases conform with human latent psychological constructs. While large conversational models may be tricked into answering psychometric questionnaires, the latent psychological constructs of thousands of simpler transformers, trained for other tasks, cannot be assessed because appropriate psychometric methods are currently lacking. Here, we show how standard psychological questionnaires can be reformulated into natural language inference prompts, and we provide a code library to support the psychometric assessment of arbitrary models. We demonstrate, using a sample of 88 publicly available models, the existence of human-like mental health-related constructs (including anxiety, depression, and Sense of Coherence) which conform with standard theories in human psychology and show similar correlations and mitigation strategies. The ability to interpret and rectify the performance of language models by using psychological tools can boost the development of more explainable, controllable, and trustworthy models.<|reference_end|>
arxiv
@article{reuben2024assessment, title={Assessment and manipulation of latent constructs in pre-trained language models using psychometric scales}, author={Maor Reuben, Ortal Slobodin, Aviad Elyshar, Idan-Chaim Cohen, Orna Braun-Lewensohn, Odeya Cohen, Rami Puzis}, journal={arXiv preprint arXiv:2409.19655}, year={2024}, archivePrefix={arXiv}, eprint={2409.19655}, primaryClass={cs.CL cs.AI} }
reuben2024assessment
arxiv-663271
2409.19656
Multimodal Misinformation Detection by Learning from Synthetic Data with Multimodal LLMs
<|reference_start|>Multimodal Misinformation Detection by Learning from Synthetic Data with Multimodal LLMs: Detecting multimodal misinformation, especially in the form of image-text pairs, is crucial. Obtaining large-scale, high-quality real-world fact-checking datasets for training detectors is costly, leading researchers to use synthetic datasets generated by AI technologies. However, the generalizability of detectors trained on synthetic data to real-world scenarios remains unclear due to the distribution gap. To address this, we propose learning from synthetic data for detecting real-world multimodal misinformation through two model-agnostic data selection methods that match synthetic and real-world data distributions. Experiments show that our method enhances the performance of a small MLLM (13B) on real-world fact-checking datasets, enabling it to even surpass GPT-4V.<|reference_end|>
arxiv
@article{zeng2024multimodal, title={Multimodal Misinformation Detection by Learning from Synthetic Data with Multimodal LLMs}, author={Fengzhu Zeng, Wenqian Li, Wei Gao, Yan Pang}, journal={arXiv preprint arXiv:2409.19656}, year={2024}, archivePrefix={arXiv}, eprint={2409.19656}, primaryClass={cs.CL} }
zeng2024multimodal
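One model-agnostic way to "match synthetic to real" as the abstract describes: embed both pools, score each synthetic sample by its average RBF-kernel similarity to a small real-world reference set, and keep the top-k. The random embeddings below stand in for features from any frozen encoder; the paper's concrete selection criteria may differ.

import numpy as np

def rbf(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_top_k(synthetic_emb, real_emb, k):
    scores = rbf(synthetic_emb, real_emb).mean(axis=1)  # closeness to real pool
    return np.argsort(-scores)[:k]                      # indices to keep

rng = np.random.default_rng(0)
synthetic = rng.normal(size=(1000, 32))
real = rng.normal(loc=0.3, size=(50, 32))               # small real pool
keep = select_top_k(synthetic, real, k=200)
print(keep[:10])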
arxiv-663272
2409.19658
Dual-Attention Frequency Fusion at Multi-Scale for Joint Segmentation and Deformable Medical Image Registration
<|reference_start|>Dual-Attention Frequency Fusion at Multi-Scale for Joint Segmentation and Deformable Medical Image Registration: Deformable medical image registration is a crucial aspect of medical image analysis. In recent years, researchers have begun leveraging auxiliary tasks (such as supervised segmentation) to provide anatomical structure information for the primary registration task, addressing complex deformation challenges in medical image registration. In this work, we propose a multi-task learning framework based on multi-scale dual attention frequency fusion (DAFF-Net), which simultaneously achieves the segmentation masks and dense deformation fields in a single-step estimation. DAFF-Net consists of a global encoder, a segmentation decoder, and a coarse-to-fine pyramid registration decoder. During the registration decoding process, we design the dual attention frequency feature fusion (DAFF) module to fuse registration and segmentation features at different scales, fully leveraging the correlation between the two tasks. The DAFF module optimizes the features through global and local weighting mechanisms. During local weighting, it incorporates both high-frequency and low-frequency information to further capture the features that are critical for the registration task. With the aid of segmentation, the registration learns more precise anatomical structure information, thereby enhancing the anatomical consistency of the warped images after registration. Additionally, due to the DAFF module's outstanding ability to extract effective feature information, we extend its application to unsupervised registration. Extensive experiments on three public 3D brain magnetic resonance imaging (MRI) datasets demonstrate that the proposed DAFF-Net and its unsupervised variant outperform state-of-the-art registration methods across several evaluation metrics, demonstrating the effectiveness of our approach in deformable medical image registration.<|reference_end|>
arxiv
@article{zhou2024dual-attention, title={Dual-Attention Frequency Fusion at Multi-Scale for Joint Segmentation and Deformable Medical Image Registration}, author={Hongchao Zhou and Shunbo Hu}, journal={arXiv preprint arXiv:2409.19658}, year={2024}, archivePrefix={arXiv}, eprint={2409.19658}, primaryClass={cs.CV} }
zhou2024dual-attention
arxiv-663273
2409.19659
Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery
<|reference_start|>Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery: Recent advancements have shown promise in applying traditional Semi-Supervised Learning strategies to the task of Generalized Category Discovery (GCD). Typically, this involves a teacher-student framework in which the teacher imparts knowledge to the student to classify categories, even in the absence of explicit labels. Nevertheless, GCD presents unique challenges, particularly the absence of priors for new classes, which can lead to the teacher's misguidance and unsynchronized learning with the student, culminating in suboptimal outcomes. In our work, we delve into why traditional teacher-student designs falter in open-world generalized category discovery as compared to their success in closed-world semi-supervised learning. We identify inconsistent pattern learning across attention layers as the crux of this issue and introduce FlipClass, a method that dynamically updates the teacher to align with the student's attention, instead of maintaining a static teacher reference. Our teacher-student attention alignment strategy refines the teacher's focus based on student feedback from an energy perspective, promoting consistent pattern recognition and synchronized learning across old and new classes. Extensive experiments on a spectrum of benchmarks affirm that FlipClass significantly surpasses contemporary GCD methods, establishing new standards for the field.<|reference_end|>
arxiv
@article{lin2024flipped, title={Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery}, author={Haonan Lin, Wenbin An, Jiahao Wang, Yan Chen, Feng Tian, Mengmeng Wang, Guang Dai, Qianying Wang, Jingdong Wang}, journal={arXiv preprint arXiv:2409.19659}, year={2024}, archivePrefix={arXiv}, eprint={2409.19659}, primaryClass={cs.CV} }
lin2024flipped
arxiv-663274
2409.19660
All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation
<|reference_start|>All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation: Image coding for multi-task applications, catering to both human perception and machine vision, has been extensively investigated. Existing methods often rely on multiple task-specific encoder-decoder pairs, leading to high overhead of parameter and bitrate usage, or face challenges in multi-objective optimization under a unified representation, failing to achieve both performance and efficiency. To this end, we propose Multi-Path Aggregation (MPA) integrated into existing coding models for joint human-machine vision, unifying the feature representation with an all-in-one architecture. MPA employs a predictor to allocate latent features among task-specific paths based on feature importance, which varies across tasks, maximizing the utility of shared features while preserving task-specific features for subsequent refinement. Leveraging feature correlations, we develop a two-stage optimization strategy to alleviate multi-task performance degradation. With shared features reused, as little as 1.89% of the parameters are further augmented and fine-tuned for a specific task, which completely avoids extensive optimization of the entire model. Experimental results show that MPA achieves performance comparable to state-of-the-art methods in both task-specific and multi-objective optimization across human viewing and machine analysis tasks. Moreover, our all-in-one design supports seamless transitions between human- and machine-oriented reconstruction, enabling task-controllable interpretation without altering the unified model. Code is available at https://github.com/NJUVISION/MPA.<|reference_end|>
arxiv
@article{zhang2024all-in-one, title={All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation}, author={Xu Zhang, Peiyao Guo, Ming Lu, Zhan Ma}, journal={arXiv preprint arXiv:2409.19660}, year={2024}, archivePrefix={arXiv}, eprint={2409.19660}, primaryClass={cs.CV eess.IV} }
zhang2024all-in-one
arxiv-663275
2409.19663
Identifying Knowledge Editing Types in Large Language Models
<|reference_start|>Identifying Knowledge Editing Types in Large Language Models: Knowledge editing has emerged as an efficient technology for updating the knowledge of large language models (LLMs), attracting increasing attention in recent years. However, there is a lack of effective measures to prevent the malicious misuse of this technology, which could lead to harmful edits in LLMs. These malicious modifications could cause LLMs to generate toxic content, misleading users into inappropriate actions. In the face of this risk, we introduce a new task, Knowledge Editing Type Identification (KETI), aimed at identifying different types of edits in LLMs, thereby providing timely alerts to users when encountering illicit edits. As part of this task, we propose KETIBench, which includes five types of harmful edits covering the most popular toxic types, as well as one benign factual edit. We develop four classical classification models and three BERT-based models as baseline identifiers for both open-source and closed-source LLMs. Our experimental results, across 42 trials involving two models and three knowledge editing methods, demonstrate that all seven baseline identifiers achieve decent identification performance, highlighting the feasibility of identifying malicious edits in LLMs. Additional analyses reveal that the performance of the identifiers is independent of the reliability of the knowledge editing methods and exhibits cross-domain generalization, enabling the identification of edits from unknown sources. All data and code are available at https://github.com/xpq-tech/KETI. Warning: This paper contains examples of toxic text.<|reference_end|>
arxiv
@article{li2024identifying, title={Identifying Knowledge Editing Types in Large Language Models}, author={Xiaopeng Li, Shangwen Wang, Shezheng Song, Bin Ji, Huijun Liu, Shasha Li, Jun Ma, Jie Yu}, journal={arXiv preprint arXiv:2409.19663}, year={2024}, archivePrefix={arXiv}, eprint={2409.19663}, primaryClass={cs.CL cs.AI} }
li2024identifying
arxiv-663276
2409.19667
Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models
<|reference_start|>Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models: The need to analyze graphs is ubiquitous across various fields, from social networks to biological research and recommendation systems. Therefore, enabling the ability of large language models (LLMs) to process graphs is an important step toward more advanced general intelligence. However, current LLM benchmarks on graph analysis require models to directly reason over the prompts describing graph topology, and are thus limited to small graphs with only a few dozen nodes. In contrast, human experts typically write programs based on popular libraries to solve such tasks, and can thus handle graphs with different scales. To this end, a question naturally arises: can LLMs analyze graphs like professionals? In this paper, we introduce ProGraph, a manually crafted benchmark containing 3 categories of graph tasks. The benchmark expects solutions based on programming instead of directly reasoning over raw inputs. Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy. To bridge this gap, we propose LLM4Graph datasets, which include crawled documents and auto-generated codes based on 6 widely used graph libraries. By augmenting closed-source LLMs with document retrieval and fine-tuning open-source ones on the codes, we show 11-32% absolute improvements in their accuracies. Our results underscore that the capabilities of LLMs in handling structured data are still under-explored, and show the effectiveness of LLM4Graph in enhancing LLMs' proficiency in graph analysis. The benchmark, datasets and enhanced open-source models are available at https://github.com/BUPT-GAMMA/ProGraph.<|reference_end|>
arxiv
@article{li2024can, title={Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models}, author={Xin Li, Weize Chen, Qizhi Chu, Haopeng Li, Zhaojun Sun, Ran Li, Chen Qian, Yiwei Wei, Zhiyuan Liu, Chuan Shi, Maosong Sun, Cheng Yang}, journal={arXiv preprint arXiv:2409.19667}, year={2024}, archivePrefix={arXiv}, eprint={2409.19667}, primaryClass={cs.CL cs.AI} }
li2024can
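The kind of program ProGraph expects a model to write: instead of reasoning token-by-token over a textual edge list in the prompt, call an established graph library. networkx is one of the widely used libraries the abstract alludes to; the toy graph here is illustrative.

import networkx as nx

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)])

# Tasks that are trivial in code but hard to do in-context at scale:
print(nx.shortest_path(G, source=0, target=4))        # e.g. [0, 3, 4]
print(nx.pagerank(G))                                 # centrality per node
print(nx.number_connected_components(G))              # 1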
arxiv-663277
2409.19668
Local Search for Integer Quadratic Programming
<|reference_start|>Local Search for Integer Quadratic Programming: Integer Quadratic Programming (IQP) is an important problem in operations research. Local search is a powerful method for solving hard problems, but research on local search algorithms for IQP is still in its early stages. This paper develops an efficient local search solver for general IQP, called LS-IQCQP. We propose four new local search operators for IQP that can handle quadratic terms in the objective function, constraints or both. Furthermore, a two-mode local search algorithm is introduced, utilizing newly designed scoring functions to enhance the search process. Experiments are conducted on standard IQP benchmarks QPLIB and MINLPLIB, comparing LS-IQCQP with several state-of-the-art IQP solvers. Experimental results demonstrate that LS-IQCQP is competitive with the most powerful commercial solver Gurobi and outperforms other state-of-the-art solvers. Moreover, LS-IQCQP has established 6 new records for QPLIB and MINLPLIB open instances.<|reference_end|>
arxiv
@article{he2024local, title={Local Search for Integer Quadratic Programming}, author={Xiang He, Peng Lin, Shaowei Cai}, journal={arXiv preprint arXiv:2409.19668}, year={2024}, archivePrefix={arXiv}, eprint={2409.19668}, primaryClass={cs.AI} }
he2024local
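A bare-bones local search for an unconstrained integer quadratic objective, min x^T Q x + c^T x over a box, using a unit-move operator and a greedy scoring rule (accept a move only if the objective drops). LS-IQCQP's operators additionally handle quadratic constraints and alternate between two search modes; this sketch only conveys the flavor.

import numpy as np

def local_search(Q, c, lo, hi, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(lo, hi + 1, size=len(c))
    f = lambda v: v @ Q @ v + c @ v
    best = f(x)
    for _ in range(iters):
        i = rng.integers(len(x))
        step = rng.choice([-1, 1])                 # unit move on one variable
        y = x.copy()
        y[i] = np.clip(y[i] + step, lo, hi)
        if f(y) < best:                            # score: objective decrease
            x, best = y, f(y)
    return x, best

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
print(local_search(Q, np.array([-4.0, -2.0]), lo=-10, hi=10))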
arxiv-663278
2409.19671
Nonideality-aware training makes memristive networks more robust to adversarial attacks
<|reference_start|>Nonideality-aware training makes memristive networks more robust to adversarial attacks: Neural networks are now deployed in a wide number of areas from object classification to natural language systems. Implementations using analog devices like memristors promise better power efficiency, potentially bringing these applications to a greater number of environments. However, such systems suffer from more frequent device faults and overall, their exposure to adversarial attacks has not been studied extensively. In this work, we investigate how nonideality-aware training - a common technique to deal with physical nonidealities - affects adversarial robustness. We find that adversarial robustness is significantly improved, even with limited knowledge of what nonidealities will be encountered during test time.<|reference_end|>
arxiv
@article{joksas2024nonideality-aware, title={Nonideality-aware training makes memristive networks more robust to adversarial attacks}, author={Dovydas Joksas, Luis Mu\~noz-Gonz\'alez, Emil Lupu, Adnan Mehonic}, journal={arXiv preprint arXiv:2409.19671}, year={2024}, archivePrefix={arXiv}, eprint={2409.19671}, primaryClass={cs.ET cs.CR cs.LG} }
joksas2024nonideality-aware
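A compact rendering of nonideality-aware training: perturb the weights with multiplicative lognormal noise at every forward pass, mimicking memristor conductance disturbance, so the learned solution is flat with respect to device variation. The noise model and magnitude are illustrative assumptions, not the paper's calibrated device model.

import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    def __init__(self, d_in, d_out, sigma=0.1):
        super().__init__(d_in, d_out)
        self.sigma = sigma

    def forward(self, x):
        if self.training:   # inject the nonideality only during training
            noise = torch.exp(self.sigma * torch.randn_like(self.weight))
            return nn.functional.linear(x, self.weight * noise, self.bias)
        return super().forward(x)

net = nn.Sequential(NoisyLinear(784, 128), nn.ReLU(), NoisyLinear(128, 10))
x = torch.randn(32, 784)
print(net(x).shape)   # torch.Size([32, 10])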
arxiv-663279
2409.19672
Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding
<|reference_start|>Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding: Modeling and leveraging layout reading order in visually-rich documents (VrDs) is critical in document intelligence as it captures the rich structure semantics within documents. Previous works typically formulated layout reading order as a permutation of layout elements, i.e. a sequence containing all the layout elements. However, we argue that this formulation does not adequately convey the complete reading order information in the layout, which may potentially lead to performance decline in downstream VrD tasks. To address this issue, we propose to model the layout reading order as ordering relations over the set of layout elements, which have sufficient expressive capability for the complete reading order information. To enable empirical evaluation on methods towards the improved form of reading order prediction (ROP), we establish a comprehensive benchmark dataset including the reading order annotation as relations over layout elements, together with a relation-extraction-based method that outperforms previous methods. Moreover, to highlight the practical benefits of introducing the improved form of layout reading order, we propose a reading-order-relation-enhancing pipeline to improve model performance on any arbitrary VrD task by introducing additional reading order relation inputs. Comprehensive results demonstrate that the pipeline generally benefits downstream VrD tasks: (1) with utilizing the reading order relation information, the enhanced downstream models achieve SOTA results on both two task settings of the targeted dataset; (2) with utilizing the pseudo reading order information generated by the proposed ROP model, the performance of the enhanced models has improved across all three models and eight cross-domain VrD-IE/QA task settings without targeted optimization.<|reference_end|>
arxiv
@article{zhang2024modeling, title={Modeling Layout Reading Order as Ordering Relations for Visually-rich Document Understanding}, author={Chong Zhang, Yi Tu, Yixi Zhao, Chenshu Yuan, Huan Chen, Yue Zhang, Mingxu Chai, Ya Guo, Huijia Zhu, Qi Zhang, Tao Gui}, journal={arXiv preprint arXiv:2409.19672}, year={2024}, archivePrefix={arXiv}, eprint={2409.19672}, primaryClass={cs.CL cs.MM} }
zhang2024modeling
arxiv-663280
2409.19674
Alternating Maximization Algorithm for Mismatch Capacity with Oblivious Relaying
<|reference_start|>Alternating Maximization Algorithm for Mismatch Capacity with Oblivious Relaying: Reliable communication over a discrete memoryless channel with the help of a relay has attracted interest due to its widespread applications in practical scenarios. By considering the system with a mismatched decoder, previous works have provided optimization models to evaluate the mismatch capacity in these scenarios. These models, however, are difficult to solve due to the complicated structure of the mismatched decoding problem with information flowing in hops through the relay. Existing methods, such as grid search, become impractical as the alphabet grows, since they involve finding all roots of a nonlinear system. To address this problem, we reformulate the max-min optimization model as a consistent maximization form, by considering the dual form of the inner minimization problem and the Lagrangian with a fixed multiplier. Based on the proposed formulation, an alternating maximization framework is designed, which provides the closed-form solution with simple iterations in each step by introducing a suitable variable transformation. The effectiveness of the proposed approach is demonstrated by the simulations over practical scenarios, including Quaternary and Gaussian channels. Moreover, the simulation results for the transition probability also point to a promising application in quantizer design at the relay node.<|reference_end|>
arxiv
@article{li2024alternating, title={Alternating Maximization Algorithm for Mismatch Capacity with Oblivious Relaying}, author={Xinwei Li, Lingyi Chen, Shitong Wu, Huihui Wu, Hao Wu, and Wenyi Zhang}, journal={arXiv preprint arXiv:2409.19674}, year={2024}, archivePrefix={arXiv}, eprint={2409.19674}, primaryClass={cs.IT cs.NA math.IT math.NA} }
li2024alternating
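The classical Blahut-Arimoto iteration for channel capacity, shown as the prototype of the alternating-maximization pattern the paper adapts: fix one block of variables, update the other in closed form, and repeat. The mismatch-capacity objective and the relay constraints in the paper are more involved, but the loop structure is the same.

import numpy as np

def blahut_arimoto(P, iters=200):
    """P: (X, Y) channel transition matrix p(y|x). Returns capacity in nats."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])        # input distribution
    for _ in range(iters):
        q = p[:, None] * P                           # joint p(x) p(y|x)
        q = q / q.sum(axis=0, keepdims=True)         # posterior q(x|y), closed form
        r = np.exp((P * np.log(q + 1e-300)).sum(axis=1))
        p = r / r.sum()                              # input update, closed form
    D = (P * np.log(q + 1e-300)).sum(axis=1) - np.log(p + 1e-300)
    return float((p * D).sum())

bsc = np.array([[0.9, 0.1], [0.1, 0.9]])             # binary symmetric channel
print(blahut_arimoto(bsc))   # ~ ln(2) - H_b(0.1) in nats, about 0.368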
arxiv-663281
2409.19676
See Detail Say Clear: Towards Brain CT Report Generation via Pathological Clue-driven Representation Learning
<|reference_start|>See Detail Say Clear: Towards Brain CT Report Generation via Pathological Clue-driven Representation Learning: Brain CT report generation is important for aiding physicians in diagnosing cranial diseases. Recent studies concentrate on handling the consistency between visual and textual pathological features to improve the coherence of reports. However, there exist some challenges: 1) Redundant visual representation: massive irrelevant areas in 3D scans distract models from representing salient visual contexts. 2) Shifted semantic representation: the limited medical corpus makes it difficult for models to transfer learned textual representations to generative layers. This study introduces a Pathological Clue-driven Representation Learning (PCRL) model to build cross-modal representations based on pathological clues and naturally adapt them for accurate report generation. Specifically, we construct pathological clues from perspectives of segmented regions, pathological entities, and report themes, to fully grasp visual pathological patterns and learn cross-modal feature representations. To adapt the representations for the text generation task, we bridge the gap between representation learning and report generation by using a unified large language model (LLM) with task-tailored instructions. These crafted instructions enable the LLM to be flexibly fine-tuned across tasks and smoothly transfer the semantic representation for report generation. Experiments demonstrate that our method outperforms previous methods and achieves SoTA performance. Our code is available at "https://github.com/Chauncey-Jheng/PCRL-MRG".<|reference_end|>
arxiv
@article{zheng2024see, title={See Detail Say Clear: Towards Brain CT Report Generation via Pathological Clue-driven Representation Learning}, author={Chengxin Zheng, Junzhong Ji, Yanzhao Shi, Xiaodan Zhang, Liangqiong Qu}, journal={arXiv preprint arXiv:2409.19676}, year={2024}, archivePrefix={arXiv}, eprint={2409.19676}, primaryClass={cs.CV cs.AI} }
zheng2024see
arxiv-663282
2409.19679
SemiDDM-Weather: A Semi-supervised Learning Framework for All-in-one Adverse Weather Removal
<|reference_start|>SemiDDM-Weather: A Semi-supervised Learning Framework for All-in-one Adverse Weather Removal: Adverse weather removal aims to restore clear vision under adverse weather conditions. Existing methods are mostly tailored for specific weather types and rely heavily on extensive labeled data. In dealing with these two limitations, this paper presents a pioneering semi-supervised all-in-one adverse weather removal framework built on the teacher-student network with a Denoising Diffusion Model (DDM) as the backbone, termed SemiDDM-Weather. As for the design of DDM backbone in our SemiDDM-Weather, we adopt the SOTA Wavelet Diffusion Model-Wavediff with customized inputs and loss functions, devoted to facilitating the learning of many-to-one mapping distributions for efficient all-in-one adverse weather removal with limited label data. To mitigate the risk of misleading model training due to potentially inaccurate pseudo-labels generated by the teacher network in semi-supervised learning, we introduce quality assessment and content consistency constraints to screen the "optimal" outputs from the teacher network as the pseudo-labels, thus more effectively guiding the student network training with unlabeled data. Experimental results show that on both synthetic and real-world datasets, our SemiDDM-Weather consistently delivers high visual quality and superior adverse weather removal, even when compared to fully supervised competitors. Our code and pre-trained model are available at this repository.<|reference_end|>
arxiv
@article{long2024semiddm-weather:, title={SemiDDM-Weather: A Semi-supervised Learning Framework for All-in-one Adverse Weather Removal}, author={Fang Long, Wenkang Su, Zixuan Li, Lei Cai, Mingjie Li, Yuan-Gen Wang, and Xiaochun Cao}, journal={arXiv preprint arXiv:2409.19679}, year={2024}, archivePrefix={arXiv}, eprint={2409.19679}, primaryClass={cs.CV} }
long2024semiddm-weather:
arxiv-663283
2409.19680
Instruction Embedding: Latent Representations of Instructions Towards Task Identification
<|reference_start|>Instruction Embedding: Latent Representations of Instructions Towards Task Identification: Instruction data is crucial for improving the capability of Large Language Models (LLMs) to align with human-level performance. The recent LIMA study demonstrates that alignment is essentially a process where the model adapts instructions' interaction style or format to solve various tasks, leveraging pre-trained knowledge and skills. Therefore, for instructional data, the most important aspect is the task it represents, rather than the specific semantics and knowledge information. The latent representations of instructions play a role in instruction-related tasks such as data selection and demonstration retrieval. However, they are typically derived from text embeddings, which encompass overall semantic information that influences the representation of task categories. In this work, we introduce a new concept, instruction embedding, and construct the Instruction Embedding Benchmark (IEB) for its training and evaluation. Then, we propose a baseline Prompt-based Instruction Embedding (PIE) method to make the representations focus more on tasks. The evaluation of PIE, alongside other embedding methods on IEB with two designed tasks, demonstrates its superior performance in accurately identifying task categories. Moreover, the application of instruction embeddings in four downstream tasks showcases its effectiveness and suitability for instruction-related tasks.<|reference_end|>
arxiv
@article{li2024instruction, title={Instruction Embedding: Latent Representations of Instructions Towards Task Identification}, author={Yiwei Li, Jiayi Shi, Shaoxiong Feng, Peiwen Yuan, Xinglin Wang, Boyuan Pan, Heda Wang, Yao Hu, Kan Li}, journal={arXiv preprint arXiv:2409.19680}, year={2024}, archivePrefix={arXiv}, eprint={2409.19680}, primaryClass={cs.CL cs.AI} }
li2024instruction
arxiv-663284
2409.19681
Simple and Fast Distillation of Diffusion Models
<|reference_start|>Simple and Fast Distillation of Diffusion Models: Diffusion-based generative models have demonstrated their powerful performance across various tasks, but this comes at the cost of slow sampling speed. To achieve both efficient and high-quality synthesis, various distillation-based accelerated sampling methods have been developed recently. However, they generally require time-consuming fine-tuning with elaborate designs to achieve satisfactory performance in a specific number of function evaluations (NFE), making them difficult to employ in practice. To address this issue, we propose Simple and Fast Distillation (SFD) of diffusion models, which simplifies the paradigm used in existing methods and shortens their fine-tuning time by up to 1000$\times$. We begin with a vanilla distillation-based sampling method and boost its performance to the state of the art by identifying and addressing several small yet vital factors affecting the synthesis efficiency and quality. Our method can also achieve sampling with variable NFEs using a single distilled model. Extensive experiments demonstrate that SFD strikes a good balance between the sample quality and fine-tuning costs in the few-step image generation task. For example, SFD achieves 4.53 FID (NFE=2) on CIFAR-10 with only 0.64 hours of fine-tuning on a single NVIDIA A100 GPU. Our code is available at https://github.com/zju-pi/diff-sampler.<|reference_end|>
arxiv
@article{zhou2024simple, title={Simple and Fast Distillation of Diffusion Models}, author={Zhenyu Zhou, Defang Chen, Can Wang, Chun Chen, Siwei Lyu}, journal={arXiv preprint arXiv:2409.19681}, year={2024}, archivePrefix={arXiv}, eprint={2409.19681}, primaryClass={cs.CV} }
zhou2024simple
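A skeletal distillation step of the kind SFD simplifies: the frozen teacher integrates the probability-flow ODE with several small Euler steps, and the student is regressed to reach the same endpoint in a single step. The toy linear networks and plain Euler discretization are illustrative placeholders, not the paper's solver or schedule.

import torch
import torch.nn as nn

teacher = nn.Linear(8, 8)                     # stands in for a frozen denoiser
student = nn.Linear(8, 8)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def teacher_multistep(x, t0=1.0, t1=0.0, n=8):
    dt = (t1 - t0) / n
    with torch.no_grad():
        for _ in range(n):
            x = x + dt * teacher(x)           # many fine Euler steps
    return x

x_t = torch.randn(64, 8)                      # noisy latents at t = 1
target = teacher_multistep(x_t)               # accurate multi-step endpoint
pred = x_t + (0.0 - 1.0) * student(x_t)       # student: one big step
loss = nn.functional.mse_loss(pred, target)
loss.backward(); opt.step()
print(float(loss))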
arxiv-663285
2409.19684
MedViLaM: A multimodal large language model with advanced generalizability and explainability for medical data understanding and generation
<|reference_start|>MedViLaM: A multimodal large language model with advanced generalizability and explainability for medical data understanding and generation: Medicine is inherently multimodal and multitask, with diverse data modalities spanning text and imaging. However, most models in the medical field are unimodal and single-task, and lack good generalizability and explainability. In this study, we introduce MedViLaM, a unified vision-language model towards a generalist model for medical data that can flexibly encode and interpret various forms of medical data, including clinical language and imaging, all using the same set of model weights. To facilitate the creation of such a multi-task model, we have curated MultiMedBench, a comprehensive pretraining dataset and benchmark consisting of several distinct tasks, i.e., continuous question-answering, multi-label disease classification, disease localization, generation and summarization of radiology reports. MedViLaM demonstrates strong performance across all MultiMedBench tasks, frequently outpacing other generalist models by a significant margin. Additionally, we present instances of zero-shot generalization to new medical concepts and tasks, effective transfer learning across different tasks, and the emergence of zero-shot medical reasoning.<|reference_end|>
arxiv
@article{xu2024medvilam:, title={MedViLaM: A multimodal large language model with advanced generalizability and explainability for medical data understanding and generation}, author={Lijian Xu and Hao Sun and Ziyu Ni and Hongsheng Li and Shaoting Zhang}, journal={arXiv preprint arXiv:2409.19684}, year={2024}, archivePrefix={arXiv}, eprint={2409.19684}, primaryClass={cs.CV} }
xu2024medvilam:
arxiv-663286
2409.19685
Underwater Organism Color Enhancement via Color Code Decomposition, Adaptation and Interpolation
<|reference_start|>Underwater Organism Color Enhancement via Color Code Decomposition, Adaptation and Interpolation: Underwater images often suffer from quality degradation due to absorption and scattering effects. Most existing underwater image enhancement algorithms produce a single, fixed-color image, limiting user flexibility and application. To address this limitation, we propose a method called ColorCode, which enhances underwater images while offering a range of controllable color outputs. Our approach involves recovering an underwater image to a reference enhanced image through supervised training and decomposing it into color and content codes via self-reconstruction and cross-reconstruction. The color code is explicitly constrained to follow a Gaussian distribution, allowing for efficient sampling and interpolation during inference. ColorCode offers three key features: 1) color enhancement, producing an enhanced image with a fixed color; 2) color adaptation, enabling controllable adjustments of long-wavelength color components using guidance images; and 3) color interpolation, allowing for the smooth generation of multiple colors through continuous sampling of the color code. Quantitative and visual evaluations on popular and challenging benchmark datasets demonstrate the superiority of ColorCode over existing methods in providing diverse, controllable, and color-realistic enhancement results. The source code is available at https://github.com/Xiaofeng-life/ColorCode.<|reference_end|>
arxiv
@article{cong2024underwater, title={Underwater Organism Color Enhancement via Color Code Decomposition, Adaptation and Interpolation}, author={Xiaofeng Cong, Jing Zhang, Yeying Jin, Junming Hou, Yu Zhao, Jie Gui, James Tin-Yau Kwok, Yuan Yan Tang}, journal={arXiv preprint arXiv:2409.19685}, year={2024}, archivePrefix={arXiv}, eprint={2409.19685}, primaryClass={cs.CV} }
cong2024underwater
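A sketch of why the Gaussian-constrained color code enables interpolation: any convex path between two sampled codes stays in a high-density region of N(0, I), so decoding intermediate codes yields a smooth color sweep over a fixed content code. The untrained linear decoder and code sizes are stand-ins for ColorCode's generator.

import torch
import torch.nn as nn

decoder = nn.Linear(16 + 64, 3 * 8 * 8)   # (color code + content code) -> RGB

content = torch.randn(1, 64)                        # fixed content code
z_a, z_b = torch.randn(1, 16), torch.randn(1, 16)   # two color codes ~ N(0, I)

frames = []
for alpha in torch.linspace(0, 1, steps=5):
    z = (1 - alpha) * z_a + alpha * z_b   # continuous color interpolation
    rgb = decoder(torch.cat([z, content], dim=1)).view(1, 3, 8, 8)
    frames.append(rgb)
print(len(frames), frames[0].shape)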
arxiv-663287
2409.19686
Text-driven Human Motion Generation with Motion Masked Diffusion Model
<|reference_start|>Text-driven Human Motion Generation with Motion Masked Diffusion Model: Text-driven human motion generation is a multimodal task that synthesizes human motion sequences conditioned on natural language. It requires the model to satisfy textual descriptions under varying conditional inputs, while generating plausible and realistic human actions with high diversity. Existing diffusion model-based approaches have outstanding performance in the diversity and multimodality of generation. However, compared to autoregressive methods that train motion encoders before inference, diffusion methods fall short in fitting the distribution of human motion features, which leads to an unsatisfactory FID score. One insight is that the diffusion model lacks the ability to learn motion relations among spatio-temporal semantics through contextual reasoning. To solve this issue, in this paper, we propose the Motion Masked Diffusion Model (MMDM), a novel human motion masking mechanism for diffusion models that explicitly enhances their ability to learn spatio-temporal relationships from contextual joints among motion sequences. Besides, considering the complexity of human motion data with dynamic temporal characteristics and spatial structure, we design two mask modeling strategies: a time-frame mask and a body-part mask. During training, MMDM masks certain tokens in the motion embedding space. Then, the diffusion decoder is designed to learn the whole motion sequence from the masked embedding in each sampling step, which allows the model to recover a complete sequence from incomplete representations. Experiments on the HumanML3D and KIT-ML datasets demonstrate that our mask strategy is effective by balancing motion quality and text-motion consistency.<|reference_end|>
arxiv
@article{chen2024text-driven, title={Text-driven Human Motion Generation with Motion Masked Diffusion Model}, author={Xingyu Chen}, journal={arXiv preprint arXiv:2409.19686}, year={2024}, archivePrefix={arXiv}, eprint={2409.19686}, primaryClass={cs.CV} }
chen2024text-driven
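The two masking strategies in plain form: zero out whole time frames, or zero out the feature dimensions belonging to one body part across all frames. The (frames x joint-features) layout and the part index ranges are illustrative assumptions about the motion embedding, not MMDM's exact tokenization.

import numpy as np

def time_frame_mask(motion: np.ndarray, ratio: float, rng) -> np.ndarray:
    m = motion.copy()                              # motion: (T, D)
    drop = rng.choice(len(m), size=int(ratio * len(m)), replace=False)
    m[drop] = 0.0                                  # hide whole frames
    return m

def body_part_mask(motion: np.ndarray, part_slice: slice) -> np.ndarray:
    m = motion.copy()
    m[:, part_slice] = 0.0                         # hide one body part everywhere
    return m

rng = np.random.default_rng(0)
motion = rng.normal(size=(120, 66))                # 120 frames, 22 joints x 3
masked_t = time_frame_mask(motion, ratio=0.3, rng=rng)
masked_b = body_part_mask(motion, part_slice=slice(42, 51))  # e.g. one arm
print((masked_t == 0).mean(), (masked_b == 0).mean())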
arxiv-663288
2409.19688
Machine Learning for Raman Spectroscopy-based Cyber-Marine Fish Biochemical Composition Analysis
<|reference_start|>Machine Learning for Raman Spectroscopy-based Cyber-Marine Fish Biochemical Composition Analysis: The rapid and accurate detection of biochemical compositions in fish is a crucial real-world task that facilitates optimal utilization and extraction of high-value products in the seafood industry. Raman spectroscopy provides a promising solution for quickly and non-destructively analyzing the biochemical composition of fish by associating Raman spectra with biochemical reference data using machine learning regression models. This paper investigates different regression models to address this task and proposes a new design of Convolutional Neural Networks (CNNs) for jointly predicting water, protein, and lipids yield. To the best of our knowledge, we are the first to conduct a successful study employing CNNs to analyze the biochemical composition of fish based on a very small Raman spectroscopic dataset. Our approach combines a tailored CNN architecture with the comprehensive data preparation procedure, effectively mitigating the challenges posed by extreme data scarcity. The results demonstrate that our CNN can significantly outperform two state-of-the-art CNN models and multiple traditional machine learning models, paving the way for accurate and automated analysis of fish biochemical composition.<|reference_end|>
arxiv
@article{zhou2024machine, title={Machine Learning for Raman Spectroscopy-based Cyber-Marine Fish Biochemical Composition Analysis}, author={Yun Zhou, Gang Chen, Bing Xue, Mengjie Zhang, Jeremy S. Rooney, Kirill Lagutin, Andrew MacKenzie, Keith C. Gordon, Daniel P. Killeen}, journal={arXiv preprint arXiv:2409.19688}, year={2024}, archivePrefix={arXiv}, eprint={2409.19688}, primaryClass={cs.LG cs.AI eess.SP} }
zhou2024machine
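A small 1-D CNN of the kind the paper tailors to scarce Raman data: a few convolution blocks over the spectrum followed by a 3-way regression head that jointly predicts water, protein, and lipid yield. Channel widths and the 900-point spectrum length are illustrative, not the paper's exact architecture.

import torch
import torch.nn as nn

class RamanCNN(nn.Module):
    def __init__(self, n_points=900):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(16 * (n_points // 16), 3)  # water, protein, lipids

    def forward(self, spectrum):              # spectrum: (B, 1, n_points)
        h = self.features(spectrum).flatten(1)
        return self.head(h)

model = RamanCNN()
print(model(torch.randn(4, 1, 900)).shape)    # torch.Size([4, 3])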
arxiv-663289
2409.19689
InfantCryNet: A Data-driven Framework for Intelligent Analysis of Infant Cries
<|reference_start|>InfantCryNet: A Data-driven Framework for Intelligent Analysis of Infant Cries: Understanding the meaning of infant cries is a significant challenge for young parents in caring for their newborns. The presence of background noise and the lack of labeled data present practical challenges in developing systems that can detect crying and analyze its underlying reasons. In this paper, we present a novel data-driven framework, "InfantCryNet," for accomplishing these tasks. To address the issue of data scarcity, we employ pre-trained audio models to incorporate prior knowledge into our model. We propose the use of statistical pooling and multi-head attention pooling techniques to extract features more effectively. Additionally, knowledge distillation and model quantization are applied to enhance model efficiency and reduce the model size, better supporting industrial deployment in mobile devices. Experiments on real-life datasets demonstrate the superior performance of the proposed framework, outperforming state-of-the-art baselines by 4.4% in classification accuracy. The model compression effectively reduces the model size by 7% without compromising performance and by up to 28% with only an 8% decrease in accuracy, offering practical insights for model selection and system design.<|reference_end|>
arxiv
@article{hong2024infantcrynet:, title={InfantCryNet: A Data-driven Framework for Intelligent Analysis of Infant Cries}, author={Mengze Hong, Chen Jason Zhang, Lingxiao Yang, Yuanfeng Song, Di Jiang}, journal={arXiv preprint arXiv:2409.19689}, year={2024}, archivePrefix={arXiv}, eprint={2409.19689}, primaryClass={cs.SD cs.AI cs.CV cs.LG eess.AS} }
hong2024infantcrynet:
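Statistical pooling in its standard form: summarize frame-level features by their mean and standard deviation so a fixed-size vector feeds the classifier regardless of cry duration. Feature sizes are illustrative; the paper pairs this with multi-head attention pooling.

import torch
import torch.nn as nn

class StatPool(nn.Module):
    def forward(self, frames):                    # frames: (B, T, D)
        mu = frames.mean(dim=1)
        sigma = frames.std(dim=1)
        return torch.cat([mu, sigma], dim=-1)     # (B, 2D)

pool = StatPool()
frames = torch.randn(8, 300, 128)                 # 300 frames of features
print(pool(frames).shape)                         # torch.Size([8, 256])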
arxiv-663290
2409.19690
Neural-Polyptych: Content Controllable Painting Recreation for Diverse Genres
<|reference_start|>Neural-Polyptych: Content Controllable Painting Recreation for Diverse Genres: To bridge the gap between artists and non-specialists, we present a unified framework, Neural-Polyptych, to facilitate the creation of expansive, high-resolution paintings by seamlessly incorporating interactive hand-drawn sketches with fragments from original paintings. We have designed a multi-scale GAN-based architecture to decompose the generation process into two parts, each responsible for identifying global and local features. To enhance the fidelity of semantic details generated from users' sketched outlines, we introduce a Correspondence Attention module utilizing our Reference Bank strategy. This ensures the creation of high-quality, intricately detailed elements within the artwork. The final result is achieved by carefully blending these local elements while preserving coherent global consistency. Consequently, this methodology enables the production of digital paintings at megapixel scale, accommodating diverse artistic expressions and enabling users to recreate content in a controlled manner. We validate our approach on diverse genres of both Eastern and Western paintings. Applications such as large painting extension, texture shuffling, genre switching, mural art restoration, and recomposition can all be built on our framework.<|reference_end|>
arxiv
@article{zhao2024neural-polyptych:, title={Neural-Polyptych: Content Controllable Painting Recreation for Diverse Genres}, author={Yiming Zhao, Dewen Guo, Zhouhui Lian, Yue Gao, Jianhong Han, Jie Feng, Guoping Wang, Bingfeng Zhou, Sheng Li}, journal={Computational Visual Media, 2024}, year={2024}, archivePrefix={arXiv}, eprint={2409.19690}, primaryClass={cs.CV cs.GR} }
zhao2024neural-polyptych:
arxiv-663291
2409.19691
CERD: A Comprehensive Chinese Rhetoric Dataset for Rhetorical Understanding and Generation in Essays
<|reference_start|>CERD: A Comprehensive Chinese Rhetoric Dataset for Rhetorical Understanding and Generation in Essays: Existing rhetorical understanding and generation datasets or corpora primarily focus on single coarse-grained categories or fine-grained categories, neglecting the common interrelations between different rhetorical devices by treating them as independent sub-tasks. In this paper, we propose the Chinese Essay Rhetoric Dataset (CERD), consisting of 4 commonly used coarse-grained categories including metaphor, personification, hyperbole and parallelism and 23 fine-grained categories across both form and content levels. CERD is a manually annotated and comprehensive Chinese rhetoric dataset with five interrelated sub-tasks. Unlike previous work, our dataset aids in understanding various rhetorical devices, recognizing corresponding rhetorical components, and generating rhetorical sentences under given conditions, thereby improving the author's writing proficiency and language usage skills. Extensive experiments are conducted to demonstrate the interrelations between multiple tasks in CERD, as well as to establish a benchmark for future research on rhetoric. The experimental results indicate that Large Language Models achieve the best performance across most tasks, and jointly fine-tuning with multiple tasks further enhances performance.<|reference_end|>
arxiv
@article{liu2024cerd:, title={CERD: A Comprehensive Chinese Rhetoric Dataset for Rhetorical Understanding and Generation in Essays}, author={Nuowei Liu, Xinhao Chen, Hongyi Wu, Changzhi Sun, Man Lan, Yuanbin Wu, Xiaopeng Bai, Shaoguang Mao, Yan Xia}, journal={arXiv preprint arXiv:2409.19691}, year={2024}, archivePrefix={arXiv}, eprint={2409.19691}, primaryClass={cs.CL} }
liu2024cerd:
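The joint fine-tuning result reported above can be pictured with a minimal two-head sketch: one classifier for the 4 coarse-grained categories and one for the 23 fine-grained ones, trained with a summed loss. The encoder dimension, the loss weight, and the head wiring are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class MultiTaskRhetoricHeads(nn.Module):
        def __init__(self, encoder_dim=768, n_coarse=4, n_fine=23):
            super().__init__()
            self.coarse_head = nn.Linear(encoder_dim, n_coarse)  # metaphor, personification, ...
            self.fine_head = nn.Linear(encoder_dim, n_fine)      # 23 fine-grained categories

        def forward(self, sentence_emb):
            return self.coarse_head(sentence_emb), self.fine_head(sentence_emb)

    model = MultiTaskRhetoricHeads()
    loss_fn = nn.CrossEntropyLoss()
    emb = torch.randn(8, 768)                       # stand-in for encoder outputs
    coarse_y = torch.randint(0, 4, (8,))
    fine_y = torch.randint(0, 23, (8,))
    coarse_logits, fine_logits = model(emb)
    # Summing the sub-task losses lets the heads share rhetorical signal.
    loss = loss_fn(coarse_logits, coarse_y) + 0.5 * loss_fn(fine_logits, fine_y)
    loss.backward()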
arxiv-663292
2409.19696
Vision-Language Models are Strong Noisy Label Detectors
<|reference_start|>Vision-Language Models are Strong Noisy Label Detectors: Recent research on fine-tuning vision-language models has demonstrated impressive performance in various downstream tasks. However, the challenge of obtaining accurately labeled data in real-world applications poses a significant obstacle during the fine-tuning process. To address this challenge, this paper presents a Denoising Fine-Tuning framework, called DeFT, for adapting vision-language models. DeFT utilizes the robust alignment of textual and visual features pre-trained on millions of auxiliary image-text pairs to sieve out noisy labels. The proposed framework establishes a noisy label detector by learning positive and negative textual prompts for each class. The positive prompt seeks to reveal distinctive features of the class, while the negative prompt serves as a learnable threshold for separating clean and noisy samples. We employ parameter-efficient fine-tuning for the adaptation of a pre-trained visual encoder to promote its alignment with the learned textual prompts. As a general framework, DeFT can seamlessly fine-tune many pre-trained models to downstream tasks by utilizing carefully selected clean samples. Experimental results on seven synthetic and real-world noisy datasets validate the effectiveness of DeFT in both noisy label detection and image classification.<|reference_end|>
arxiv
@article{wei2024vision-language, title={Vision-Language Models are Strong Noisy Label Detectors}, author={Tong Wei and Hao-Tian Li and Chun-Shu Li and Jiang-Xin Shi and Yu-Feng Li and Min-Ling Zhang}, journal={arXiv preprint arXiv:2409.19696}, year={2024}, archivePrefix={arXiv}, eprint={2409.19696}, primaryClass={cs.LG cs.CV} }
wei2024vision-language
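A minimal sketch of the clean/noisy test described in the DeFT abstract: a sample counts as clean when its image feature matches the class's positive prompt more strongly than the learnable negative (threshold) prompt. The 512-dimensional CLIP-style feature space and the random placeholders are assumptions.

    import torch
    import torch.nn.functional as F

    def is_clean(img_feat, pos_prompt_emb, neg_prompt_emb):
        # Clean if similarity to the positive prompt beats the negative threshold prompt.
        img = F.normalize(img_feat, dim=-1)
        sim_pos = img @ F.normalize(pos_prompt_emb, dim=-1)
        sim_neg = img @ F.normalize(neg_prompt_emb, dim=-1)
        return bool(sim_pos > sim_neg)

    feat = torch.randn(512)  # placeholder image feature in a shared text-image space
    print(is_clean(feat, torch.randn(512), torch.randn(512)))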
arxiv-663293
2409.19700
2D-TPE: Two-Dimensional Positional Encoding Enhances Table Understanding for Large Language Models
<|reference_start|>2D-TPE: Two-Dimensional Positional Encoding Enhances Table Understanding for Large Language Models: Tables are ubiquitous across various domains for concisely representing structured information. Empowering large language models (LLMs) to reason over tabular data represents an actively explored direction. However, since typical LLMs only support one-dimensional~(1D) inputs, existing methods often flatten the two-dimensional~(2D) table structure into a sequence of tokens, which can severely disrupt the spatial relationships and result in an inevitable loss of vital contextual information. In this paper, we first empirically demonstrate the detrimental impact of such flattening operations on the performance of LLMs in capturing the spatial information of tables through two elaborate proxy tasks. Subsequently, we introduce a simple yet effective positional encoding method, termed ``2D-TPE'' (Two-Dimensional Table Positional Encoding), to address this challenge. 2D-TPE enables each attention head to dynamically select a permutation order of tokens within the context for attending to them, where each permutation represents a distinct traversal mode for the table, such as column-wise or row-wise traversal. 2D-TPE effectively mitigates the risk of losing essential spatial information while preserving computational efficiency, thus better preserving the table structure. Extensive experiments across five benchmarks demonstrate that 2D-TPE outperforms strong baselines, underscoring the importance of preserving the table structure for accurate table comprehension. Comprehensive analysis further reveals the substantially better scalability of 2D-TPE to large tables than baselines.<|reference_end|>
arxiv
@article{li20242d-tpe:, title={2D-TPE: Two-Dimensional Positional Encoding Enhances Table Understanding for Large Language Models}, author={Jia-Nan Li, Jian Guan, Wei Wu, Zhengtao Yu, Rui Yan}, journal={arXiv preprint arXiv:2409.19700}, year={2024}, archivePrefix={arXiv}, eprint={2409.19700}, primaryClass={cs.CL} }
li20242d-tpe:
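The two traversal modes named in the abstract (row-wise and column-wise) can be pictured with a small sketch that assigns 1D positions over a rows x cols table in either order; this is a simplification for intuition, not the 2D-TPE implementation.

    import numpy as np

    def traversal_positions(rows, cols, mode="row"):
        # Return a (rows, cols) grid of 1D token positions for one traversal mode.
        idx = np.arange(rows * cols)
        if mode == "row":                    # row-major: left-to-right, top-down
            return idx.reshape(rows, cols)
        return idx.reshape(cols, rows).T     # column-major: top-down, left-to-right

    print(traversal_positions(2, 3, "row"))  # [[0 1 2] [3 4 5]]
    print(traversal_positions(2, 3, "col"))  # [[0 2 4] [1 3 5]]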
arxiv-663294
2409.19701
Hyperspectral Unmixing of Agricultural Images taken from UAV Using Adapted U-Net Architecture
<|reference_start|>Hyperspectral Unmixing of Agricultural Images taken from UAV Using Adapted U-Net Architecture: Hyperspectral unmixing is an algorithm that extracts material (usually called endmember) data, along with their abundances, from the pixels of a hyperspectral data cube. Due to the lower spatial resolution of hyperspectral sensors, the data in each pixel may contain mixed information from multiple endmembers. In this paper, we create a hyperspectral unmixing dataset from blueberry field data gathered by a hyperspectral camera mounted on a UAV. We also propose a hyperspectral unmixing algorithm based on the U-Net network architecture to achieve more accurate unmixing results on existing and newly created hyperspectral unmixing datasets.<|reference_end|>
arxiv
@article{paura2024hyperspectral, title={Hyperspectral Unmixing of Agricultural Images taken from UAV Using Adapted U-Net Architecture}, author={Vytautas Paura, Virginijus Marcinkevi\v{c}ius}, journal={arXiv preprint arXiv:2409.19701}, year={2024}, archivePrefix={arXiv}, eprint={2409.19701}, primaryClass={eess.IV cs.CV} }
paura2024hyperspectral
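For intuition, the linear mixing model that unmixing inverts can be written out in a few lines: a pixel spectrum is an abundance-weighted sum of endmember spectra, with non-negative abundances summing to one. All values below are synthetic stand-ins, not data from the blueberry-field dataset.

    import numpy as np

    bands, n_endmembers = 100, 3
    endmembers = np.abs(np.random.rand(n_endmembers, bands))  # (K, B) endmember spectra
    abundances = np.array([0.6, 0.3, 0.1])                    # non-negative, sum-to-one
    assert np.isclose(abundances.sum(), 1.0) and (abundances >= 0).all()

    pixel = abundances @ endmembers                           # (B,) mixed spectrum
    observed = pixel + 0.01 * np.random.randn(bands)          # add sensor noise
    print(observed.shape)  # (100,)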
arxiv-663295
2409.19702
RNG: Relightable Neural Gaussians
<|reference_start|>RNG: Relightable Neural Gaussians: 3D Gaussian Splatting (3DGS) has shown its impressive power in novel view synthesis. However, creating relightable 3D assets, especially for objects with ill-defined shapes (e.g., fur), is still a challenging task. For these scenes, the decomposition between the light, geometry, and material is more ambiguous, as neither the surface constraints nor the analytical shading model holds. To address this issue, we propose RNG, a novel representation of relightable neural Gaussians, enabling the relighting of objects with both hard surfaces and fluffy boundaries. We avoid any assumptions in the shading model but maintain feature vectors, which can be further decoded by an MLP into colors, in each Gaussian point. Following prior work, we utilize a point light to reduce the ambiguity and introduce a shadow-aware condition to the network. We additionally propose a depth refinement network to help the shadow computation under the 3DGS framework, leading to better shadow effects under point lights. Furthermore, to avoid the blurriness brought by the alpha-blending in 3DGS, we design a hybrid forward-deferred optimization strategy. As a result, we achieve about $20\times$ faster training and about $600\times$ faster rendering than prior work based on neural radiance fields, with $60$ frames per second on an RTX4090.<|reference_end|>
arxiv
@article{fan2024rng:, title={RNG: Relightable Neural Gaussians}, author={Jiahui Fan, Fujun Luan, Jian Yang, Milo\v{s} Ha\v{s}an, Beibei Wang}, journal={arXiv preprint arXiv:2409.19702}, year={2024}, archivePrefix={arXiv}, eprint={2409.19702}, primaryClass={cs.CV cs.GR} }
fan2024rng:
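The per-Gaussian decoding step mentioned in the abstract can be sketched as an MLP mapping a feature vector, a point-light direction, and a shadow-aware condition to an RGB color. The layer sizes and the scalar shadow input are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    decoder = nn.Sequential(
        nn.Linear(32 + 3 + 1, 64),  # per-Gaussian feature + light direction + shadow condition
        nn.ReLU(),
        nn.Linear(64, 3),
        nn.Sigmoid(),               # RGB in [0, 1]
    )

    feats = torch.randn(1000, 32)                          # one feature vector per Gaussian
    light_dir = torch.tensor([0.0, 0.0, 1.0]).expand(1000, 3)
    shadow = torch.rand(1000, 1)                           # shadow-aware condition
    rgb = decoder(torch.cat([feats, light_dir, shadow], dim=-1))
    print(rgb.shape)  # torch.Size([1000, 3])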
arxiv-663296
2409.19703
Applying the Lower-Biased Teacher Model in Semi-Supervised Object Detection
<|reference_start|>Applying the Lower-Biased Teacher Model in Semi-Supervised Object Detection: I present the Lower-Biased Teacher model, an enhancement of the Unbiased Teacher model, specifically tailored for semi-supervised object detection tasks. The primary innovation of this model is the integration of a localization loss into the teacher model, which significantly improves the accuracy of pseudo-label generation. By addressing key issues such as class imbalance and the precision of bounding boxes, the Lower-Biased Teacher model demonstrates superior performance in object detection tasks. Extensive experiments on multiple semi-supervised object detection datasets show that the Lower-Biased Teacher model not only reduces the pseudo-labeling bias caused by class imbalances but also mitigates errors arising from incorrect bounding boxes. As a result, the model achieves higher mAP scores and more reliable detection outcomes compared to existing methods. This research underscores the importance of accurate pseudo-label generation and provides a robust framework for future advancements in semi-supervised learning for object detection.<|reference_end|>
arxiv
@article{wang2024applying, title={Applying the Lower-Biased Teacher Model in Semi-Supervised Object Detection}, author={Shuang Wang}, journal={arXiv preprint arXiv:2409.19703}, year={2024}, archivePrefix={arXiv}, eprint={2409.19703}, primaryClass={cs.CV} }
wang2024applying
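In the spirit of the localization-aware pseudo-labels described above, a hedged sketch of the filtering step: keep a teacher detection only if both its class confidence and a localization-quality score clear a threshold. The dictionary layout and threshold values are assumptions, not the paper's.

    def filter_pseudo_labels(detections, cls_thresh=0.7, loc_thresh=0.5):
        # Keep detections confident in BOTH classification and localization.
        return [
            d for d in detections
            if d["cls_score"] >= cls_thresh and d["loc_score"] >= loc_thresh
        ]

    preds = [
        {"box": (10, 10, 50, 50), "cls_score": 0.9, "loc_score": 0.8},  # kept
        {"box": (0, 0, 20, 20), "cls_score": 0.9, "loc_score": 0.2},    # poor box, dropped
        {"box": (5, 5, 30, 30), "cls_score": 0.4, "loc_score": 0.9},    # low confidence, dropped
    ]
    print(len(filter_pseudo_labels(preds)))  # 1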
arxiv-663297
2409.19708
A Certified Robust Watermark For Large Language Models
<|reference_start|>A Certified Robust Watermark For Large Language Models: The effectiveness of watermark algorithms in AI-generated text identification has garnered significant attention. Concurrently, an increasing number of watermark algorithms have been proposed to enhance the robustness against various watermark attacks. However, these watermark algorithms remain susceptible to adaptive or unseen attacks. To address this issue, we propose, to the best of our knowledge, the first certified robust watermark algorithm for large language models based on randomized smoothing, which can provide provable guarantees for watermarked text. Specifically, we utilize two different models for watermark generation and detection, respectively, and add Gaussian and uniform noise in the embedding and permutation spaces, respectively, during the training and inference stages of the watermark detector to enhance its certified robustness and derive a certified radius. To evaluate the empirical and certified robustness of our watermark algorithm, we conducted comprehensive experiments. The results indicate that our watermark algorithm shows comparable performance to baseline algorithms while deriving substantial certified robustness, meaning that our watermark cannot be removed even under significant alterations.<|reference_end|>
arxiv
@article{feng2024a, title={A Certified Robust Watermark For Large Language Models}, author={Xianheng Feng, Jian Liu, Kui Ren, Chun Chen}, journal={arXiv preprint arXiv:2409.19708}, year={2024}, archivePrefix={arXiv}, eprint={2409.19708}, primaryClass={cs.CR} }
feng2024a
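The randomized-smoothing recipe behind the certified radius can be sketched generically: estimate the probability p that the detector still answers "watermarked" under Gaussian noise, then certify a radius of sigma * Phi^{-1}(p). This is the standard construction, not the paper's exact detector; in practice p should be a lower confidence bound rather than a point estimate.

    import numpy as np
    from scipy.stats import norm

    def certified_radius(detect_fn, embedding, sigma=0.5, n_samples=1000):
        votes = sum(
            int(detect_fn(embedding + sigma * np.random.randn(*embedding.shape)))
            for _ in range(n_samples)
        )
        p = votes / n_samples          # estimated probability of "watermarked"
        if p <= 0.5:
            return 0.0                 # cannot certify anything
        return sigma * norm.ppf(p)     # L2 radius with a provable guarantee

    detector = lambda e: e.mean() > 0  # toy stand-in for a watermark detector
    print(certified_radius(detector, np.ones(64)))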
arxiv-663298
2409.19709
Obstacle-Aware Quadrupedal Locomotion With Resilient Multi-Modal Reinforcement Learning
<|reference_start|>Obstacle-Aware Quadrupedal Locomotion With Resilient Multi-Modal Reinforcement Learning: Quadrupedal robots hold promising potential for applications in navigating cluttered environments with resilience akin to their animal counterparts. However, their floating base configuration makes them vulnerable to real-world uncertainties, yielding substantial challenges in their locomotion control. Deep reinforcement learning has become one of the plausible alternatives for realizing a robust locomotion controller. However, the approaches that rely solely on proprioception sacrifice collision-free locomotion because they require front-feet contact to detect the presence of stairs to adapt the locomotion gait. Meanwhile, incorporating exteroception necessitates a precisely modeled map observed by exteroceptive sensors over a period of time. Therefore, this work proposes a novel method to fuse proprioception and exteroception featuring a resilient multi-modal reinforcement learning. The proposed method yields a controller that showcases agile locomotion performance on a quadrupedal robot over a myriad of real-world courses, including rough terrains, steep slopes, and high-rise stairs, while retaining its robustness against out-of-distribution situations.<|reference_end|>
arxiv
@article{nahrendra2024obstacle-aware, title={Obstacle-Aware Quadrupedal Locomotion With Resilient Multi-Modal Reinforcement Learning}, author={I Made Aswin Nahrendra, Byeongho Yu, Minho Oh, Dongkyu Lee, Seunghyun Lee, Hyeonwoo Lee, Hyungtae Lim, Hyun Myung}, journal={arXiv preprint arXiv:2409.19709}, year={2024}, archivePrefix={arXiv}, eprint={2409.19709}, primaryClass={cs.RO cs.SY eess.SY} }
nahrendra2024obstacle-aware
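The proprioception/exteroception fusion described above follows a common pattern: encode each modality separately, then concatenate the latents for the locomotion policy. The dimensions below are placeholders for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class MultiModalEncoder(nn.Module):
        def __init__(self, proprio_dim=48, extero_dim=128, latent=64):
            super().__init__()
            self.proprio_net = nn.Sequential(nn.Linear(proprio_dim, latent), nn.ELU())
            self.extero_net = nn.Sequential(nn.Linear(extero_dim, latent), nn.ELU())

        def forward(self, proprio, extero):
            # The concatenated latent is consumed by the RL policy downstream.
            return torch.cat([self.proprio_net(proprio), self.extero_net(extero)], dim=-1)

    enc = MultiModalEncoder()
    z = enc(torch.randn(1, 48), torch.randn(1, 128))  # joint states + terrain scan stand-ins
    print(z.shape)  # torch.Size([1, 128])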
arxiv-663299
2409.19710
A multimodal LLM for the non-invasive decoding of spoken text from brain recordings
<|reference_start|>A multimodal LLM for the non-invasive decoding of spoken text from brain recordings: Brain-related research topics in artificial intelligence have recently gained popularity, particularly due to the expansion of what multimodal architectures can do from computer vision to natural language processing. Our main goal in this work is to explore the possibilities and limitations of these architectures in spoken text decoding from non-invasive fMRI recordings. Contrary to vision and textual data, fMRI data represent a complex modality due to the variety of brain scanners, which implies (i) the variety of the recorded signal formats, (ii) the low resolution and noise of the raw signals, and (iii) the scarcity of pretrained models that can be leveraged as foundation models for generative learning. These points make the problem of the non-invasive decoding of text from fMRI recordings very challenging. In this paper, we propose an end-to-end multimodal LLM for decoding spoken text from fMRI signals. The proposed architecture is founded on (i) an encoder derived from a specific transformer, incorporating an augmented embedding layer and a better-adjusted attention mechanism than that present in the state of the art, and (ii) a frozen large language model adapted to align the embedding of the input text and the encoded embedding of brain activity to decode the output text. A benchmark is performed on a corpus consisting of a set of human-human and human-robot interactions where fMRI and conversational signals are recorded synchronously. The obtained results are very promising, as our proposal outperforms the evaluated models and is able to generate text that more accurately captures the semantics present in the ground truth. The implementation code is provided at https://github.com/Hmamouche/brain_decode.<|reference_end|>
arxiv
@article{hmamouche2024a, title={A multimodal LLM for the non-invasive decoding of spoken text from brain recordings}, author={Youssef Hmamouche, Ismail Chihab, Lahoucine Kdouri, Amal El Fallah Seghrouchni}, journal={arXiv preprint arXiv:2409.19710}, year={2024}, archivePrefix={arXiv}, eprint={2409.19710}, primaryClass={q-bio.NC cs.CL cs.LG cs.SD eess.AS eess.SP q-bio.QM} }
hmamouche2024a
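The alignment between brain-activity embeddings and a frozen LLM can be pictured as a learned projection that maps the fMRI encoder output into a few "prefix" vectors in the LLM's token-embedding space. The dimensions and the linear projector are assumptions for illustration, not the paper's design.

    import torch
    import torch.nn as nn

    brain_dim, llm_dim, n_prefix = 256, 4096, 8
    projector = nn.Linear(brain_dim, n_prefix * llm_dim)   # the only trained alignment piece here

    brain_emb = torch.randn(1, brain_dim)                  # output of the fMRI encoder
    prefix = projector(brain_emb).view(1, n_prefix, llm_dim)

    text_emb = torch.randn(1, 12, llm_dim)                 # embedded prompt tokens
    llm_input = torch.cat([prefix, text_emb], dim=1)       # fed to the frozen LLM
    print(llm_input.shape)  # torch.Size([1, 20, 4096])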
arxiv-663300
2409.19713
Generating peak-aware pseudo-measurements for low-voltage feeders using metadata of distribution system operators
<|reference_start|>Generating peak-aware pseudo-measurements for low-voltage feeders using metadata of distribution system operators: Distribution system operators (DSOs) must cope with new challenges such as the reconstruction of distribution grids along climate neutrality pathways or the ability to manage and control consumption and generation in the grid. In order to meet these challenges, measurements within the distribution grid often form the basis for DSOs. Hence, the absence of measurement devices in many low-voltage (LV) grids is an urgent problem. In order to overcome this problem, we present an approach to estimate pseudo-measurements for non-measured LV feeders based on the metadata of the respective feeder using regression models. The feeder metadata comprise information about the number of grid connection points, the installed power of consumers and producers, and billing data in the downstream LV grid. Additionally, we use weather data, calendar data and timestamp information as model features. The existing measurements are used as the model target. We extensively evaluate the estimated pseudo-measurements on a large real-world dataset with 2,323 LV feeders characterized by both consumption and feed-in. For this purpose, we introduce peak metrics inspired by the BigDEAL challenge for the peak magnitude, timing and shape for both consumption and feed-in. As regression models, we use XGBoost, a multilayer perceptron (MLP) and linear regression (LR). We observe that XGBoost and the MLP outperform LR. Furthermore, the results show that the approach adapts to different weather, calendar and timestamp conditions and produces realistic load curves based on the feeder metadata. In the future, the approach can be adapted to other grid levels like substation transformers and can supplement research fields like load modeling, state estimation and LV load forecasting.<|reference_end|>
arxiv
@article{treutlein2024generating, title={Generating peak-aware pseudo-measurements for low-voltage feeders using metadata of distribution system operators}, author={Manuel Treutlein, Marc Schmidt, Roman Hahn, Matthias Hertel, Benedikt Heidrich, Ralf Mikut, and Veit Hagenmeyer}, journal={arXiv preprint arXiv:2409.19713}, year={2024}, archivePrefix={arXiv}, eprint={2409.19713}, primaryClass={cs.LG cs.SY eess.SY} }
treutlein2024generating
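A minimal sketch of peak-aware evaluation in the spirit of the metrics above: compare the magnitude and timing of the daily peak between a measured and an estimated feeder profile. The exact formulas are assumptions, not the BigDEAL definitions.

    import numpy as np

    def peak_errors(actual, predicted):
        # Return (peak magnitude error, peak timing error in time steps).
        mag_err = abs(predicted.max() - actual.max())
        time_err = abs(int(predicted.argmax()) - int(actual.argmax()))
        return mag_err, time_err

    t = np.arange(96)                                # 15-minute steps over one day
    actual = np.exp(-((t - 72) ** 2) / 50)           # evening consumption peak
    predicted = 0.9 * np.exp(-((t - 70) ** 2) / 50)  # slightly early, slightly low
    print(peak_errors(actual, predicted))            # (~0.1, 2)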