corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-667901 | 2410.07577 | 3D Vision-Language Gaussian Splatting | <|reference_start|>3D Vision-Language Gaussian Splatting: Recent advancements in 3D reconstruction methods and vision-language models have propelled the development of multi-modal 3D scene understanding, which has vital applications in robotics, autonomous driving, and virtual/augmented reality. However, current multi-modal scene understanding approaches have naively embedded semantic representations into 3D reconstruction methods without striking a balance between visual and language modalities, which leads to unsatisfying semantic rasterization of translucent or reflective objects, as well as over-fitting on color modality. To alleviate these limitations, we propose a solution that adequately handles the distinct visual and semantic modalities, i.e., a 3D vision-language Gaussian splatting model for scene understanding, to put emphasis on the representation learning of language modality. We propose a novel cross-modal rasterizer, using modality fusion along with a smoothed semantic indicator for enhancing semantic rasterization. We also employ a camera-view blending technique to improve semantic consistency between existing and synthesized views, thereby effectively mitigating over-fitting. Extensive experiments demonstrate that our method achieves state-of-the-art performance in open-vocabulary semantic segmentation, surpassing existing methods by a significant margin.<|reference_end|> | arxiv | @article{peng20243d,
title={3D Vision-Language Gaussian Splatting},
author={Qucheng Peng, Benjamin Planche, Zhongpai Gao, Meng Zheng, Anwesa
Choudhuri, Terrence Chen, Chen Chen, Ziyan Wu},
journal={arXiv preprint arXiv:2410.07577},
year={2024},
archivePrefix={arXiv},
eprint={2410.07577},
primaryClass={cs.CV}
} | peng20243d |
arxiv-667902 | 2410.07579 | Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching | <|reference_start|>Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching: Dataset distillation or condensation refers to compressing a large-scale dataset into a much smaller one, enabling models trained on this synthetic dataset to generalize effectively on real data. Tackling this challenge, as defined, relies on a bi-level optimization algorithm: a novel model is trained in each iteration within a nested loop, with gradients propagated through an unrolled computation graph. However, this approach incurs high memory and time complexity, posing difficulties in scaling up to large datasets such as ImageNet. Addressing these concerns, this paper introduces Teddy, a Taylor-approximated dataset distillation framework designed to handle large-scale dataset and enhance efficiency. On the one hand, backed up by theoretical analysis, we propose a memory-efficient approximation derived from Taylor expansion, which transforms the original form dependent on multi-step gradients to a first-order one. On the other hand, rather than repeatedly training a novel model in each iteration, we unveil that employing a pre-cached pool of weak models, which can be generated from a single base model, enhances both time efficiency and performance concurrently, particularly when dealing with large-scale datasets. Extensive experiments demonstrate that the proposed Teddy attains state-of-the-art efficiency and performance on the Tiny-ImageNet and original-sized ImageNet-1K dataset, notably surpassing prior methods by up to 12.8%, while reducing 46.6% runtime. Our code will be available at https://github.com/Lexie-YU/Teddy.<|reference_end|> | arxiv | @article{yu2024teddy:,
title={Teddy: Efficient Large-Scale Dataset Distillation via
Taylor-Approximated Matching},
author={Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang},
journal={arXiv preprint arXiv:2410.07579},
year={2024},
archivePrefix={arXiv},
eprint={2410.07579},
primaryClass={cs.CV}
} | yu2024teddy: |
arxiv-667903 | 2410.07582 | Detecting Training Data of Large Language Models via Expectation Maximization | <|reference_start|>Detecting Training Data of Large Language Models via Expectation Maximization: The widespread deployment of large language models (LLMs) has led to impressive advancements, yet information about their training data, a critical factor in their performance, remains undisclosed. Membership inference attacks (MIAs) aim to determine whether a specific instance was part of a target model's training data. MIAs can offer insights into LLM outputs and help detect and address concerns such as data contamination and compliance with privacy and copyright standards. However, applying MIAs to LLMs presents unique challenges due to the massive scale of pre-training data and the ambiguous nature of membership. Additionally, creating appropriate benchmarks to evaluate MIA methods is not straightforward, as training and test data distributions are often unknown. In this paper, we introduce EM-MIA, a novel MIA method for LLMs that iteratively refines membership scores and prefix scores via an expectation-maximization algorithm, leveraging the duality that the estimates of these scores can be improved by each other. Membership scores and prefix scores assess how each instance is likely to be a member and discriminative as a prefix, respectively. Our method achieves state-of-the-art results on the WikiMIA dataset. To further evaluate EM-MIA, we present OLMoMIA, a benchmark built from OLMo resources, which allows us to control the difficulty of MIA tasks with varying degrees of overlap between training and test data distributions. We believe that EM-MIA serves as a robust MIA method for LLMs and that OLMoMIA provides a valuable resource for comprehensively evaluating MIA approaches, thereby driving future research in this critical area.<|reference_end|> | arxiv | @article{kim2024detecting,
title={Detecting Training Data of Large Language Models via Expectation
Maximization},
author={Gyuwan Kim, Yang Li, Evangelia Spiliopoulou, Jie Ma, Miguel
Ballesteros, William Yang Wang},
journal={arXiv preprint arXiv:2410.07582},
year={2024},
archivePrefix={arXiv},
eprint={2410.07582},
primaryClass={cs.CL cs.AI cs.CR cs.LG}
} | kim2024detecting |
arxiv-667904 | 2410.07584 | Imitation Learning with Limited Actions via Diffusion Planners and Deep Koopman Controllers | <|reference_start|>Imitation Learning with Limited Actions via Diffusion Planners and Deep Koopman Controllers: Recent advances in diffusion-based robot policies have demonstrated significant potential in imitating multi-modal behaviors. However, these approaches typically require large quantities of demonstration data paired with corresponding robot action labels, creating a substantial data collection burden. In this work, we propose a plan-then-control framework aimed at improving the action-data efficiency of inverse dynamics controllers by leveraging observational demonstration data. Specifically, we adopt a Deep Koopman Operator framework to model the dynamical system and utilize observation-only trajectories to learn a latent action representation. This latent representation can then be effectively mapped to real high-dimensional continuous actions using a linear action decoder, requiring minimal action-labeled data. Through experiments on simulated robot manipulation tasks and a real robot experiment with multi-modal expert demonstrations, we demonstrate that our approach significantly enhances action-data efficiency and achieves high task success rates with limited action data.<|reference_end|> | arxiv | @article{bi2024imitation,
title={Imitation Learning with Limited Actions via Diffusion Planners and Deep
Koopman Controllers},
author={Jianxin Bi, Kelvin Lim, Kaiqi Chen, Yifei Huang, and Harold Soh},
journal={arXiv preprint arXiv:2410.07584},
year={2024},
archivePrefix={arXiv},
eprint={2410.07584},
primaryClass={cs.RO cs.LG}
} | bi2024imitation |
arxiv-667905 | 2410.07586 | A Cloud in the Sky: Geo-Aware On-board Data Services for LEO Satellites | <|reference_start|>A Cloud in the Sky: Geo-Aware On-board Data Services for LEO Satellites: We propose an architecture with accompanying protocol for on-board satellite data infrastructure designed for Low Earth Orbit (LEO) constellations offering communication services, such as direct-to-cell connectivity. Our design leverages the unused or under-used computing and communication resources of LEO satellites that are orbiting over uninhabited parts of the earth, like the oceans. We show how blockchain-backed distributed transactions can be run efficiently on this architecture to offer smart contract services. A key aspect of the proposed architecture that sets it apart from other blockchain systems is that migration of the ledger is not done solely to recover from failures. Rather, migration is also performed periodically and continuously as the satellites circle around in their orbits and enter and leave the blockchain service area. We show in simulations how message and blockchain processing overhead can be contained using different sizes of dynamic geo-aware service areas.<|reference_end|> | arxiv | @article{sandholm2024a,
title={A Cloud in the Sky: Geo-Aware On-board Data Services for LEO Satellites},
author={Thomas Sandholm, Sayandev Mukherjee, Bernardo A Huberman},
journal={arXiv preprint arXiv:2410.07586},
year={2024},
archivePrefix={arXiv},
eprint={2410.07586},
primaryClass={cs.DC}
} | sandholm2024a |
arxiv-667906 | 2410.07588 | Careful About What App Promotion Ads Recommend! Detecting and Explaining Malware Promotion via App Promotion Graph | <|reference_start|>Careful About What App Promotion Ads Recommend! Detecting and Explaining Malware Promotion via App Promotion Graph: In Android apps, their developers frequently place app promotion ads, namely advertisements to promote other apps. Unfortunately, the inadequate vetting of ad content allows malicious developers to exploit app promotion ads as a new distribution channel for malware. To help detect malware distributed via app promotion ads, in this paper, we propose a novel approach, named ADGPE, that synergistically integrates app user interface (UI) exploration with graph learning to automatically collect app promotion ads, detect malware promoted by these ads, and explain the promotion mechanisms employed by the detected malware. Our evaluation on 18, 627 app promotion ads demonstrates the substantial risks in the app promotion ecosystem.<|reference_end|> | arxiv | @article{ma2024careful,
title={Careful About What App Promotion Ads Recommend! Detecting and Explaining
Malware Promotion via App Promotion Graph},
author={Shang Ma, Chaoran Chen, Shao Yang, Shifu Hou, Toby Jia-Jun Li, Xusheng
Xiao, Tao Xie, Yanfang Ye},
journal={arXiv preprint arXiv:2410.07588},
year={2024},
archivePrefix={arXiv},
eprint={2410.07588},
primaryClass={cs.CR cs.CY}
} | ma2024careful |
arxiv-667907 | 2410.07589 | No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users | <|reference_start|>No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users: Retrieval-Augmented Generation (RAG) is widely adopted for its effectiveness and cost-efficiency in mitigating hallucinations and enhancing the domain-specific generation capabilities of large language models (LLMs). However, is this effectiveness and cost-efficiency truly a free lunch? In this study, we comprehensively investigate the fairness costs associated with RAG by proposing a practical three-level threat model from the perspective of user awareness of fairness. Specifically, varying levels of user fairness awareness result in different degrees of fairness censorship on the external dataset. We examine the fairness implications of RAG using uncensored, partially censored, and fully censored datasets. Our experiments demonstrate that fairness alignment can be easily undermined through RAG without the need for fine-tuning or retraining. Even with fully censored and supposedly unbiased external datasets, RAG can lead to biased outputs. Our findings underscore the limitations of current alignment methods in the context of RAG-based LLMs and highlight the urgent need for new strategies to ensure fairness. We propose potential mitigations and call for further research to develop robust fairness safeguards in RAG-based LLMs.<|reference_end|> | arxiv | @article{hu2024no,
title={No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in
LLMs, Even for Vigilant Users},
author={Mengxuan Hu and Hongyi Wu and Zihan Guan and Ronghang Zhu and
Dongliang Guo and Daiqing Qi and Sheng Li},
journal={arXiv preprint arXiv:2410.07589},
year={2024},
archivePrefix={arXiv},
eprint={2410.07589},
primaryClass={cs.IR cs.CL}
} | hu2024no |
arxiv-667908 | 2410.07590 | TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text | <|reference_start|>TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text: Current Retrieval-Augmented Generation (RAG) systems concatenate and process numerous retrieved document chunks for prefill which requires a large volume of computation, therefore leading to significant latency in time-to-first-token (TTFT). To reduce the computation overhead as well as TTFT, we introduce TurboRAG, a novel RAG system that redesigns the inference paradigm of the current RAG system by first pre-computing and storing the key-value (KV) caches of documents offline, and then directly retrieving the saved KV cache for prefill. Hence, online computation of KV caches is eliminated during inference. In addition, we provide a number of insights into the mask matrix and positional embedding mechanisms, plus fine-tune a pretrained language model to maintain model accuracy of TurboRAG. Our approach is applicable to most existing large language models and their applications without any requirement in modification of models and inference systems. Experimental results across a suite of RAG benchmarks demonstrate that TurboRAG reduces TTFT by up to 9.4x compared to the conventional RAG systems (on an average of 8.6x), but reserving comparable performance to the standard RAG systems.<|reference_end|> | arxiv | @article{lu2024turborag:,
title={TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed
KV Caches for Chunked Text},
author={Songshuo Lu and Hua Wang and Yutian Rong and Zhi Chen and Yaohua Tang},
journal={arXiv preprint arXiv:2410.07590},
year={2024},
archivePrefix={arXiv},
eprint={2410.07590},
primaryClass={cs.CV cs.CL}
} | lu2024turborag: |
arxiv-667909 | 2410.07592 | Diversified and Adaptive Negative Sampling on Knowledge Graphs | <|reference_start|>Diversified and Adaptive Negative Sampling on Knowledge Graphs: In knowledge graph embedding, aside from positive triplets (ie: facts in the knowledge graph), the negative triplets used for training also have a direct influence on the model performance. In reality, since knowledge graphs are sparse and incomplete, negative triplets often lack explicit labels, and thus they are often obtained from various sampling strategies (eg: randomly replacing an entity in a positive triplet). An ideal sampled negative triplet should be informative enough to help the model train better. However, existing methods often ignore diversity and adaptiveness in their sampling process, which harms the informativeness of negative triplets. As such, we propose a generative adversarial approach called Diversified and Adaptive Negative Sampling DANS on knowledge graphs. DANS is equipped with a two-way generator that generates more diverse negative triplets through two pathways, and an adaptive mechanism that produces more fine-grained examples by localizing the global generator for different entities and relations. On the one hand, the two-way generator increase the overall informativeness with more diverse negative examples; on the other hand, the adaptive mechanism increases the individual sample-wise informativeness with more fine-grained sampling. Finally, we evaluate the performance of DANS on three benchmark knowledge graphs to demonstrate its effectiveness through quantitative and qualitative experiments.<|reference_end|> | arxiv | @article{liu2024diversified,
title={Diversified and Adaptive Negative Sampling on Knowledge Graphs},
author={Ran Liu, Zhongzhou Liu, Xiaoli Li, Hao Wu and Yuan Fang},
journal={arXiv preprint arXiv:2410.07592},
year={2024},
archivePrefix={arXiv},
eprint={2410.07592},
primaryClass={cs.AI}
} | liu2024diversified |
arxiv-667910 | 2410.07593 | A Unified Debiasing Approach for Vision-Language Models across Modalities and Tasks | <|reference_start|>A Unified Debiasing Approach for Vision-Language Models across Modalities and Tasks: Recent advancements in Vision-Language Models (VLMs) have enabled complex multimodal tasks by processing text and image data simultaneously, significantly enhancing the field of artificial intelligence. However, these models often exhibit biases that can skew outputs towards societal stereotypes, thus necessitating debiasing strategies. Existing debiasing methods focus narrowly on specific modalities or tasks, and require extensive retraining. To address these limitations, this paper introduces Selective Feature Imputation for Debiasing (SFID), a novel methodology that integrates feature pruning and low confidence imputation (LCI) to effectively reduce biases in VLMs. SFID is versatile, maintaining the semantic integrity of outputs and costly effective by eliminating the need for retraining. Our experimental results demonstrate SFID's effectiveness across various VLMs tasks including zero-shot classification, text-to-image retrieval, image captioning, and text-to-image generation, by significantly reducing gender biases without compromising performance. This approach not only enhances the fairness of VLMs applications but also preserves their efficiency and utility across diverse scenarios.<|reference_end|> | arxiv | @article{jung2024a,
title={A Unified Debiasing Approach for Vision-Language Models across
Modalities and Tasks},
author={Hoin Jung, Taeuk Jang, Xiaoqian Wang},
journal={arXiv preprint arXiv:2410.07593},
year={2024},
archivePrefix={arXiv},
eprint={2410.07593},
primaryClass={cs.CV cs.AI}
} | jung2024a |
arxiv-667911 | 2410.07594 | Design and Characterization of High Efficiency Single-stage Electromagnetic Coil Guns | <|reference_start|>Design and Characterization of High Efficiency Single-stage Electromagnetic Coil Guns: This study presents several novel approaches to improve the efficiency of a single-stage coil gun. Conventional designs typically feature a uniformly wound solenoid and a ferrite projectile. For our research, we constructed a microcontroller-based prototype to test several new enhancements, including the use of a bipolar current pulse, a stepped multilayer coil with non-uniform winding densities, and the replacement of conventional ferrite projectiles with a neodymium permanent magnet. These modifications were designed to reduce energy loss and improve projectile acceleration by changing magnetic field strength and effectively controlling the magnetic flux. The experimental results show that the proposed methods resulted in significant efficiency improvements, with the varying current pulse and stepped coil design providing enhanced magnetic force at key points in the projectile's path, and the permanent magnet projectile contributing to higher velocities and efficiencies by leveraging the current pulses. Our findings suggest that combining these enhancements significantly improves coil gun performance, achieving higher velocities and efficiencies. These findings can be applied to future coil gun developments, such as multi-stage coil gun systems.<|reference_end|> | arxiv | @article{chen2024design,
title={Design and Characterization of High Efficiency Single-stage
Electromagnetic Coil Guns},
author={Sophia Chen, Annie Peng, Ava Chen, and Takyiu Liu},
journal={arXiv preprint arXiv:2410.07594},
year={2024},
archivePrefix={arXiv},
eprint={2410.07594},
primaryClass={eess.SY cs.SY}
} | chen2024design |
arxiv-667912 | 2410.07595 | Geometric structure and transversal logic of quantum Reed-Muller codes | <|reference_start|>Geometric structure and transversal logic of quantum Reed-Muller codes: Designing efficient and noise-tolerant quantum computation protocols generally begins with an understanding of quantum error-correcting codes and their native logical operations. The simplest class of native operations are transversal gates, which are naturally fault-tolerant. In this paper, we aim to characterize the transversal gates of quantum Reed-Muller (RM) codes by exploiting the well-studied properties of their classical counterparts. We start our work by establishing a new geometric characterization of quantum RM codes via the Boolean hypercube and its associated subcube complex. More specifically, a set of stabilizer generators for a quantum RM code can be described via transversal $X$ and $Z$ operators acting on subcubes of particular dimensions. This characterization leads us to define subcube operators composed of single-qubit $\pi/2^k$ $Z$-rotations that act on subcubes of given dimensions. We first characterize the action of subcube operators on the code space: depending on the dimension of the subcube, these operators either (1) act as a logical identity on the code space, (2) implement non-trivial logic, or (3) rotate a state away from the code space. Second, and more remarkably, we uncover that the logic implemented by these operators corresponds to circuits of multi-controlled-$Z$ gates that have an explicit and simple combinatorial description. Overall, this suite of results yields a comprehensive understanding of a class of natural transversal operators for quantum RM codes.<|reference_end|> | arxiv | @article{barg2024geometric,
title={Geometric structure and transversal logic of quantum Reed-Muller codes},
author={Alexander Barg, Nolan J. Coble, Dominik Hangleiter, Christopher Kang},
journal={arXiv preprint arXiv:2410.07595},
year={2024},
archivePrefix={arXiv},
eprint={2410.07595},
primaryClass={quant-ph cs.IT math.CO math.IT}
} | barg2024geometric |
arxiv-667913 | 2410.07597 | Fine-detailed Neural Indoor Scene Reconstruction using multi-level importance sampling and multi-view consistency | <|reference_start|>Fine-detailed Neural Indoor Scene Reconstruction using multi-level importance sampling and multi-view consistency: Recently, neural implicit 3D reconstruction in indoor scenarios has become popular due to its simplicity and impressive performance. Previous works could produce complete results leveraging monocular priors of normal or depth. However, they may suffer from over-smoothed reconstructions and long-time optimization due to unbiased sampling and inaccurate monocular priors. In this paper, we propose a novel neural implicit surface reconstruction method, named FD-NeuS, to learn fine-detailed 3D models using multi-level importance sampling strategy and multi-view consistency methodology. Specifically, we leverage segmentation priors to guide region-based ray sampling, and use piecewise exponential functions as weights to pilot 3D points sampling along the rays, ensuring more attention on important regions. In addition, we introduce multi-view feature consistency and multi-view normal consistency as supervision and uncertainty respectively, which further improve the reconstruction of details. Extensive quantitative and qualitative results show that FD-NeuS outperforms existing methods in various scenes.<|reference_end|> | arxiv | @article{li2024fine-detailed,
title={Fine-detailed Neural Indoor Scene Reconstruction using multi-level
importance sampling and multi-view consistency},
author={Xinghui Li, Yuchen Ji, Xiansong Lai, Wanting Zhang},
journal={arXiv preprint arXiv:2410.07597},
year={2024},
doi={10.1109/ICIP51287.2024.10648179},
archivePrefix={arXiv},
eprint={2410.07597},
primaryClass={cs.CV}
} | li2024fine-detailed |
arxiv-667914 | 2410.07599 | Causal Image Modeling for Efficient Visual Understanding | <|reference_start|>Causal Image Modeling for Efficient Visual Understanding: In this work, we present a comprehensive analysis of causal image modeling and introduce the Adventurer series models where we treat images as sequences of patch tokens and employ uni-directional language models to learn visual representations. This modeling paradigm allows us to process images in a recurrent formulation with linear complexity relative to the sequence length, which can effectively address the memory and computation explosion issues posed by high-resolution and fine-grained images. In detail, we introduce two simple designs that seamlessly integrate image inputs into the causal inference framework: a global pooling token placed at the beginning of the sequence and a flipping operation between every two layers. Extensive empirical studies demonstrate the significant efficiency and effectiveness of this causal image modeling paradigm. For example, our base-sized Adventurer model attains a competitive test accuracy of 84.0% on the standard ImageNet-1k benchmark with 216 images/s training throughput, which is 5.3 times more efficient than vision transformers to achieve the same result.<|reference_end|> | arxiv | @article{wang2024causal,
title={Causal Image Modeling for Efficient Visual Understanding},
author={Feng Wang, Timing Yang, Yaodong Yu, Sucheng Ren, Guoyizhe Wei, Angtian
Wang, Wei Shao, Yuyin Zhou, Alan Yuille, Cihang Xie},
journal={arXiv preprint arXiv:2410.07599},
year={2024},
archivePrefix={arXiv},
eprint={2410.07599},
primaryClass={cs.CV}
} | wang2024causal |
arxiv-667915 | 2410.07600 | RNA: Video Editing with ROI-based Neural Atlas | <|reference_start|>RNA: Video Editing with ROI-based Neural Atlas: With the recent growth of video-based Social Network Service (SNS) platforms, the demand for video editing among common users has increased. However, video editing can be challenging due to the temporally-varying factors such as camera movement and moving objects. While modern atlas-based video editing methods have addressed these issues, they often fail to edit videos including complex motion or multiple moving objects, and demand excessive computational cost, even for very simple edits. In this paper, we propose a novel region-of-interest (ROI)-based video editing framework: ROI-based Neural Atlas (RNA). Unlike prior work, RNA allows users to specify editing regions, simplifying the editing process by removing the need for foreground separation and atlas modeling for foreground objects. However, this simplification presents a unique challenge: acquiring a mask that effectively handles occlusions in the edited area caused by moving objects, without relying on an additional segmentation model. To tackle this, we propose a novel mask refinement approach designed for this specific challenge. Moreover, we introduce a soft neural atlas model for video reconstruction to ensure high-quality editing results. Extensive experiments show that RNA offers a more practical and efficient editing solution, applicable to a wider range of videos with superior quality compared to prior methods.<|reference_end|> | arxiv | @article{lee2024rna:,
title={RNA: Video Editing with ROI-based Neural Atlas},
author={Jaekyeong Lee, Geonung Kim, Sunghyun Cho},
journal={arXiv preprint arXiv:2410.07600},
year={2024},
archivePrefix={arXiv},
eprint={2410.07600},
primaryClass={cs.CV}
} | lee2024rna: |
arxiv-667916 | 2410.07602 | Packed Acyclic Deterministic Finite Automata | <|reference_start|>Packed Acyclic Deterministic Finite Automata: An acyclic deterministic finite automaton (ADFA) is a data structure that represents a set of strings (i.e., a dictionary) and facilitates a pattern searching problem of determining whether a given pattern string is present in the dictionary. We introduce the packed ADFA (PADFA), a compact variant of ADFA, which is designed to achieve more efficient pattern searching by encoding specific paths as packed strings stored in contiguous memory. We theoretically demonstrate that pattern searching in PADFA is near time-optimal with a small additional overhead and becomes fully time-optimal for sufficiently long patterns. Moreover, we prove that a PADFA requires fewer bits than a trie when the dictionary size is relatively smaller than the number of states in the PADFA. Lastly, we empirically show that PADFAs improve both the space and time efficiency of pattern searching on real-world datasets.<|reference_end|> | arxiv | @article{shibata2024packed,
title={Packed Acyclic Deterministic Finite Automata},
author={Hiroki Shibata, Masakazu Ishihata, Shunsuke Inenaga},
journal={arXiv preprint arXiv:2410.07602},
year={2024},
archivePrefix={arXiv},
eprint={2410.07602},
primaryClass={cs.DS}
} | shibata2024packed |
arxiv-667917 | 2410.07603 | An Analysis of XML Compression Efficiency | <|reference_start|>An Analysis of XML Compression Efficiency: XML simplifies data exchange among heterogeneous computers, but it is notoriously verbose and has spawned the development of many XML-specific compressors and binary formats. We present an XML test corpus and a combined efficiency metric integrating compression ratio and execution speed. We use this corpus and linear regression to assess 14 general-purpose and XML-specific compressors relative to the proposed metric. We also identify key factors when selecting a compressor. Our results show XMill or WBXML may be useful in some instances, but a general-purpose compressor is often the best choice.<|reference_end|> | arxiv | @article{augeri2024an,
title={An Analysis of XML Compression Efficiency},
author={Christopher James Augeri, Barry E. Mullins, Leemon C. Baird III,
Dursun A. Bulutoglu, Rusty O. Baldwin},
journal={Proceedings of the 2007 workshop on Experimental Computer Science
(ExpCS) at ACM FCRC 2007},
year={2024},
doi={10.1145/1281700.1281707},
archivePrefix={arXiv},
eprint={2410.07603},
primaryClass={cs.DB cs.IT cs.PF math.IT}
} | augeri2024an |
arxiv-667918 | 2410.07605 | A Variational Bayesian Inference Theory of Elasticity and Its Mixed Probabilistic Finite Element Method for Inverse Deformation Solutions in Any Dimension | <|reference_start|>A Variational Bayesian Inference Theory of Elasticity and Its Mixed Probabilistic Finite Element Method for Inverse Deformation Solutions in Any Dimension: In this work, we have developed a variational Bayesian inference theory of elasticity, which is accomplished by using a mixed Variational Bayesian inference Finite Element Method (VBI-FEM) that can be used to solve the inverse deformation problems of continua. In the proposed variational Bayesian inference theory of continuum mechanics, the elastic strain energy is used as a prior in a Bayesian inference network, which can intelligently recover the detailed continuum deformation mappings with only given the information on the deformed and undeformed continuum body shapes without knowing the interior deformation and the precise actual boundary conditions, both traction as well as displacement boundary conditions, and the actual material constitutive relation. Moreover, we have implemented the related finite element formulation in a computational probabilistic mechanics framework. To numerically solve mixed variational problem, we developed an operator splitting or staggered algorithm that consists of the finite element (FE) step and the Bayesian learning (BL) step as an analogue of the well-known the Expectation-Maximization (EM) algorithm. By solving the mixed probabilistic Galerkin variational problem, we demonstrated that the proposed method is able to inversely predict continuum deformation mappings with strong discontinuity or fracture without knowing the external load conditions. The proposed method provides a robust machine intelligent solution for the long-sought-after inverse problem solution, which has been a major challenge in structure failure forensic pattern analysis in past several decades. The proposed method may become a promising artificial intelligence-based inverse method for solving general partial differential equations.<|reference_end|> | arxiv | @article{wang2024a,
title={A Variational Bayesian Inference Theory of Elasticity and Its Mixed
Probabilistic Finite Element Method for Inverse Deformation Solutions in Any
Dimension},
author={Chao Wang and Shaofan Li},
journal={arXiv preprint arXiv:2410.07605},
year={2024},
archivePrefix={arXiv},
eprint={2410.07605},
primaryClass={cs.CV cs.LG cs.NA math.NA}
} | wang2024a |
arxiv-667919 | 2410.07606 | Stop-N-Go: Search-based Conflict Resolution for Motion Planning of Multiple Robotic Manipulators | <|reference_start|>Stop-N-Go: Search-based Conflict Resolution for Motion Planning of Multiple Robotic Manipulators: We address the motion planning problem for multiple robotic manipulators in packed environments where shared workspace can result in goal positions occupied or blocked by other robots unless those other robots move away to make the goal positions free. While planning in a coupled configuration space (C-space) is straightforward, it struggles to scale with the number of robots and often fails to find solutions. Decoupled planning is faster but frequently leads to conflicts between trajectories. We propose a conflict resolution approach that inserts pauses into individually planned trajectories using an A* search strategy to minimize the makespan--the total time until all robots complete their tasks. This method allows some robots to stop, enabling others to move without collisions, and maintains short distances in the C-space. It also effectively handles cases where goal positions are initially blocked by other robots. Experimental results show that our method successfully solves challenging instances where baseline methods fail to find feasible solutions.<|reference_end|> | arxiv | @article{han2024stop-n-go:,
title={Stop-N-Go: Search-based Conflict Resolution for Motion Planning of
Multiple Robotic Manipulators},
author={Gidon Han, Jeongwoo Park and Changjoo Nam},
journal={arXiv preprint arXiv:2410.07606},
year={2024},
archivePrefix={arXiv},
eprint={2410.07606},
primaryClass={cs.RO}
} | han2024stop-n-go: |
arxiv-667920 | 2410.07608 | Assessing the impacts of convening experts: a bibliometric analysis of a research program spanning four decades | <|reference_start|>Assessing the impacts of convening experts: a bibliometric analysis of a research program spanning four decades: Over the last few decades, research institutions and funders have begun policies and programs that incentivize large-scale collaboration across institutions on focal research areas. Yet, few studies have evaluated the impact of those programs on research, particularly on timelines longer than a few years. Using the Canadian Institute for Advanced Research (CIFAR) as a case study, we examined the impacts of supporting a research program that convened experts across intuitions and countries for 40+ years. In this study, we used the Scopus bibliometric database to analyse publishing and citation trends within this team since its formation in 1986 and used nearest neighbour matching to compare these trends against authors across the globe with similar career characteristics to measure how effectively the CIFAR program Gravity & the Extreme Universe (CIFAR-GEU) has catalyzed collaborations and produced high quality research outputs. We found a greater degree of co-authorship within the CIFAR-GEU group compared to the Control group. We also found that the outputs generated by the CIFAR-GEU group had, overall, higher values for citation-based impact indicators (e.g., stronger metrics around citations, impact from international collaborations and reach beyond academia).<|reference_end|> | arxiv | @article{buehler2024assessing,
title={Assessing the impacts of convening experts: a bibliometric analysis of a
research program spanning four decades},
author={Deborah M. Buehler, Mark J Daley and Kyle Demes},
journal={arXiv preprint arXiv:2410.07608},
year={2024},
archivePrefix={arXiv},
eprint={2410.07608},
primaryClass={cs.DL}
} | buehler2024assessing |
arxiv-667921 | 2410.07610 | CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features | <|reference_start|>CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features: Multimodal encoders like CLIP excel in tasks such as zero-shot image classification and cross-modal retrieval. However, they require excessive training data. We propose canonical similarity analysis (CSA), which uses two unimodal encoders to replicate multimodal encoders using limited data. CSA maps unimodal features into a multimodal space, using a new similarity score to retain only the multimodal information. CSA only involves the inference of unimodal encoders and a cubic-complexity matrix decomposition, eliminating the need for extensive GPU-based model training. Experiments show that CSA outperforms CLIP while requiring $300,000\times$ fewer multimodal data pairs and $6\times$ fewer unimodal data for ImageNet classification and misinformative news captions detection. CSA surpasses the state-of-the-art method to map unimodal features to multimodal features. We also demonstrate the ability of CSA with modalities beyond image and text, paving the way for future modality pairs with limited paired multimodal data but abundant unpaired unimodal data, such as lidar and text.<|reference_end|> | arxiv | @article{li2024csa:,
title={CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features},
author={Po-han Li, Sandeep P. Chinchali, Ufuk Topcu},
journal={arXiv preprint arXiv:2410.07610},
year={2024},
archivePrefix={arXiv},
eprint={2410.07610},
primaryClass={cs.LG cs.AI cs.CV cs.IR}
} | li2024csa: |
arxiv-667922 | 2410.07611 | Parallel Digital Twin-driven Deep Reinforcement Learning for User Association and Load Balancing in Dynamic Wireless Networks | <|reference_start|>Parallel Digital Twin-driven Deep Reinforcement Learning for User Association and Load Balancing in Dynamic Wireless Networks: Optimization of user association in a densely deployed heterogeneous cellular network is usually challenging and even more complicated due to the dynamic nature of user mobility and fluctuation in user counts. While deep reinforcement learning (DRL) emerges as a promising solution, its application in practice is hindered by high trial-and-error costs in real world and unsatisfactory physical network performance during training. In addition, existing DRL-based user association methods are usually only applicable to scenarios with a fixed number of users due to convergence and compatibility challenges. In this paper, we propose a parallel digital twin (DT)-driven DRL method for user association and load balancing in networks with both dynamic user counts, distribution, and mobility patterns. Our method employs a distributed DRL strategy to handle varying user numbers and exploits a refined neural network structure for faster convergence. To address these DRL training-related challenges, we devise a high-fidelity DT construction technique, featuring a zero-shot generative user mobility model, named Map2Traj, based on a diffusion model. Map2Traj estimates user trajectory patterns and spatial distributions solely from street maps. Armed with this DT environment, DRL agents are enabled to be trained without the need for interactions with the physical network. To enhance the generalization ability of DRL models for dynamic scenarios, a parallel DT framework is further established to alleviate strong correlation and non-stationarity in single-environment training and improve the training efficiency. Numerical results show that the proposed parallel DT-driven DRL method achieves closely comparable performance to real environment training, and even outperforms those trained in a single real-world environment with nearly 20% gain in terms of cell-edge user performance.<|reference_end|> | arxiv | @article{tao2024parallel,
title={Parallel Digital Twin-driven Deep Reinforcement Learning for User
Association and Load Balancing in Dynamic Wireless Networks},
author={Zhenyu Tao, Wei Xu, Xiaohu You},
journal={arXiv preprint arXiv:2410.07611},
year={2024},
archivePrefix={arXiv},
eprint={2410.07611},
primaryClass={cs.LG cs.SY eess.SY}
} | tao2024parallel |
arxiv-667923 | 2410.07612 | A Survey for Deep Reinforcement Learning Based Network Intrusion Detection | <|reference_start|>A Survey for Deep Reinforcement Learning Based Network Intrusion Detection: Cyber-attacks are becoming increasingly sophisticated and frequent, highlighting the importance of network intrusion detection systems. This paper explores the potential and challenges of using deep reinforcement learning (DRL) in network intrusion detection. It begins by introducing key DRL concepts and frameworks, such as deep Q-networks and actor-critic algorithms, and reviews recent research utilizing DRL for intrusion detection. The study evaluates challenges related to model training efficiency, detection of minority and unknown class attacks, feature selection, and handling unbalanced datasets. The performance of DRL models is comprehensively analyzed, showing that while DRL holds promise, many recent technologies remain underexplored. Some DRL models achieve state-of-the-art results on public datasets, occasionally outperforming traditional deep learning methods. The paper concludes with recommendations for enhancing DRL deployment and testing in real-world network scenarios, with a focus on Internet of Things intrusion detection. It discusses recent DRL architectures and suggests future policy functions for DRL-based intrusion detection. Finally, the paper proposes integrating DRL with generative methods to further improve performance, addressing current gaps and supporting more robust and adaptive network intrusion detection systems.<|reference_end|> | arxiv | @article{yang2024a,
title={A Survey for Deep Reinforcement Learning Based Network Intrusion
Detection},
author={Wanrong Yang, Alberto Acuto, Yihang Zhou, Dominik Wojtczak},
journal={arXiv preprint arXiv:2410.07612},
year={2024},
archivePrefix={arXiv},
eprint={2410.07612},
primaryClass={cs.CR cs.AI}
} | yang2024a |
arxiv-667924 | 2410.07613 | Explainability of Deep Neural Networks for Brain Tumor Detection | <|reference_start|>Explainability of Deep Neural Networks for Brain Tumor Detection: Medical image classification is crucial for supporting healthcare professionals in decision-making and training. While Convolutional Neural Networks (CNNs) have traditionally dominated this field, Transformer-based models are gaining attention. In this study, we apply explainable AI (XAI) techniques to assess the performance of various models on real-world medical data and identify areas for improvement. We compare CNN models such as VGG-16, ResNet-50, and EfficientNetV2L with a Transformer model: ViT-Base-16. Our results show that data augmentation has little impact, but hyperparameter tuning and advanced modeling improve performance. CNNs, particularly VGG-16 and ResNet-50, outperform ViT-Base-16 and EfficientNetV2L, likely due to underfitting from limited data. XAI methods like LIME and SHAP further reveal that better-performing models visualize tumors more effectively. These findings suggest that CNNs with shallower architectures are more effective for small datasets and can support medical decision-making.<|reference_end|> | arxiv | @article{park2024explainability,
title={Explainability of Deep Neural Networks for Brain Tumor Detection},
author={S.Park and J.Kim},
journal={arXiv preprint arXiv:2410.07613},
year={2024},
archivePrefix={arXiv},
eprint={2410.07613},
primaryClass={cs.CV}
} | park2024explainability |
arxiv-667925 | 2410.07616 | The Plug-in Approach for Average-Reward and Discounted MDPs: Optimal Sample Complexity Analysis | <|reference_start|>The Plug-in Approach for Average-Reward and Discounted MDPs: Optimal Sample Complexity Analysis: We study the sample complexity of the plug-in approach for learning $\varepsilon$-optimal policies in average-reward Markov decision processes (MDPs) with a generative model. The plug-in approach constructs a model estimate then computes an average-reward optimal policy in the estimated model. Despite representing arguably the simplest algorithm for this problem, the plug-in approach has never been theoretically analyzed. Unlike the more well-studied discounted MDP reduction method, the plug-in approach requires no prior problem information or parameter tuning. Our results fill this gap and address the limitations of prior approaches, as we show that the plug-in approach is optimal in several well-studied settings without using prior knowledge. Specifically it achieves the optimal diameter- and mixing-based sample complexities of $\widetilde{O}\left(SA \frac{D}{\varepsilon^2}\right)$ and $\widetilde{O}\left(SA \frac{\tau_{\mathrm{unif}}}{\varepsilon^2}\right)$, respectively, without knowledge of the diameter $D$ or uniform mixing time $\tau_{\mathrm{unif}}$. We also obtain span-based bounds for the plug-in approach, and complement them with algorithm-specific lower bounds suggesting that they are unimprovable. Our results require novel techniques for analyzing long-horizon problems which may be broadly useful and which also improve results for the discounted plug-in approach, removing effective-horizon-related sample size restrictions and obtaining the first optimal complexity bounds for the full range of sample sizes without reward perturbation.<|reference_end|> | arxiv | @article{zurek2024the,
title={The Plug-in Approach for Average-Reward and Discounted MDPs: Optimal
Sample Complexity Analysis},
author={Matthew Zurek, Yudong Chen},
journal={arXiv preprint arXiv:2410.07616},
year={2024},
archivePrefix={arXiv},
eprint={2410.07616},
primaryClass={cs.LG cs.IT math.IT math.OC stat.ML}
} | zurek2024the |
arxiv-667926 | 2410.07617 | Prototype-based Optimal Transport for Out-of-Distribution Detection | <|reference_start|>Prototype-based Optimal Transport for Out-of-Distribution Detection: Detecting Out-of-Distribution (OOD) inputs is crucial for improving the reliability of deep neural networks in the real-world deployment. In this paper, inspired by the inherent distribution shift between ID and OOD data, we propose a novel method that leverages optimal transport to measure the distribution discrepancy between test inputs and ID prototypes. The resulting transport costs are used to quantify the individual contribution of each test input to the overall discrepancy, serving as a desirable measure for OOD detection. To address the issue that solely relying on the transport costs to ID prototypes is inadequate for identifying OOD inputs closer to ID data, we generate virtual outliers to approximate the OOD region via linear extrapolation. By combining the transport costs to ID prototypes with the costs to virtual outliers, the detection of OOD data near ID data is emphasized, thereby enhancing the distinction between ID and OOD inputs. Experiments demonstrate the superiority of our method over state-of-the-art methods.<|reference_end|> | arxiv | @article{ke2024prototype-based,
title={Prototype-based Optimal Transport for Out-of-Distribution Detection},
author={Ao Ke, Wenlong Chen, Chuanwen Feng, Yukun Cao, Xike Xie, S.Kevin Zhou,
Lei Feng},
journal={arXiv preprint arXiv:2410.07617},
year={2024},
archivePrefix={arXiv},
eprint={2410.07617},
primaryClass={cs.CV}
} | ke2024prototype-based |
arxiv-667927 | 2410.07618 | Moyun: A Diffusion-Based Model for Style-Specific Chinese Calligraphy Generation | <|reference_start|>Moyun: A Diffusion-Based Model for Style-Specific Chinese Calligraphy Generation: Although Chinese calligraphy generation has achieved style transfer, generating calligraphy by specifying the calligrapher, font, and character style remains challenging. To address this, we propose a new Chinese calligraphy generation model 'Moyun' , which replaces the Unet in the Diffusion model with Vision Mamba and introduces the TripleLabel control mechanism to achieve controllable calligraphy generation. The model was tested on our large-scale dataset 'Mobao' of over 1.9 million images, and the results demonstrate that 'Moyun' can effectively control the generation process and produce calligraphy in the specified style. Even for calligraphy the calligrapher has not written, 'Moyun' can generate calligraphy that matches the style of the calligrapher.<|reference_end|> | arxiv | @article{liu2024moyun:,
title={Moyun: A Diffusion-Based Model for Style-Specific Chinese Calligraphy
Generation},
author={Kaiyuan Liu, Jiahao Mei, Hengyu Zhang, Yihuai Zhang, Xingjiao Wu,
Daoguo Dong, Liang He},
journal={arXiv preprint arXiv:2410.07618},
year={2024},
archivePrefix={arXiv},
eprint={2410.07618},
primaryClass={cs.CV cs.AI}
} | liu2024moyun: |
arxiv-667928 | 2410.07625 | MorCode: Face Morphing Attack Generation using Generative Codebooks | <|reference_start|>MorCode: Face Morphing Attack Generation using Generative Codebooks: Face recognition systems (FRS) can be compromised by face morphing attacks, which blend textural and geometric information from multiple facial images. The rapid evolution of generative AI, especially Generative Adversarial Networks (GAN) or Diffusion models, where encoded images are interpolated to generate high-quality face morphing images. In this work, we present a novel method for the automatic face morphing generation method \textit{MorCode}, which leverages a contemporary encoder-decoder architecture conditioned on codebook learning to generate high-quality morphing images. Extensive experiments were performed on the newly constructed morphing dataset using five state-of-the-art morphing generation techniques using both digital and print-scan data. The attack potential of the proposed morphing generation technique, \textit{MorCode}, was benchmarked using three different face recognition systems. The obtained results indicate the highest attack potential of the proposed \textit{MorCode} when compared with five state-of-the-art morphing generation methods on both digital and print scan data.<|reference_end|> | arxiv | @article{pn2024morcode:,
title={MorCode: Face Morphing Attack Generation using Generative Codebooks},
author={Aravinda Reddy PN, Raghavendra Ramachandra, Sushma Venkatesh,
Krothapalli Sreenivasa Rao, Pabitra Mitra, Rakesh Krishna},
journal={arXiv preprint arXiv:2410.07625},
year={2024},
archivePrefix={arXiv},
eprint={2410.07625},
primaryClass={cs.CV}
} | pn2024morcode: |
arxiv-667929 | 2410.07627 | Automatic Curriculum Expert Iteration for Reliable LLM Reasoning | <|reference_start|>Automatic Curriculum Expert Iteration for Reliable LLM Reasoning: Hallucinations (i.e., generating plausible but inaccurate content) and laziness (i.e. excessive refusals or defaulting to "I don't know") persist as major challenges in LLM reasoning. Current efforts to reduce hallucinations primarily focus on factual errors in knowledge-grounded tasks, often neglecting hallucinations related to faulty reasoning. Meanwhile, some approaches render LLMs overly conservative, limiting their problem-solving capabilities. To mitigate hallucination and laziness in reasoning tasks, we propose Automatic Curriculum Expert Iteration (Auto-CEI) to enhance LLM reasoning and align responses to the model's capabilities--assertively answering within its limits and declining when tasks exceed them. In our method, Expert Iteration explores the reasoning trajectories near the LLM policy, guiding incorrect paths back on track to reduce compounding errors and improve robustness; it also promotes appropriate "I don't know" responses after sufficient reasoning attempts. The curriculum automatically adjusts rewards, incentivizing extended reasoning before acknowledging incapability, thereby pushing the limits of LLM reasoning and aligning its behaviour with these limits. We compare Auto-CEI with various SOTA baselines across logical reasoning, mathematics, and planning tasks, where Auto-CEI achieves superior alignment by effectively balancing assertiveness and conservativeness.<|reference_end|> | arxiv | @article{zhao2024automatic,
title={Automatic Curriculum Expert Iteration for Reliable LLM Reasoning},
author={Zirui Zhao, Hanze Dong, Amrita Saha, Caiming Xiong, Doyen Sahoo},
journal={arXiv preprint arXiv:2410.07627},
year={2024},
archivePrefix={arXiv},
eprint={2410.07627},
primaryClass={cs.LG cs.AI cs.CL stat.ML}
} | zhao2024automatic |
arxiv-667930 | 2410.07629 | Secure Wearable Apps for Remote Healthcare Through Modern Cryptography | <|reference_start|>Secure Wearable Apps for Remote Healthcare Through Modern Cryptography: Wearable devices like smartwatches, wristbands, and fitness trackers are designed to be lightweight devices to be worn on the human body. With the increased connectivity of wearable devices, they will become integral to remote healthcare solutions. For example, a smartwatch can measure and upload a patient's vital signs to the cloud through a network which is monitored by software backed with Artificial Intelligence. When an anomaly of a patient is detected, it will be alerted to healthcare professionals for proper intervention. Remote healthcare offers substantial benefits for both patients and healthcare providers as patients may avoid expensive in-patient care by choosing the comfort of staying at home while being monitored after a surgery and healthcare providers can resolve challenges between limited resources and a growing population. While remote healthcare through wearable devices is ubiquitous and affordable, it raises concerns about patient privacy. Patients may wonder: Is my data stored in the cloud safe? Can anyone access and manipulate my data for blackmailing? Hence, securing patient private information end-to-end becomes crucial. This paper explores solutions for applying modern cryptography to secure wearable apps and ensure patient data is protected with confidentiality, integrity, and authenticity from wearable edge to cloud.<|reference_end|> | arxiv | @article{li2024secure,
title={Secure Wearable Apps for Remote Healthcare Through Modern Cryptography},
author={Andric Li, Grace Luo, Christopher Tao, Diego Zuluaga},
journal={arXiv preprint arXiv:2410.07629},
year={2024},
archivePrefix={arXiv},
eprint={2410.07629},
primaryClass={cs.CR}
} | li2024secure |
arxiv-667931 | 2410.07630 | Simplified POMDP Planning with an Alternative Observation Space and Formal Performance Guarantees | <|reference_start|>Simplified POMDP Planning with an Alternative Observation Space and Formal Performance Guarantees: Online planning under uncertainty in partially observable domains is an essential capability in robotics and AI. The partially observable Markov decision process (POMDP) is a mathematically principled framework for addressing decision-making problems in this challenging setting. However, finding an optimal solution for POMDPs is computationally expensive and is feasible only for small problems. In this work, we contribute a novel method to simplify POMDPs by switching to an alternative, more compact, observation space and simplified model to speedup planning with formal performance guarantees. We introduce the notion of belief tree topology, which encodes the levels and branches in the tree that use the original and alternative observation space and models. Each belief tree topology comes with its own policy space and planning performance. Our key contribution is to derive bounds between the optimal Q-function of the original POMDP and the simplified tree defined by a given topology with a corresponding simplified policy space. These bounds are then used as an adaptation mechanism between different tree topologies until the optimal action of the original POMDP can be determined. Further, we consider a specific instantiation of our framework, where the alternative observation space and model correspond to a setting where the state is fully observable. We evaluate our approach in simulation, considering exact and approximate POMDP solvers and demonstrating a significant speedup while preserving solution quality. We believe this work opens new exciting avenues for online POMDP planning with formal performance guarantees.<|reference_end|> | arxiv | @article{kong2024simplified,
title={Simplified POMDP Planning with an Alternative Observation Space and
Formal Performance Guarantees},
author={Da Kong, Vadim Indelman},
journal={arXiv preprint arXiv:2410.07630},
year={2024},
archivePrefix={arXiv},
eprint={2410.07630},
primaryClass={cs.RO}
} | kong2024simplified |
arxiv-667932 | 2410.07632 | Provable Privacy Attacks on Trained Shallow Neural Networks | <|reference_start|>Provable Privacy Attacks on Trained Shallow Neural Networks: We study what provable privacy attacks can be shown on trained, 2-layer ReLU neural networks. We explore two types of attacks; data reconstruction attacks, and membership inference attacks. We prove that theoretical results on the implicit bias of 2-layer neural networks can be used to provably reconstruct a set of which at least a constant fraction are training points in a univariate setting, and can also be used to identify with high probability whether a given point was used in the training set in a high dimensional setting. To the best of our knowledge, our work is the first to show provable vulnerabilities in this setting.<|reference_end|> | arxiv | @article{smorodinsky2024provable,
title={Provable Privacy Attacks on Trained Shallow Neural Networks},
author={Guy Smorodinsky and Gal Vardi and Itay Safran},
journal={arXiv preprint arXiv:2410.07632},
year={2024},
archivePrefix={arXiv},
eprint={2410.07632},
primaryClass={cs.LG cs.CR}
} | smorodinsky2024provable |
arxiv-667933 | 2410.07633 | DPL: Cross-quality DeepFake Detection via Dual Progressive Learning | <|reference_start|>DPL: Cross-quality DeepFake Detection via Dual Progressive Learning: Real-world DeepFake videos often undergo various compression operations, resulting in a range of video qualities. These varying qualities diversify the pattern of forgery traces, significantly increasing the difficulty of DeepFake detection. To address this challenge, we introduce a new Dual Progressive Learning (DPL) framework for cross-quality DeepFake detection. We liken this task to progressively drilling for underground water, where low-quality videos require more effort than high-quality ones. To achieve this, we develop two sequential-based branches to "drill waters" with different efforts. The first branch progressively excavates the forgery traces according to the levels of video quality, i.e., time steps, determined by a dedicated CLIP-based indicator. In this branch, a Feature Selection Module is designed to adaptively assign appropriate features to the corresponding time steps. Considering that different techniques may introduce varying forgery traces within the same video quality, we design a second branch targeting forgery identifiability as complementary. This branch operates similarly and shares the feature selection module with the first branch. Our design takes advantage of the sequential model where computational units share weights across different time steps and can memorize previous progress, elegantly achieving progressive learning while maintaining reasonable memory costs. Extensive experiments demonstrate the superiority of our method for cross-quality DeepFake detection.<|reference_end|> | arxiv | @article{zhang2024dpl:,
title={DPL: Cross-quality DeepFake Detection via Dual Progressive Learning},
author={Dongliang Zhang, Yunfei Li, Jiaran Zhou, Yuezun Li},
journal={arXiv preprint arXiv:2410.07633},
year={2024},
archivePrefix={arXiv},
eprint={2410.07633},
primaryClass={cs.CV}
} | zhang2024dpl: |
arxiv-667934 | 2410.07635 | Shift and matching queries for video semantic segmentation | <|reference_start|>Shift and matching queries for video semantic segmentation: Video segmentation is a popular task, but applying image segmentation models frame-by-frame to videos does not preserve temporal consistency. In this paper, we propose a method to extend a query-based image segmentation model to video using feature shift and query matching. The method uses a query-based architecture, where decoded queries represent segmentation masks. These queries should be matched before performing the feature shift to ensure that the shifted queries represent the same mask across different frames. Experimental results on CityScapes-VPS and VSPW show significant improvements from the baselines, highlighting the method's effectiveness in enhancing segmentation quality while efficiently reusing pre-trained weights.<|reference_end|> | arxiv | @article{mizuno2024shift,
title={Shift and matching queries for video semantic segmentation},
author={Tsubasa Mizuno, Toru Tamaki},
journal={arXiv preprint arXiv:2410.07635},
year={2024},
archivePrefix={arXiv},
eprint={2410.07635},
primaryClass={cs.CV}
} | mizuno2024shift |
arxiv-667935 | 2410.07638 | Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits | <|reference_start|>Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits: We propose a {\em novel} piecewise stationary linear bandit (PSLB) model, where the environment randomly samples a context from an unknown probability distribution at each changepoint, and the quality of an arm is measured by its return averaged over all contexts. The contexts and their distribution, as well as the changepoints are unknown to the agent. We design {\em Piecewise-Stationary $\varepsilon$-Best Arm Identification$^+$} (PS$\varepsilon$BAI$^+$), an algorithm that is guaranteed to identify an $\varepsilon$-optimal arm with probability $\ge 1-\delta$ and with a minimal number of samples. PS$\varepsilon$BAI$^+$ consists of two subroutines, PS$\varepsilon$BAI and {\sc Na\"ive $\varepsilon$-BAI} (N$\varepsilon$BAI), which are executed in parallel. PS$\varepsilon$BAI actively detects changepoints and aligns contexts to facilitate the arm identification process. When PS$\varepsilon$BAI and N$\varepsilon$BAI are utilized judiciously in parallel, PS$\varepsilon$BAI$^+$ is shown to have a finite expected sample complexity. By proving a lower bound, we show the expected sample complexity of PS$\varepsilon$BAI$^+$ is optimal up to a logarithmic factor. We compare PS$\varepsilon$BAI$^+$ to baseline algorithms using numerical experiments which demonstrate its efficiency. Both our analytical and numerical results corroborate that the efficacy of PS$\varepsilon$BAI$^+$ is due to the delicate change detection and context alignment procedures embedded in PS$\varepsilon$BAI.<|reference_end|> | arxiv | @article{hou2024almost,
title={Almost Minimax Optimal Best Arm Identification in Piecewise Stationary
Linear Bandits},
author={Yunlong Hou, Vincent Y. F. Tan, Zixin Zhong},
journal={arXiv preprint arXiv:2410.07638},
year={2024},
archivePrefix={arXiv},
eprint={2410.07638},
primaryClass={cs.LG cs.AI cs.IT math.IT stat.ML}
} | hou2024almost |
arxiv-667936 | 2410.07642 | Improving Numerical Stability of Normalized Mutual Information Estimator on High Dimensions | <|reference_start|>Mutual information provides a powerful, general-purpose metric for quantifying the amount of shared information between variables. Estimating normalized mutual information using a k-Nearest Neighbor (k-NN) based approach involves the calculation of the scaling-invariant k-NN radius. Calculation of the radius suffers from numerical overflow when the joint dimensionality of the data becomes high, typically in the range of several hundred dimensions. To address this issue, we propose a logarithmic transformation technique that improves the numerical stability of the radius calculation in high-dimensional spaces. By applying the proposed transformation during the calculation of the radius, numerical overflow is avoided, and precision is maintained. The proposed transformation is validated through both theoretical analysis and empirical evaluation, demonstrating its ability to stabilize the calculation without compromising the precision of the results.<|reference_end|> | arxiv | @article{tuononen2024improving,
title={Improving Numerical Stability of Normalized Mutual Information Estimator
on High Dimensions},
author={Marko Tuononen and Ville Hautam\"aki},
journal={arXiv preprint arXiv:2410.07642},
year={2024},
archivePrefix={arXiv},
eprint={2410.07642},
primaryClass={cs.IT cs.LG math.IT math.ST stat.TH}
} | tuononen2024improving |
arxiv-667937 | 2410.07643 | Rethinking Adversarial Inverse Reinforcement Learning: From the Angles of Policy Imitation and Transferable Reward Recovery | <|reference_start|>Rethinking Adversarial Inverse Reinforcement Learning: From the Angles of Policy Imitation and Transferable Reward Recovery: In scenarios of inverse reinforcement learning (IRL) with a single expert, adversarial inverse reinforcement learning (AIRL) serves as a foundational approach to providing comprehensive and transferable task descriptions by restricting the reward class, e.g., to state-only rewards. However, AIRL faces practical challenges, primarily stemming from the difficulty of verifying the unobservable transition matrix - often encountered in practice - under the specific conditions necessary for effective transfer. This paper reexamines AIRL in light of the unobservable transition matrix or limited informative priors. By applying random matrix theory (RMT), we demonstrate that AIRL can disentangle rewards for effective transfer with high probability, irrespective of specific conditions. This perspective reframes inadequate transfer in certain contexts. Specifically, it is attributed to the selection problem of the reinforcement learning algorithm employed by AIRL, which is characterized by training variance. Based on this insight, we propose a hybrid framework that integrates on-policy proximal policy optimization (PPO) in the source environment with off-policy soft actor-critic (SAC) in the target environment, leading to significant improvements in reward transfer effectiveness.<|reference_end|> | arxiv | @article{zhang2024rethinking,
title={Rethinking Adversarial Inverse Reinforcement Learning: From the Angles
of Policy Imitation and Transferable Reward Recovery},
author={Yangchun Zhang, Wang Zhou, Yirui Zhou},
journal={arXiv preprint arXiv:2410.07643},
year={2024},
archivePrefix={arXiv},
eprint={2410.07643},
primaryClass={stat.ML cs.LG}
} | zhang2024rethinking |
arxiv-667938 | 2410.07646 | R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression | <|reference_start|>R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression: Modern computing systems are capable of exascale calculations, which are revolutionizing the development and application of high-fidelity numerical models in computational science and engineering. While these systems continue to grow in processing power, the available system memory has not increased commensurately, and electrical power consumption continues to grow. A predominant approach to limit the memory usage in large-scale applications is to exploit the abundant processing power and continually recompute many low-level simulation quantities, rather than storing them. However, this approach can adversely impact the throughput of the simulation and diminish the benefits of modern computing architectures. We present two novel contributions to reduce the memory burden while maintaining performance in simulations based on finite element discretizations. The first contribution develops dictionary-based data compression schemes that detect and exploit the structure of the discretization, due to redundancies across the finite element mesh. These schemes are shown to reduce memory requirements by more than 99 percent on meshes with large numbers of nearly identical mesh cells. For applications where this structure does not exist, our second contribution leverages a recently developed augmented Lagrangian sequential quadratic programming algorithm to enable r-adaptive mesh optimization, with the goal of enhancing redundancies in the mesh. Numerical results demonstrate the effectiveness of the proposed methods to detect, exploit and enhance mesh structure on examples inspired by large-scale applications.<|reference_end|> | arxiv | @article{harper2024r-adaptive,
title={R-Adaptive Mesh Optimization to Enhance Finite Element Basis Compression},
author={Graham Harper, Denis Ridzal and Tim Wildey},
journal={arXiv preprint arXiv:2410.07646},
year={2024},
archivePrefix={arXiv},
eprint={2410.07646},
primaryClass={math.OC cs.NA math.NA}
} | harper2024r-adaptive |
arxiv-667939 | 2410.07648 | FLIER: Few-shot Language Image Models Embedded with Latent Representations | <|reference_start|>With the booming development of large vision-language models like Contrastive Language-Image Pre-training (CLIP), many CLIP-like methods have shown impressive abilities on visual recognition, especially in low-data regimes. However, we have noticed that most of these methods are limited to introducing new modifications to the text and image encoders. Recently, latent diffusion models (LDMs) have shown good ability in image generation. The potent capabilities of LDMs direct our focus towards the latent representations sampled by UNet. Inspired by the conjecture in CoOp that learned prompts encode meanings beyond the existing vocabulary, we assume that, for deep models, the latent representations are a concise and accurate understanding of images, in which high-frequency, imperceptible details are abstracted away. In this paper, we propose a Few-shot Language Image model Embedded with latent Representations (FLIER) for image recognition by introducing a latent encoder jointly trained with CLIP's image encoder, which incorporates the pre-trained vision-language knowledge of CLIP and the latent representations from Stable Diffusion. We first generate images and corresponding latent representations via Stable Diffusion with the textual inputs from GPT-3. With latent representations as "models-understandable pixels", we introduce a flexible convolutional neural network with two convolutional layers to be the latent encoder, which is simpler than most encoders in vision-language models. The latent encoder is jointly trained with CLIP's image encoder, transferring pre-trained knowledge to downstream tasks better. Experiments and extensive ablation studies on various visual classification tasks demonstrate that FLIER achieves state-of-the-art performance on 11 datasets for most few-shot classification tasks.<|reference_end|> | arxiv | @article{zhou2024flier:,
title={FLIER: Few-shot Language Image Models Embedded with Latent
Representations},
author={Zhinuo Zhou, Peng Zhou, Xiaoyong Pan},
journal={arXiv preprint arXiv:2410.07648},
year={2024},
archivePrefix={arXiv},
eprint={2410.07648},
primaryClass={cs.CV}
} | zhou2024flier: |
arxiv-667940 | 2410.07650 | Optimal additive quaternary codes of dimension $3.5$ | <|reference_start|>After the optimal parameters of additive quaternary codes of dimension $k\le 3$ have been determined, there is some recent activity to settle the next case of dimension $k=3.5$. Here we complete dimension $k=3.5$ and give partial results for dimension $k=4$.<|reference_end|> | arxiv | @article{kurz2024optimal,
title={Optimal additive quaternary codes of dimension $3.5$},
author={Sascha Kurz},
journal={arXiv preprint arXiv:2410.07650},
year={2024},
archivePrefix={arXiv},
eprint={2410.07650},
primaryClass={math.CO cs.IT math.IT}
} | kurz2024optimal |
arxiv-667941 | 2410.07651 | Theoretical limits of descending $\ell_0$ sparse-regression ML algorithms | <|reference_start|>Theoretical limits of descending $\ell_0$ sparse-regression ML algorithms: We study the theoretical limits of the $\ell_0$ (quasi) norm based optimization algorithms when employed for solving classical compressed sensing or sparse regression problems. Considering standard contexts with deterministic signals and statistical systems, we utilize \emph{Fully lifted random duality theory} (Fl RDT) and develop a generic analytical program for studying performance of the \emph{maximum-likelihood} (ML) decoding. The key ML performance parameter, the residual \emph{root mean square error} ($\textbf{RMSE}$), is uncovered to exhibit the so-called \emph{phase-transition} (PT) phenomenon. The associated aPT curve, which separates the regions of systems dimensions where \emph{an} $\ell_0$ based algorithm succeeds or fails in achieving small (comparable to the noise) ML optimal $\textbf{RMSE}$ is precisely determined as well. In parallel, we uncover the existence of another dPT curve which does the same separation but for practically feasible \emph{descending} $\ell_0$ ($d\ell_0$) algorithms. Concrete implementation and practical relevance of the Fl RDT typically rely on the ability to conduct a sizeable set of the underlying numerical evaluations which reveal that for the ML decoding the Fl RDT converges astonishingly fast with corrections in the estimated quantities not exceeding $\sim 0.1\%$ already on the third level of lifting. Analytical results are supplemented by a sizeable set of numerical experiments where we implement a simple variant of $d\ell_0$ and demonstrate that its practical performance very accurately matches the theoretical predictions. Completely surprisingly, a remarkably precise agreement between the simulations and the theory is observed for fairly small dimensions of the order of 100.<|reference_end|> | arxiv | @article{stojnic2024theoretical,
title={Theoretical limits of descending $\ell_0$ sparse-regression ML
algorithms},
author={Mihailo Stojnic},
journal={arXiv preprint arXiv:2410.07651},
year={2024},
archivePrefix={arXiv},
eprint={2410.07651},
primaryClass={stat.ML cs.IT cs.LG math.IT math.ST stat.TH}
} | stojnic2024theoretical |
arxiv-667942 | 2410.07652 | StablePrompt: Automatic Prompt Tuning using Reinforcement Learning for Large Language Models | <|reference_start|>Finding appropriate prompts for a specific task has become an important issue as the usage of Large Language Models (LLMs) has expanded. Reinforcement Learning (RL) is widely used for prompt tuning, but its inherent instability and environmental dependency make it difficult to use in practice. In this paper, we propose StablePrompt, which strikes a balance between training stability and search space, mitigating the instability of RL and producing high-performance prompts. We formulate prompt tuning as an online RL problem between the agent and target LLM and introduce Adaptive Proximal Policy Optimization (APPO). APPO introduces an LLM anchor model to adaptively adjust the rate of policy updates. This allows for flexible prompt search while preserving the linguistic ability of the pre-trained LLM. StablePrompt outperforms previous methods on various tasks including text classification, question answering, and text generation. Our code can be found on GitHub.<|reference_end|> | arxiv | @article{kwon2024stableprompt:,
title={StablePrompt: Automatic Prompt Tuning using Reinforcement Learning for
Large Language Models},
author={Minchan Kwon, Gaeun Kim, Jongsuk Kim, Haeil Lee, Junmo Kim},
journal={arXiv preprint arXiv:2410.07652},
year={2024},
archivePrefix={arXiv},
eprint={2410.07652},
primaryClass={cs.CL}
} | kwon2024stableprompt: |
arxiv-667943 | 2410.07654 | Firzen: Firing Strict Cold-Start Items with Frozen Heterogeneous and Homogeneous Graphs for Recommendation | <|reference_start|>Recommendation models utilizing unique identities (IDs) to represent distinct users and items have dominated the recommender systems literature for over a decade. Since multi-modal content of items (e.g., texts and images) and knowledge graphs (KGs) may reflect the interaction-related users' preferences and items' characteristics, they have been utilized as useful side information to further improve the recommendation quality. However, the success of such methods is often limited to either warm-start or strict cold-start item recommendation, in which some items neither appear in the training data nor have any interactions in the test stage: (1) Some fail to learn the embedding of a strict cold-start item since side information is only utilized to enhance the warm-start ID representations; (2) The others deteriorate the performance of warm-start recommendation since unrelated multi-modal content or entities in KGs may blur the final representations. In this paper, we propose a unified framework incorporating multi-modal content of items and KGs to effectively solve both strict cold-start and warm-start recommendation, termed Firzen, which extracts the user-item collaborative information over a frozen heterogeneous graph (collaborative knowledge graph), and exploits the item-item semantic structures and user-user behavioral association over frozen homogeneous graphs (item-item relation graph and user-user co-occurrence graph). Furthermore, we build four unified strict cold-start evaluation benchmarks based on publicly available Amazon datasets and a real-world industrial dataset from Weixin Channels via rearranging the interaction data and constructing KGs. Extensive empirical results demonstrate that our model yields significant improvements for strict cold-start recommendation and outperforms or matches the state-of-the-art performance in the warm-start scenario.<|reference_end|> | arxiv | @article{he2024firzen:,
title={Firzen: Firing Strict Cold-Start Items with Frozen Heterogeneous and
Homogeneous Graphs for Recommendation},
author={Hulingxiao He, Xiangteng He, Yuxin Peng, Zifei Shan, Xin Su},
journal={arXiv preprint arXiv:2410.07654},
year={2024},
archivePrefix={arXiv},
eprint={2410.07654},
primaryClass={cs.IR}
} | he2024firzen: |
arxiv-667944 | 2410.07656 | Mechanistic Permutability: Match Features Across Layers | <|reference_start|>Mechanistic Permutability: Match Features Across Layers: Understanding how features evolve across layers in deep neural networks is a fundamental challenge in mechanistic interpretability, particularly due to polysemanticity and feature superposition. While Sparse Autoencoders (SAEs) have been used to extract interpretable features from individual layers, aligning these features across layers has remained an open problem. In this paper, we introduce SAE Match, a novel, data-free method for aligning SAE features across different layers of a neural network. Our approach involves matching features by minimizing the mean squared error between the folded parameters of SAEs, a technique that incorporates activation thresholds into the encoder and decoder weights to account for differences in feature scales. Through extensive experiments on the Gemma 2 language model, we demonstrate that our method effectively captures feature evolution across layers, improving feature matching quality. We also show that features persist over several layers and that our approach can approximate hidden states across layers. Our work advances the understanding of feature dynamics in neural networks and provides a new tool for mechanistic interpretability studies.<|reference_end|> | arxiv | @article{balagansky2024mechanistic,
title={Mechanistic Permutability: Match Features Across Layers},
author={Nikita Balagansky, Ian Maksimov, Daniil Gavrilov},
journal={arXiv preprint arXiv:2410.07656},
year={2024},
archivePrefix={arXiv},
eprint={2410.07656},
primaryClass={cs.LG}
} | balagansky2024mechanistic |
arxiv-667945 | 2410.07658 | SeMv-3D: Towards Semantic and Mutil-view Consistency simultaneously for General Text-to-3D Generation with Triplane Priors | <|reference_start|>SeMv-3D: Towards Semantic and Mutil-view Consistency simultaneously for General Text-to-3D Generation with Triplane Priors: Recent advancements in generic 3D content generation from text prompts have been remarkable by fine-tuning text-to-image diffusion (T2I) models or employing these T2I models as priors to learn a general text-to-3D model. While fine-tuning-based methods ensure great alignment between text and generated views, i.e., semantic consistency, their ability to achieve multi-view consistency is hampered by the absence of 3D constraints, even in limited view. In contrast, prior-based methods focus on regressing 3D shapes with any view that maintains uniformity and coherence across views, i.e., multi-view consistency, but such approaches inevitably compromise visual-textual alignment, leading to a loss of semantic details in the generated objects. To achieve semantic and multi-view consistency simultaneously, we propose SeMv-3D, a novel framework for general text-to-3d generation. Specifically, we propose a Triplane Prior Learner (TPL) that learns triplane priors with 3D spatial features to maintain consistency among different views at the 3D level, e.g., geometry and texture. Moreover, we design a Semantic-aligned View Synthesizer (SVS) that preserves the alignment between 3D spatial features and textual semantics in latent space. In SVS, we devise a simple yet effective batch sampling and rendering strategy that can generate arbitrary views in a single feed-forward inference. Extensive experiments present our SeMv-3D's superiority over state-of-the-art performances with semantic and multi-view consistency in any view. Our code and more visual results are available at https://anonymous.4open.science/r/SeMv-3D-6425.<|reference_end|> | arxiv | @article{cai2024semv-3d:,
title={SeMv-3D: Towards Semantic and Mutil-view Consistency simultaneously for
General Text-to-3D Generation with Triplane Priors},
author={Xiao Cai, Pengpeng Zeng, Lianli Gao, Junchen Zhu, Jiaxin Zhang, Sitong
Su, Heng Tao Shen, Jingkuan Song},
journal={arXiv preprint arXiv:2410.07658},
year={2024},
archivePrefix={arXiv},
eprint={2410.07658},
primaryClass={cs.CV}
} | cai2024semv-3d: |
arxiv-667946 | 2410.07659 | MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | <|reference_start|>MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion: The spatio-temporal complexity of video data presents significant challenges in tasks such as compression, generation, and inpainting. We present four key contributions to address the challenges of spatiotemporal video processing. First, we introduce the 3D Mobile Inverted Vector-Quantization Variational Autoencoder (3D-MBQ-VAE), which combines Variational Autoencoders (VAEs) with masked token modeling to enhance spatiotemporal video compression. The model achieves superior temporal consistency and state-of-the-art (SOTA) reconstruction quality by employing a novel training strategy with full frame masking. Second, we present MotionAura, a text-to-video generation framework that utilizes vector-quantized diffusion models to discretize the latent space and capture complex motion dynamics, producing temporally coherent videos aligned with text prompts. Third, we propose a spectral transformer-based denoising network that processes video data in the frequency domain using the Fourier Transform. This method effectively captures global context and long-range dependencies for high-quality video generation and denoising. Lastly, we introduce a downstream task of Sketch Guided Video Inpainting. This task leverages Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning. Our models achieve SOTA performance on a range of benchmarks. Our work offers robust frameworks for spatiotemporal modeling and user-driven video content manipulation. We will release the code, datasets, and models in open-source.<|reference_end|> | arxiv | @article{susladkar2024motionaura:,
title={MotionAura: Generating High-Quality and Motion Consistent Videos using
Discrete Diffusion},
author={Onkar Susladkar, Jishu Sen Gupta, Chirag Sehgal, Sparsh Mittal, Rekha
Singhal},
journal={arXiv preprint arXiv:2410.07659},
year={2024},
archivePrefix={arXiv},
eprint={2410.07659},
primaryClass={cs.CV}
} | susladkar2024motionaura: |
arxiv-667947 | 2410.07662 | Scalable and Resource-Efficient Second-Order Federated Learning via Over-the-Air Aggregation | <|reference_start|>Second-order federated learning (FL) algorithms offer faster convergence than their first-order counterparts by leveraging curvature information. However, they are hindered by high computational and storage costs, particularly for large-scale models. Furthermore, the communication overhead associated with large models and digital transmission exacerbates these challenges, causing communication bottlenecks. In this work, we propose a scalable second-order FL algorithm using a sparse Hessian estimate and leveraging over-the-air aggregation, making it feasible for larger models. Our simulation results demonstrate savings of more than $67\%$ in communication resources and energy compared to other first- and second-order baselines.<|reference_end|> | arxiv | @article{ghalkha2024scalable,
title={Scalable and Resource-Efficient Second-Order Federated Learning via
Over-the-Air Aggregation},
author={Abdulmomen Ghalkha, Chaouki Ben Issaid, and Mehdi Bennis},
journal={arXiv preprint arXiv:2410.07662},
year={2024},
archivePrefix={arXiv},
eprint={2410.07662},
primaryClass={cs.LG}
} | ghalkha2024scalable |
arxiv-667948 | 2410.07663 | TDDSR: Single-Step Diffusion with Two Discriminators for Super Resolution | <|reference_start|>TDDSR: Single-Step Diffusion with Two Discriminators for Super Resolution: Super-resolution methods are increasingly being specialized for both real-world and face-specific tasks. However, many existing approaches rely on simplistic degradation models, which limits their ability to handle complex and unknown degradation patterns effectively. While diffusion-based super-resolution techniques have recently shown impressive results, they are still constrained by the need for numerous inference steps. To address this, we propose TDDSR, an efficient single-step diffusion-based super-resolution method. Our method, distilled from a pre-trained teacher model and based on a diffusion network, performs super-resolution in a single step. It integrates a learnable downsampler to capture diverse degradation patterns and employs two discriminators, one for high-resolution and one for low-resolution images, to enhance the overall performance. Experimental results demonstrate its effectiveness across real-world and face-specific SR tasks, achieving performance comparable to, or even surpassing, another single-step method, previous state-of-the-art models, and the teacher model.<|reference_end|> | arxiv | @article{kim2024tddsr:,
title={TDDSR: Single-Step Diffusion with Two Discriminators for Super
Resolution},
author={Sohwi Kim, Tae-Kyun Kim},
journal={arXiv preprint arXiv:2410.07663},
year={2024},
archivePrefix={arXiv},
eprint={2410.07663},
primaryClass={eess.IV cs.CV}
} | kim2024tddsr: |
arxiv-667949 | 2410.07666 | Computational Complexities of Folding | <|reference_start|>We prove several hardness results on folding origami crease patterns. Flat-folding finite crease patterns is fixed-parameter tractable in the ply of the folded pattern (how many layers overlap at any point) and the treewidth of an associated cell adjacency graph. Under the exponential time hypothesis, the singly-exponential dependence of our algorithm on treewidth is necessary, even for bounded ply. Improving the dependence on ply would require progress on the unsolved map folding problem. Finding the shape of a polyhedron folded from a net with triangular faces and integer edge lengths is not possible in algebraic computation tree models of computation that at each tree node allow either the computation of arbitrary integer roots of real numbers, or the extraction of roots of polynomials with bounded degree and integer coefficients. For a model of reconfigurable origami in which origami squares are attached at one edge by a hinge to a rigid surface, moving from one flat-folded state to another by changing the position of one square at a time is PSPACE-complete, and counting flat-folded states is #P-complete. For self-similar square crease patterns with infinitely many folds, testing flat-foldability is undecidable.<|reference_end|> | arxiv | @article{eppstein2024computational,
title={Computational Complexities of Folding},
author={David Eppstein},
journal={arXiv preprint arXiv:2410.07666},
year={2024},
archivePrefix={arXiv},
eprint={2410.07666},
primaryClass={cs.CG cs.CC cs.DS}
} | eppstein2024computational |
arxiv-667950 | 2410.07669 | Delta-ICM: Entropy Modeling with Delta Function for Learned Image Compression | <|reference_start|>Delta-ICM: Entropy Modeling with Delta Function for Learned Image Compression: Image Coding for Machines (ICM) is becoming more important as research in computer vision progresses. ICM is a vital research field that pursues the use of images for image recognition models, facilitating efficient image transmission and storage. The demand for recognition models is growing rapidly among the general public, and their performance continues to improve. To meet these needs, exchanging image data between consumer devices and cloud AI using ICM technology could be one possible solution. In ICM, various image compression methods have adopted Learned Image Compression (LIC). LIC includes an entropy model for estimating the bitrate of latent features, and the design of this model significantly affects its performance. Typically, LIC methods assume that the distribution of latent features follows a normal distribution. This assumption is effective for compressing images intended for human vision. However, employing an entropy model based on normal distribution is inefficient in ICM due to the limitation of image parts that require precise decoding. To address this, we propose Delta-ICM, which uses a probability distribution based on a delta function. Assuming the delta distribution as a distribution of latent features reduces the entropy of image portions unnecessary for machines. We compress the remaining portions using an entropy model based on normal distribution, similar to existing methods. Delta-ICM selects between the entropy model based on the delta distribution and the one based on the normal distribution for each latent feature. Our method outperforms existing ICM methods in image compression performance aimed at machines.<|reference_end|> | arxiv | @article{shindo2024delta-icm:,
title={Delta-ICM: Entropy Modeling with Delta Function for Learned Image
Compression},
author={Takahiro Shindo, Taiju Watanabe, Yui Tatsumi, Hiroshi Watanabe},
journal={arXiv preprint arXiv:2410.07669},
year={2024},
archivePrefix={arXiv},
eprint={2410.07669},
primaryClass={cs.CV eess.IV}
} | shindo2024delta-icm: |
arxiv-667951 | 2410.07670 | Invisibility Cloak: Disappearance under Human Pose Estimation via Backdoor Attacks | <|reference_start|>Invisibility Cloak: Disappearance under Human Pose Estimation via Backdoor Attacks: Human Pose Estimation (HPE) has been widely applied in autonomous systems such as self-driving cars. However, the potential risks of HPE to adversarial attacks have not received comparable attention with image classification or segmentation tasks. Existing works on HPE robustness focus on misleading an HPE system to provide wrong predictions that still indicate some human poses. In this paper, we study the vulnerability of HPE systems to disappearance attacks, where the attacker aims to subtly alter the HPE training process via backdoor techniques so that any input image with some specific trigger will not be recognized as involving any human pose. As humans are typically at the center of HPE systems, such attacks can induce severe security hazards, e.g., pedestrians' lives will be threatened if a self-driving car incorrectly understands the front scene due to disappearance attacks. To achieve the adversarial goal of disappearance, we propose IntC, a general framework to craft Invisibility Cloak in the HPE domain. The core of our work lies in the design of target HPE labels that do not represent any human pose. In particular, we propose three specific backdoor attacks based on our IntC framework with different label designs. IntC-S and IntC-E, respectively designed for regression- and heatmap-based HPE techniques, concentrate the keypoints of triggered images in a tiny, imperceptible region. Further, to improve the attack's stealthiness, IntC-L designs the target poisons to capture the label outputs of typical landscape images without a human involved, achieving disappearance and reducing detectability simultaneously. Extensive experiments demonstrate the effectiveness and generalizability of our IntC methods in achieving the disappearance goal. By revealing the vulnerability of HPE to disappearance and backdoor attacks, we hope our work can raise awareness of the potential risks ...<|reference_end|> | arxiv | @article{zhang2024invisibility,
title={Invisibility Cloak: Disappearance under Human Pose Estimation via
Backdoor Attacks},
author={Minxing Zhang, Michael Backes, Xiao Zhang},
journal={arXiv preprint arXiv:2410.07670},
year={2024},
archivePrefix={arXiv},
eprint={2410.07670},
primaryClass={cs.CR}
} | zhang2024invisibility |
arxiv-667952 | 2410.07671 | DISCO: A Hierarchical Disentangled Cognitive Diagnosis Framework for Interpretable Job Recommendation | <|reference_start|>DISCO: A Hierarchical Disentangled Cognitive Diagnosis Framework for Interpretable Job Recommendation: The rapid development of online recruitment platforms has created unprecedented opportunities for job seekers while concurrently posing the significant challenge of quickly and accurately pinpointing positions that align with their skills and preferences. Job recommendation systems have significantly alleviated the extensive search burden for job seekers by optimizing user engagement metrics, such as clicks and applications, thus achieving notable success. In recent years, a substantial amount of research has been devoted to developing effective job recommendation models, primarily focusing on text-matching based and behavior modeling based methods. While these approaches have realized impressive outcomes, it is imperative to note that research on the explainability of recruitment recommendations remains profoundly unexplored. To this end, in this paper, we propose DISCO, a hierarchical Disentanglement based Cognitive diagnosis framework, aimed at flexibly accommodating the underlying representation learning model for effective and interpretable job recommendations. Specifically, we first design a hierarchical representation disentangling module to explicitly mine the hierarchical skill-related factors implied in hidden representations of job seekers and jobs. Subsequently, we propose level-aware association modeling to enhance information communication and robust representation learning both inter- and intra-level, which consists of the interlevel knowledge influence module and the level-wise contrastive learning. Finally, we devise an interaction diagnosis module incorporating a neural diagnosis function for effectively modeling the multi-level recruitment interaction process between job seekers and jobs, which introduces the cognitive measurement theory.<|reference_end|> | arxiv | @article{yu2024disco:,
title={DISCO: A Hierarchical Disentangled Cognitive Diagnosis Framework for
Interpretable Job Recommendation},
author={Xiaoshan Yu, Chuan Qin, Qi Zhang, Chen Zhu, Haiping Ma, Xingyi Zhang,
Hengshu Zhu},
journal={arXiv preprint arXiv:2410.07671},
year={2024},
archivePrefix={arXiv},
eprint={2410.07671},
primaryClass={cs.IR cs.AI}
} | yu2024disco: |
arxiv-667953 | 2410.07672 | MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization | <|reference_start|>MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization: As large language models (LLMs) are rapidly advancing and achieving near-human capabilities, aligning them with human values is becoming more urgent. In scenarios where LLMs outperform humans, we face a weak-to-strong alignment problem where we need to effectively align strong student LLMs through weak supervision generated by weak teachers. Existing alignment methods mainly focus on strong-to-weak alignment and self-alignment settings, and it is impractical to adapt them to the much harder weak-to-strong alignment setting. To fill this gap, we propose a multi-agent contrastive preference optimization (MACPO) framework. MACPO facilitates weak teachers and strong students to learn from each other by iteratively reinforcing unfamiliar positive behaviors while penalizing familiar negative ones. To get this, we devise a mutual positive behavior augmentation strategy to encourage weak teachers and strong students to learn from each other's positive behavior and further provide higher quality positive behavior for the next iteration. Additionally, we propose a hard negative behavior construction strategy to induce weak teachers and strong students to generate familiar negative behavior by fine-tuning on negative behavioral data. Experimental results on the HH-RLHF and PKU-SafeRLHF datasets, evaluated using both automatic metrics and human judgments, demonstrate that MACPO simultaneously improves the alignment performance of strong students and weak teachers. Moreover, as the number of weak teachers increases, MACPO achieves better weak-to-strong alignment performance through more iteration optimization rounds.<|reference_end|> | arxiv | @article{lyu2024macpo:,
title={MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference
Optimization},
author={Yougang Lyu, Lingyong Yan, Zihan Wang, Dawei Yin, Pengjie Ren, Maarten
de Rijke, Zhaochun Ren},
journal={arXiv preprint arXiv:2410.07672},
year={2024},
archivePrefix={arXiv},
eprint={2410.07672},
primaryClass={cs.CL cs.AI}
} | lyu2024macpo: |
arxiv-667954 | 2410.07673 | Multimodal Clickbait Detection by De-confounding Biases Using Causal Representation Inference | <|reference_start|>Multimodal Clickbait Detection by De-confounding Biases Using Causal Representation Inference: This paper focuses on detecting clickbait posts on the Web. These posts often use eye-catching disinformation in mixed modalities to mislead users to click for profit. That affects the user experience and thus would be blocked by content provider. To escape detection, malicious creators use tricks to add some irrelevant non-bait content into bait posts, dressing them up as legal to fool the detector. This content often has biased relations with non-bait labels, yet traditional detectors tend to make predictions based on simple co-occurrence rather than grasping inherent factors that lead to malicious behavior. This spurious bias would easily cause misjudgments. To address this problem, we propose a new debiased method based on causal inference. We first employ a set of features in multiple modalities to characterize the posts. Considering these features are often mixed up with unknown biases, we then disentangle three kinds of latent factors from them, including the invariant factor that indicates intrinsic bait intention; the causal factor which reflects deceptive patterns in a certain scenario, and non-causal noise. By eliminating the noise that causes bias, we can use invariant and causal factors to build a robust model with good generalization ability. Experiments on three popular datasets show the effectiveness of our approach.<|reference_end|> | arxiv | @article{yu2024multimodal,
title={Multimodal Clickbait Detection by De-confounding Biases Using Causal
Representation Inference},
author={Jianxing Yu, Shiqi Wang, Han Yin, Zhenlong Sun, Ruobing Xie, Bo Zhang,
Yanghui Rao},
journal={arXiv preprint arXiv:2410.07673},
year={2024},
archivePrefix={arXiv},
eprint={2410.07673},
primaryClass={cs.LG cs.AI}
} | yu2024multimodal |
arxiv-667955 | 2410.07675 | Adversarial Robustness Overestimation and Instability in TRADES | <|reference_start|>This paper examines the phenomenon of probabilistic robustness overestimation in TRADES, a prominent adversarial training method. Our study reveals that TRADES sometimes yields disproportionately high PGD validation accuracy compared to the AutoAttack testing accuracy in the multiclass classification task. This discrepancy highlights a significant overestimation of robustness for these instances, potentially linked to gradient masking. We further analyze the parameters contributing to unstable models that lead to overestimation. Our findings indicate that smaller batch sizes, lower beta values (which control the weight of the robust loss term in TRADES), larger learning rates, and higher class complexity (e.g., CIFAR-100 versus CIFAR-10) are associated with an increased likelihood of robustness overestimation. By examining metrics such as the First-Order Stationary Condition (FOSC), inner-maximization, and gradient information, we identify the underlying cause of this phenomenon as gradient masking and provide insights into it. Furthermore, our experiments show that certain unstable training instances may return to a state without robust overestimation, inspiring our attempts at a solution. In addition to adjusting parameter settings to reduce instability or retraining when overestimation occurs, we recommend incorporating Gaussian noise in inputs when the FOSC score exceeds the threshold. This method aims to mitigate robustness overestimation of TRADES and other similar methods at its source, ensuring a more reliable representation of adversarial robustness during evaluation.<|reference_end|> | arxiv | @article{li2024adversarial,
title={Adversarial Robustness Overestimation and Instability in TRADES},
author={Jonathan Weiping Li, Ren-Wei Liang, Cheng-Han Yeh, Cheng-Chang Tsai,
Kuanchun Yu, Chun-Shien Lu, Shang-Tse Chen},
journal={arXiv preprint arXiv:2410.07675},
year={2024},
archivePrefix={arXiv},
eprint={2410.07675},
primaryClass={cs.LG cs.AI}
} | li2024adversarial |
arxiv-667956 | 2410.07677 | Smart Audit System Empowered by LLM | <|reference_start|>Smart Audit System Empowered by LLM: Manufacturing quality audits are pivotal for ensuring high product standards in mass production environments. Traditional auditing processes, however, are labor-intensive and reliant on human expertise, posing challenges in maintaining transparency, accountability, and continuous improvement across complex global supply chains. To address these challenges, we propose a smart audit system empowered by large language models (LLMs). Our approach introduces three innovations: a dynamic risk assessment model that streamlines audit procedures and optimizes resource allocation; a manufacturing compliance copilot that enhances data processing, retrieval, and evaluation for a self-evolving manufacturing knowledge base; and a Re-act framework commonality analysis agent that provides real-time, customized analysis to empower engineers with insights for supplier improvement. These enhancements elevate audit efficiency and effectiveness, with testing scenarios demonstrating an improvement of over 24%.<|reference_end|> | arxiv | @article{yao2024smart,
title={Smart Audit System Empowered by LLM},
author={Xu Yao, Xiaoxu Wu, Xi Li, Huan Xu, Chenlei Li, Ping Huang, Si Li,
Xiaoning Ma, Jiulong Shan},
journal={arXiv preprint arXiv:2410.07677},
year={2024},
archivePrefix={arXiv},
eprint={2410.07677},
primaryClass={cs.CL}
} | yao2024smart |
arxiv-667957 | 2410.07678 | FedEP: Tailoring Attention to Heterogeneous Data Distribution with Entropy Pooling for Decentralized Federated Learning | <|reference_start|>Federated Learning (FL) performance is highly influenced by data distribution across clients, and non-Independent and Identically Distributed (non-IID) data leads to a slower convergence of the global model and a decrease in model effectiveness. The existing algorithms for solving the non-IID problem are focused on the traditional centralized FL (CFL), where a central server is used for model aggregation. However, in decentralized FL (DFL), nodes lack the overall vision of the federation. To address the non-IID problem in DFL, this paper proposes a novel DFL aggregation algorithm, Federated Entropy Pooling (FedEP). FedEP mitigates the client drift problem by incorporating the statistical characteristics of local distributions instead of any actual data. Prior to training, each client conducts a local distribution fitting using a Gaussian Mixture Model (GMM) and shares the resulting statistical characteristics with its neighbors. After receiving the statistical characteristics shared by its neighbors, each node tries to fit the global data distribution. In the aggregation phase, each node calculates the Kullback-Leibler (KL) divergences of the local data distribution over the fitted global data distribution, giving the weights to generate the aggregated model. Extensive experiments have demonstrated that FedEP can achieve faster convergence and show higher test performance than state-of-the-art approaches.<|reference_end|> | arxiv | @article{feng2024fedep:,
title={FedEP: Tailoring Attention to Heterogeneous Data Distribution with
Entropy Pooling for Decentralized Federated Learning},
author={Chao Feng, Hongjie Guan, Alberto Huertas Celdr\'an, Jan von der Assen,
G\'er\^ome Bovet, Burkhard Stiller},
journal={arXiv preprint arXiv:2410.07678},
year={2024},
archivePrefix={arXiv},
eprint={2410.07678},
primaryClass={cs.LG}
} | feng2024fedep: |
arxiv-667958 | 2410.07679 | Relational Diffusion Distillation for Efficient Image Generation | <|reference_start|>Although the diffusion model has achieved remarkable performance in the field of image generation, its high inference delay hinders its wide application in edge devices with scarce computing resources. Therefore, many training-free sampling methods have been proposed to reduce the number of sampling steps required for diffusion models. However, they perform poorly under a very small number of sampling steps. Thanks to the emergence of knowledge distillation technology, the existing training scheme methods have achieved excellent results at very low step numbers. However, the current methods mainly focus on designing novel diffusion model sampling methods with knowledge distillation. How to transfer better diffusion knowledge from teacher models is a more valuable problem but rarely studied. Therefore, we propose Relational Diffusion Distillation (RDD), a novel distillation method tailored specifically for distilling diffusion models. Unlike existing methods that simply align teacher and student models at pixel level or feature distributions, our method introduces cross-sample relationship interaction during the distillation process and alleviates the memory constraints induced by multiple sample interactions. Our RDD significantly enhances the effectiveness of the progressive distillation framework within the diffusion model. Extensive experiments on several datasets (e.g., CIFAR-10 and ImageNet) demonstrate that our proposed RDD leads to a 1.47 FID decrease under 1 sampling step compared to state-of-the-art diffusion distillation methods and achieves a 256x speed-up compared to the DDIM strategy. Code is available at https://github.com/cantbebetter2/RDD.<|reference_end|> | arxiv | @article{feng2024relational,
title={Relational Diffusion Distillation for Efficient Image Generation},
author={Weilun Feng, Chuanguang Yang, Zhulin An, Libo Huang, Boyu Diao, Fei
Wang, Yongjun Xu},
journal={arXiv preprint arXiv:2410.07679},
year={2024},
archivePrefix={arXiv},
eprint={2410.07679},
primaryClass={cs.CV}
} | feng2024relational |
arxiv-667959 | 2410.07682 | Patterned Structure Muscle : Arbitrary Shaped Wire-driven Artificial Muscle Utilizing Anisotropic Flexible Structure for Musculoskeletal Robots | <|reference_start|>Muscles of the human body are composed of tiny actuators made up of myosin and actin filaments. They can exert force in various shapes such as curved or flat, under contact forces and deformations from the environment. On the other hand, muscles in musculoskeletal robots so far have faced challenges in generating force in such shapes and environments. To address this issue, we propose Patterned Structure Muscle (PSM), artificial muscles for musculoskeletal robots. PSM utilizes patterned structures with anisotropic characteristics and wire-driven mechanisms, and is made of the flexible material Thermoplastic Polyurethane (TPU) using FDM 3D printing. This method enables the creation of various shapes of muscles, such as simple 1 degree-of-freedom (DOF) muscles, multi-DOF wide area muscles, joint-covering muscles, and branched muscles. We created an upper arm structure using these muscles to demonstrate a wide range of motion, lifting heavy objects, and movements through environmental contact. These experiments show that the proposed PSM is capable of operating in various shapes and environments, and is suitable for the muscles of musculoskeletal robots.<|reference_end|> | arxiv | @article{yoshimura2024patterned,
title={Patterned Structure Muscle : Arbitrary Shaped Wire-driven Artificial
Muscle Utilizing Anisotropic Flexible Structure for Musculoskeletal Robots},
author={Shunnosuke Yoshimura, Akihiro Miki, Kazuhiro Miyama, Yuta Sahara,
Kento Kawaharazuka, Kei Okada, and Masayuki Inaba},
journal={arXiv preprint arXiv:2410.07682},
year={2024},
archivePrefix={arXiv},
eprint={2410.07682},
primaryClass={cs.RO}
} | yoshimura2024patterned |
arxiv-667960 | 2410.07685 | Breaking the curse of dimensionality in structured density estimation | <|reference_start|>Breaking the curse of dimensionality in structured density estimation: We consider the problem of estimating a structured multivariate density, subject to Markov conditions implied by an undirected graph. In the worst case, without Markovian assumptions, this problem suffers from the curse of dimensionality. Our main result shows how the curse of dimensionality can be avoided or greatly alleviated under the Markov property, and applies to arbitrary graphs. While existing results along these lines focus on sparsity or manifold assumptions, we introduce a new graphical quantity called "graph resilience" and show how it controls the sample complexity. Surprisingly, although one might expect the sample complexity of this problem to scale with local graph parameters such as the degree, this turns out not to be the case. Through explicit examples, we compute uniform deviation bounds and illustrate how the curse of dimensionality in density estimation can thus be circumvented. Notable examples where the rate improves substantially include sequential, hierarchical, and spatial data.<|reference_end|> | arxiv | @article{vandermeulen2024breaking,
title={Breaking the curse of dimensionality in structured density estimation},
author={Robert A. Vandermeulen, Wai Ming Tai, Bryon Aragam},
journal={arXiv preprint arXiv:2410.07685},
year={2024},
archivePrefix={arXiv},
eprint={2410.07685},
primaryClass={stat.ML cs.CV cs.LG math.ST stat.TH}
} | vandermeulen2024breaking |
arxiv-667961 | 2410.07686 | The Power of Input: Benchmarking Zero-Shot Sim-To-Real Transfer of Reinforcement Learning Control Policies for Quadrotor Control | <|reference_start|>The Power of Input: Benchmarking Zero-Shot Sim-To-Real Transfer of Reinforcement Learning Control Policies for Quadrotor Control: In the last decade, data-driven approaches have become popular choices for quadrotor control, thanks to their ability to facilitate the adaptation to unknown or uncertain flight conditions. Among the different data-driven paradigms, Deep Reinforcement Learning (DRL) is currently one of the most explored. However, the design of DRL agents for Micro Aerial Vehicles (MAVs) remains an open challenge. While some works have studied the output configuration of these agents (i.e., what kind of control to compute), there is no general consensus on the type of input data these approaches should employ. Multiple works simply provide the DRL agent with full state information, without questioning if this might be redundant and unnecessarily complicate the learning process, or pose superfluous constraints on the availability of such information in real platforms. In this work, we provide an in-depth benchmark analysis of different configurations of the observation space. We optimize multiple DRL agents in simulated environments with different input choices and study their robustness and their sim-to-real transfer capabilities with zero-shot adaptation. We believe that the outcomes and discussions presented in this work supported by extensive experimental results could be an important milestone in guiding future research on the development of DRL agents for aerial robot tasks.<|reference_end|> | arxiv | @article{dionigi2024the,
title={The Power of Input: Benchmarking Zero-Shot Sim-To-Real Transfer of
Reinforcement Learning Control Policies for Quadrotor Control},
author={Alberto Dionigi, Gabriele Costante, Giuseppe Loianno},
journal={arXiv preprint arXiv:2410.07686},
year={2024},
archivePrefix={arXiv},
eprint={2410.07686},
primaryClass={cs.RO}
} | dionigi2024the |
arxiv-667962 | 2410.07687 | Learning to Compress: Local Rank and Information Compression in Deep Neural Networks | <|reference_start|>Learning to Compress: Local Rank and Information Compression in Deep Neural Networks: Deep neural networks tend to exhibit a bias toward low-rank solutions during training, implicitly learning low-dimensional feature representations. This paper investigates how deep multilayer perceptrons (MLPs) encode these feature manifolds and connects this behavior to the Information Bottleneck (IB) theory. We introduce the concept of local rank as a measure of feature manifold dimensionality and demonstrate, both theoretically and empirically, that this rank decreases during the final phase of training. We argue that networks that reduce the rank of their learned representations also compress mutual information between inputs and intermediate layers. This work bridges the gap between feature manifold rank and information compression, offering new insights into the interplay between information bottlenecks and representation learning.<|reference_end|> | arxiv | @article{patel2024learning,
title={Learning to Compress: Local Rank and Information Compression in Deep
Neural Networks},
author={Niket Patel, Ravid Shwartz-Ziv},
journal={arXiv preprint arXiv:2410.07687},
year={2024},
archivePrefix={arXiv},
eprint={2410.07687},
primaryClass={cs.LG cs.IT math.IT}
} | patel2024learning |
arxiv-667963 | 2410.07688 | PokeFlex: A Real-World Dataset of Deformable Objects for Robotics | <|reference_start|>PokeFlex: A Real-World Dataset of Deformable Objects for Robotics: Data-driven methods have shown great potential in solving challenging manipulation tasks, however, their application in the domain of deformable objects has been constrained, in part, by the lack of data. To address this, we propose PokeFlex, a dataset featuring real-world paired and annotated multimodal data that includes 3D textured meshes, point clouds, RGB images, and depth maps. Such data can be leveraged for several downstream tasks such as online 3D mesh reconstruction, and it can potentially enable underexplored applications such as the real-world deployment of traditional control methods based on mesh simulations. To deal with the challenges posed by real-world 3D mesh reconstruction, we leverage a professional volumetric capture system that allows complete 360{\deg} reconstruction. PokeFlex consists of 18 deformable objects with varying stiffness and shapes. Deformations are generated by dropping objects onto a flat surface or by poking the objects with a robot arm. Interaction forces and torques are also reported for the latter case. Using different data modalities, we demonstrated a use case for the PokeFlex dataset in online 3D mesh reconstruction. We refer the reader to our website ( https://pokeflex-dataset.github.io/ ) for demos and examples of our dataset.<|reference_end|> | arxiv | @article{obrist2024pokeflex:,
title={PokeFlex: A Real-World Dataset of Deformable Objects for Robotics},
author={Jan Obrist, Miguel Zamora, Hehui Zheng, Ronan Hinchet, Firat Ozdemir,
Juan Zarate, Robert K. Katzschmann, Stelian Coros},
journal={arXiv preprint arXiv:2410.07688},
year={2024},
archivePrefix={arXiv},
eprint={2410.07688},
primaryClass={cs.RO cs.CV}
} | obrist2024pokeflex: |
arxiv-667964 | 2410.07689 | When the Small-Loss Trick is Not Enough: Multi-Label Image Classification with Noisy Labels Applied to CCTV Sewer Inspections | <|reference_start|>When the Small-Loss Trick is Not Enough: Multi-Label Image Classification with Noisy Labels Applied to CCTV Sewer Inspections: The maintenance of sewerage networks, with their millions of kilometers of pipe, heavily relies on efficient Closed-Circuit Television (CCTV) inspections. Many promising approaches based on multi-label image classification have leveraged databases of historical inspection reports to automate these inspections. However, the significant presence of label noise in these databases, although known, has not been addressed. While extensive research has explored the issue of label noise in single-label classification (SLC), little attention has been paid to label noise in multi-label classification (MLC). To address this, we first adapted three sample selection SLC methods (Co-teaching, CoSELFIE, and DISC) that have proven robust to label noise. Our findings revealed that sample selection based solely on the small-loss trick can handle complex label noise, but it is sub-optimal. Adapting hybrid sample selection methods to noisy MLC appeared to be a more promising approach. In light of this, we developed a novel method named MHSS (Multi-label Hybrid Sample Selection) based on CoSELFIE. Through an in-depth comparative study, we demonstrated the superior performance of our approach in dealing with both synthetic complex noise and real noise, thus contributing to the ongoing efforts towards effective automation of CCTV sewer pipe inspections.<|reference_end|> | arxiv | @article{chelouche2024when,
title={When the Small-Loss Trick is Not Enough: Multi-Label Image
Classification with Noisy Labels Applied to CCTV Sewer Inspections},
author={Keryan Chelouche, Marie Lachaize (VERI), Marine Bernard (VERI), Louise
Olgiati, Remi Cuingnet},
journal={arXiv preprint arXiv:2410.07689},
year={2024},
archivePrefix={arXiv},
eprint={2410.07689},
primaryClass={cs.CV cs.LG}
} | chelouche2024when |
arxiv-667965 | 2410.07690 | Stackelberg vs Nash in the Lottery Colonel Blotto Game | <|reference_start|>Stackelberg vs Nash in the Lottery Colonel Blotto Game: Resource competition problems are often modeled using Colonel Blotto games. However, Colonel Blotto games only simulate scenarios where players act simultaneously. In many real-life scenarios, competition is sequential, such as cybersecurity, cloud services, business investments, and more. To model such sequential competition, we model the Lottery Colonel Blotto game as a Stackelberg game. We solve the Stackelberg equilibrium in the Lottery Colonel Blotto game in which the first mover's strategy is actually a solution to a bi-level optimization problem. We develop a constructive method that allows for a series of game reductions. This method enables us to compute the leader's optimal commitment strategy in a polynomial number of iterations. Furthermore, we identify the conditions under which the Stackelberg equilibrium aligns with the Nash equilibria. Finally, we show that by making the optimal first move, the leader's utility can be unbounded compared to its utility in Nash equilibria. We find that the player with a smaller budget has a greater incentive to become the leader in the game. Surprisingly, even when the leader adopts the optimal commitment strategy, the follower's utility may improve compared to that in Nash equilibria.<|reference_end|> | arxiv | @article{liu2024stackelberg,
title={Stackelberg vs. Nash in the Lottery Colonel Blotto Game},
author={Yan Liu, Bonan Ni, Weiran Shen, Zihe Wang, Jie Zhang},
journal={arXiv preprint arXiv:2410.07690},
year={2024},
archivePrefix={arXiv},
eprint={2410.07690},
primaryClass={cs.GT}
} | liu2024stackelberg |
arxiv-667966 | 2410.07691 | Growing Efficient Accurate and Robust Neural Networks on the Edge | <|reference_start|>Growing Efficient Accurate and Robust Neural Networks on the Edge: The ubiquitous deployment of deep learning systems on resource-constrained Edge devices is hindered by their high computational complexity coupled with their fragility to out-of-distribution (OOD) data, especially to naturally occurring common corruptions. Current solutions rely on the Cloud to train and compress models before deploying to the Edge. This incurs high energy and latency costs in transmitting locally acquired field data to the Cloud while also raising privacy concerns. We propose GEARnn (Growing Efficient, Accurate, and Robust neural networks) to grow and train robust networks in-situ, i.e., completely on the Edge device. Starting with a low-complexity initial backbone network, GEARnn employs One-Shot Growth (OSG) to grow a network satisfying the memory constraints of the Edge device using clean data, and robustifies the network using Efficient Robust Augmentation (ERA) to obtain the final network. We demonstrate results on a NVIDIA Jetson Xavier NX, and analyze the trade-offs between accuracy, robustness, model size, energy consumption, and training time. Our results demonstrate the construction of efficient, accurate, and robust networks entirely on an Edge device.<|reference_end|> | arxiv | @article{sundaresha2024growing,
title={Growing Efficient Accurate and Robust Neural Networks on the Edge},
author={Vignesh Sundaresha and Naresh Shanbhag},
journal={arXiv preprint arXiv:2410.07691},
year={2024},
archivePrefix={arXiv},
eprint={2410.07691},
primaryClass={cs.LG cs.CV}
} | sundaresha2024growing |
arxiv-667967 | 2410.07693 | Multi-Facet Counterfactual Learning for Content Quality Evaluation | <|reference_start|>Multi-Facet Counterfactual Learning for Content Quality Evaluation: Evaluating the quality of documents is essential for filtering valuable content from the current massive amount of information. Conventional approaches typically rely on a single score as a supervision signal for training content quality evaluators, which is inadequate to differentiate documents with quality variations across multiple facets. In this paper, we propose Multi-facet cOunterfactual LEarning (MOLE), a framework for efficiently constructing evaluators that perceive multiple facets of content quality evaluation. Given a specific scenario, we prompt large language models to generate counterfactual content that exhibits variations in critical quality facets compared to the original document. Furthermore, we leverage a joint training strategy based on contrastive learning and supervised learning to enable the evaluator to distinguish between different quality facets, resulting in more accurate predictions of content quality scores. Experimental results on 2 datasets across different scenarios demonstrate that our proposed MOLE framework effectively improves the correlation of document content quality evaluations with human judgments, which serve as a valuable toolkit for effective information acquisition.<|reference_end|> | arxiv | @article{zheng2024multi-facet,
title={Multi-Facet Counterfactual Learning for Content Quality Evaluation},
author={Jiasheng Zheng, Hongyu Lin, Boxi Cao, Meng Liao, Yaojie Lu, Xianpei
Han, Le Sun},
journal={arXiv preprint arXiv:2410.07693},
year={2024},
archivePrefix={arXiv},
eprint={2410.07693},
primaryClass={cs.CL}
} | zheng2024multi-facet |
arxiv-667968 | 2410.07695 | Test-Time Intensity Consistency Adaptation for Shadow Detection | <|reference_start|>Test-Time Intensity Consistency Adaptation for Shadow Detection: Shadow detection is crucial for accurate scene understanding in computer vision, yet it is challenged by the diverse appearances of shadows caused by variations in illumination, object geometry, and scene context. Deep learning models often struggle to generalize to real-world images due to the limited size and diversity of training datasets. To address this, we introduce TICA, a novel framework that leverages light-intensity information during test-time adaptation to enhance shadow detection accuracy. TICA exploits the inherent inconsistencies in light intensity across shadow regions to guide the model toward a more consistent prediction. A basic encoder-decoder model is initially trained on a labeled dataset for shadow detection. Then, during the testing phase, the network is adjusted for each test sample by enforcing consistent intensity predictions between two augmented input image versions. This consistency training specifically targets both foreground and background intersection regions to identify shadow regions within images accurately for robust adaptation. Extensive evaluations on the ISTD and SBU shadow detection datasets reveal that TICA significantly outperforms existing state-of-the-art methods, achieving superior results in balanced error rate (BER).<|reference_end|> | arxiv | @article{zhu2024test-time,
title={Test-Time Intensity Consistency Adaptation for Shadow Detection},
author={Leyi Zhu, Weihuang Liu, Xinyi Chen, Zimeng Li, Xuhang Chen, Zhen Wang,
and Chi-Man Pun},
journal={arXiv preprint arXiv:2410.07695},
year={2024},
archivePrefix={arXiv},
eprint={2410.07695},
primaryClass={cs.CV}
} | zhu2024test-time |
arxiv-667969 | 2410.07696 | Meta-Learning from Learning Curves for Budget-Limited Algorithm Selection | <|reference_start|>Meta-Learning from Learning Curves for Budget-Limited Algorithm Selection: Training a large set of machine learning algorithms to convergence in order to select the best-performing algorithm for a dataset is computationally wasteful. Moreover, in a budget-limited scenario, it is crucial to carefully select an algorithm candidate and allocate a budget for training it, ensuring that the limited budget is optimally distributed to favor the most promising candidates. Casting this problem as a Markov Decision Process, we propose a novel framework in which an agent must select in the process of learning the most promising algorithm without waiting until it is fully trained. At each time step, given an observation of partial learning curves of algorithms, the agent must decide whether to allocate resources to further train the most promising algorithm (exploitation), to wake up another algorithm previously put to sleep, or to start training a new algorithm (exploration). In addition, our framework allows the agent to meta-learn from learning curves on past datasets along with dataset meta-features and algorithm hyperparameters. By incorporating meta-learning, we aim to avoid myopic decisions based solely on premature learning curves on the dataset at hand. We introduce two benchmarks of learning curves that served in international competitions at WCCI'22 and AutoML-conf'22, of which we analyze the results. Our findings show that both meta-learning and the progression of learning curves enhance the algorithm selection process, as evidenced by methods of winning teams and our DDQN baseline, compared to heuristic baselines or a random search. Interestingly, our cost-effective baseline, which selects the best-performing algorithm w.r.t. a small budget, can perform decently when learning curves do not intersect frequently.<|reference_end|> | arxiv | @article{nguyen2024meta-learning,
title={Meta-Learning from Learning Curves for Budget-Limited Algorithm
Selection},
author={Manh Hung Nguyen, Lisheng Sun-Hosoya (LISN), Isabelle Guyon},
journal={Pattern Recognition Letters, 2024, 185, pp.225-231},
year={2024},
archivePrefix={arXiv},
eprint={2410.07696},
primaryClass={math.OC cs.LG stat.ML}
} | nguyen2024meta-learning |
arxiv-667970 | 2410.07697 | Toward a Better Understanding of Robot Energy Consumption in Agroecological Applications | <|reference_start|>Toward a Better Understanding of Robot Energy Consumption in Agroecological Applications: In this paper, we present a comprehensive analysis and discussion of energy consumption in agricultural robots. Robots are emerging as a promising solution to address food production and agroecological challenges, offering potential reductions in chemical use and the ability to perform strenuous tasks beyond human capabilities. The automation of agricultural tasks introduces a previously unattainable level of complexity, enabling robots to optimize trajectories, control laws, and overall task planning. Consequently, automation can lead to higher levels of energy optimization in agricultural tasks. However, the energy consumption of robotic platforms is not fully understood, and a deeper analysis of contributing factors is essential to optimize energy use. We analyze the energy data of an automated agricultural tractor performing tasks throughout the year, revealing nontrivial correlations between the robot's velocity, the type of task performed, and energy consumption. This suggests a tradeoff between task efficiency, time to completion, and energy expenditure that can be harnessed to improve the energy efficiency of robotic agricultural operations.<|reference_end|> | arxiv | @article{bras2024toward,
title={Toward a Better Understanding of Robot Energy Consumption in
Agroecological Applications},
author={Alexis Bras, Alix Montanaro, Cyrille Pierre, Marilys Pradel and Johann
Laconte},
journal={arXiv preprint arXiv:2410.07697},
year={2024},
archivePrefix={arXiv},
eprint={2410.07697},
primaryClass={cs.RO}
} | bras2024toward |
arxiv-667971 | 2410.07698 | Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | <|reference_start|>Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures: Parameter-efficient fine-tuning (PEFT) significantly reduces memory costs when adapting large language models (LLMs) for downstream applications. However, traditional first-order (FO) fine-tuning algorithms incur substantial memory overhead due to the need to store activation values for back-propagation during gradient computation, particularly in long-context fine-tuning tasks. Zeroth-order (ZO) algorithms offer a promising alternative by approximating gradients using finite differences of function values, thus eliminating the need for activation storage. Nevertheless, existing ZO methods struggle to capture the low-rank gradient structure common in LLM fine-tuning, leading to suboptimal performance. This paper proposes a low-rank ZO gradient estimator and introduces a novel low-rank ZO algorithm (LOZO) that effectively captures this structure in LLMs. We provide convergence guarantees for LOZO by framing it as a subspace optimization method. Additionally, its low-rank nature enables LOZO to integrate with momentum techniques while incurring negligible extra memory costs. Extensive experiments across various model sizes and downstream tasks demonstrate that LOZO and its momentum-based variant outperform existing ZO methods and closely approach the performance of FO algorithms.<|reference_end|> | arxiv | @article{chen2024enhancing,
title={Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank
Structures},
author={Yiming Chen, Yuan Zhang, Liyuan Cao, Kun Yuan and Zaiwen Wen},
journal={arXiv preprint arXiv:2410.07698},
year={2024},
archivePrefix={arXiv},
eprint={2410.07698},
primaryClass={cs.LG}
} | chen2024enhancing |
arxiv-667972 | 2410.07700 | A Visual Cooperative Localization Method for Airborne Magnetic Surveying Based on a Manifold Sensor Fusion Algorithm Using Lie Groups | <|reference_start|>A Visual Cooperative Localization Method for Airborne Magnetic Surveying Based on a Manifold Sensor Fusion Algorithm Using Lie Groups: Recent advancements in UAV technology have spurred interest in developing multi-UAV aerial surveying systems for use in confined environments where GNSS signals are blocked or jammed. This paper focuses on airborne magnetic surveying scenarios. To obtain clean magnetic measurements reflecting the Earth's magnetic field, the magnetic sensor must be isolated from other electronic devices, creating a significant localization challenge. We propose a visual cooperative localization solution. The solution incorporates a visual processing module and an improved manifold-based sensor fusion algorithm, delivering reliable and accurate positioning information. Real flight experiments validate the approach, demonstrating single-axis centimeter-level accuracy and decimeter-level overall 3D positioning accuracy.<|reference_end|> | arxiv | @article{liu2024a,
title={A Visual Cooperative Localization Method for Airborne Magnetic Surveying
Based on a Manifold Sensor Fusion Algorithm Using Lie Groups},
author={Liang Liu, Xiao Hu, Wei Jiang, Guanglei Meng, Zhujun Wang, Taining
Zhang},
journal={arXiv preprint arXiv:2410.07700},
year={2024},
archivePrefix={arXiv},
eprint={2410.07700},
primaryClass={eess.SP cs.RO cs.SY eess.SY}
} | liu2024a |
arxiv-667973 | 2410.07701 | Autonomous Driving in Unstructured Environments: How Far Have We Come? | <|reference_start|>Autonomous Driving in Unstructured Environments: How Far Have We Come?: Research on autonomous driving in unstructured outdoor environments is less advanced than in structured urban settings due to challenges like environmental diversities and scene complexity. These environments-such as rural areas and rugged terrains-pose unique obstacles that are not common in structured urban areas. Despite these difficulties, autonomous driving in unstructured outdoor environments is crucial for applications in agriculture, mining, and military operations. Our survey reviews over 250 papers for autonomous driving in unstructured outdoor environments, covering offline mapping, pose estimation, environmental perception, path planning, end-to-end autonomous driving, datasets, and relevant challenges. We also discuss emerging trends and future research directions. This review aims to consolidate knowledge and encourage further research for autonomous driving in unstructured environments. To support ongoing work, we maintain an active repository with up-to-date literature and open-source projects at: https://github.com/chaytonmin/Survey-Autonomous-Driving-in-Unstructured-Environments.<|reference_end|> | arxiv | @article{min2024autonomous,
title={Autonomous Driving in Unstructured Environments: How Far Have We Come?},
author={Chen Min, Shubin Si, Xu Wang, Hanzhang Xue, Weizhong Jiang, Yang Liu,
Juan Wang, Qingtian Zhu, Qi Zhu, Lun Luo, Fanjie Kong, Jinyu Miao, Xudong
Cai, Shuai An, Wei Li, Jilin Mei, Tong Sun, Heng Zhai, Qifeng Liu, Fangzhou
Zhao, Liang Chen, Shuai Wang, Erke Shang, Linzhi Shang, Kunlong Zhao, Fuyang
Li, Hao Fu, Lei Jin, Jian Zhao, Fangyuan Mao, Zhipeng Xiao, Chengyang Li, Bin
Dai, Dawei Zhao, Liang Xiao, Yiming Nie, Yu Hu, Xuelong Li},
journal={arXiv preprint arXiv:2410.07701},
year={2024},
archivePrefix={arXiv},
eprint={2410.07701},
primaryClass={cs.RO}
} | min2024autonomous |
arxiv-667974 | 2410.07703 | Time-domain direct sampling method for inverse electromagnetic scattering with a single incident source | <|reference_start|>Time-domain direct sampling method for inverse electromagnetic scattering with a single incident source: In this paper, we consider an inverse electromagnetic medium scattering problem of reconstructing unknown objects from time-dependent boundary measurements. A novel time-domain direct sampling method is developed for determining the locations of unknown scatterers by using only a single incident source. Notably, our method imposes no restrictions on the waveform of the incident wave. Based on the Fourier-Laplace transform, we first establish the connection between the frequency-domain and the time-domain direct sampling methods. Furthermore, we elucidate the mathematical mechanism of the imaging functional through the properties of modified Bessel functions. Theoretical justifications and stability analyses are provided to demonstrate the effectiveness of the proposed method. Finally, several numerical experiments are presented to illustrate the feasibility of our approach.<|reference_end|> | arxiv | @article{geng2024time-domain,
title={Time-domain direct sampling method for inverse electromagnetic
scattering with a single incident source},
author={Chen Geng, Minghui Song, Xianchao Wang and Yuliang Wang},
journal={arXiv preprint arXiv:2410.07703},
year={2024},
archivePrefix={arXiv},
eprint={2410.07703},
primaryClass={math.NA cs.NA}
} | geng2024time-domain |
arxiv-667975 | 2410.07704 | A Generalization Result for Convergence in Learning-to-Optimize | <|reference_start|>A Generalization Result for Convergence in Learning-to-Optimize: Convergence in learning-to-optimize is hardly studied, because conventional convergence guarantees in optimization are based on geometric arguments, which cannot be applied easily to learned algorithms. Thus, we develop a probabilistic framework that resembles deterministic optimization and allows for transferring geometric arguments into learning-to-optimize. Our main theorem is a generalization result for parametric classes of potentially non-smooth, non-convex loss functions and establishes the convergence of learned optimization algorithms to stationary points with high probability. This can be seen as a statistical counterpart to the use of geometric safeguards to ensure convergence. To the best of our knowledge, we are the first to prove convergence of optimization algorithms in such a probabilistic framework.<|reference_end|> | arxiv | @article{sucker2024a,
title={A Generalization Result for Convergence in Learning-to-Optimize},
author={Michael Sucker and Peter Ochs},
journal={arXiv preprint arXiv:2410.07704},
year={2024},
archivePrefix={arXiv},
eprint={2410.07704},
primaryClass={cs.LG math.OC math.PR}
} | sucker2024a |
arxiv-667976 | 2410.07705 | Lean Methodology for Garment Modernization | <|reference_start|>Lean Methodology for Garment Modernization: This article presents the lean methodology for modernizing garment manufacturing, focusing on lean thinking, lean practices, automation development, VSM, and CRP, and how to integrate them effectively. While isolated automation of specific operations can improve efficiency and reduce cycle time, it does not necessarily enhance overall garment output and efficiency. To achieve these broader improvements, it is essential to consider the entire production line and process using VSM and CRP to optimize production and center balance. This approach can increase efficiency and reduce manufacturing costs, labor time, and lead time, ultimately adding value to the company and factory.<|reference_end|> | arxiv | @article{kong2024lean,
title={Lean Methodology for Garment Modernization},
author={Ray Wai Man Kong, Theodore Ho Tin Kong, Tianxu Huang},
journal={arXiv preprint arXiv:2410.07705},
year={2024},
archivePrefix={arXiv},
eprint={2410.07705},
primaryClass={cs.RO}
} | kong2024lean |
arxiv-667977 | 2410.07706 | AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories | <|reference_start|>AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories: Fine-tuning on agent-environment interaction trajectory data holds significant promise for surfacing generalized agent capabilities in open-source large language models (LLMs). In this work, we introduce AgentBank, by far the largest trajectory tuning data collection featuring more than 50k diverse high-quality interaction trajectories which comprises 16 tasks covering five distinct agent skill dimensions. Leveraging a novel annotation pipeline, we are able to scale the annotated trajectories and generate a trajectory dataset with minimized difficulty bias. Furthermore, we fine-tune LLMs on AgentBank to get a series of agent models, Samoyed. Our comparative experiments demonstrate the effectiveness of scaling the interaction trajectory data to acquire generalized agent capabilities. Additional studies also reveal some key observations regarding trajectory tuning and agent skill generalization.<|reference_end|> | arxiv | @article{song2024agentbank:,
title={AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+
Interaction Trajectories},
author={Yifan Song, Weimin Xiong, Xiutian Zhao, Dawei Zhu, Wenhao Wu, Ke Wang,
Cheng Li, Wei Peng, Sujian Li},
journal={arXiv preprint arXiv:2410.07706},
year={2024},
archivePrefix={arXiv},
eprint={2410.07706},
primaryClass={cs.CL cs.AI}
} | song2024agentbank: |
arxiv-667978 | 2410.07707 | MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting | <|reference_start|>MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting: Dynamic scene reconstruction is a long-term challenge in the field of 3D vision. Recently, the emergence of 3D Gaussian Splatting has provided new insights into this problem. Although subsequent efforts rapidly extend static 3D Gaussian to dynamic scenes, they often lack explicit constraints on object motion, leading to optimization difficulties and performance degradation. To address the above issues, we propose a novel deformable 3D Gaussian splatting framework called MotionGS, which explores explicit motion priors to guide the deformation of 3D Gaussians. Specifically, we first introduce an optical flow decoupling module that decouples optical flow into camera flow and motion flow, corresponding to camera movement and object motion respectively. Then the motion flow can effectively constrain the deformation of 3D Gaussians, thus simulating the motion of dynamic objects. Additionally, a camera pose refinement module is proposed to alternately optimize 3D Gaussians and camera poses, mitigating the impact of inaccurate camera poses. Extensive experiments in the monocular dynamic scenes validate that MotionGS surpasses state-of-the-art methods and exhibits significant superiority in both qualitative and quantitative results. Project page: https://ruijiezhu94.github.io/MotionGS_page<|reference_end|> | arxiv | @article{zhu2024motiongs:,
title={MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian
Splatting},
author={Ruijie Zhu, Yanzhe Liang, Hanzhi Chang, Jiacheng Deng, Jiahao Lu,
Wenfei Yang, Tianzhu Zhang, Yongdong Zhang},
journal={arXiv preprint arXiv:2410.07707},
year={2024},
archivePrefix={arXiv},
eprint={2410.07707},
primaryClass={cs.CV cs.GR cs.LG}
} | zhu2024motiongs: |
arxiv-667979 | 2410.07708 | Learning Tree Pattern Transformations | <|reference_start|>Learning Tree Pattern Transformations: Explaining why and how a tree $t$ structurally differs from another tree $t^*$ is a question that is encountered throughout computer science, including in understanding tree-structured data such as XML or JSON data. In this article, we explore how to learn explanations for structural differences between pairs of trees from sample data: suppose we are given a set $\{(t_1, t_1^*),\dots, (t_n, t_n^*)\}$ of pairs of labelled, ordered trees; is there a small set of rules that explains the structural differences between all pairs $(t_i, t_i^*)$? This raises two research questions: (i) what is a good notion of "rule" in this context?; and (ii) how can sets of rules explaining a data set be learnt algorithmically? We explore these questions from the perspective of database theory by (1) introducing a pattern-based specification language for tree transformations; (2) exploring the computational complexity of variants of the above algorithmic problem, e.g. showing NP-hardness for very restricted variants; and (3) discussing how to solve the problem for data from CS education research using SAT solvers.<|reference_end|> | arxiv | @article{neider2024learning,
title={Learning Tree Pattern Transformations},
author={Daniel Neider and Leif Sabellek and Johannes Schmidt and Fabian
Vehlken and Thomas Zeume},
journal={arXiv preprint arXiv:2410.07708},
year={2024},
archivePrefix={arXiv},
eprint={2410.07708},
primaryClass={cs.LG cs.AI cs.CC cs.DB}
} | neider2024learning |
arxiv-667980 | 2410.07711 | Rethinking the Principle of Gradient Smooth Methods in Model Explanation | <|reference_start|>Rethinking the Principle of Gradient Smooth Methods in Model Explanation: Gradient Smoothing is an efficient approach to reducing noise in gradient-based model explanation method. SmoothGrad adds Gaussian noise to mitigate much of these noise. However, the crucial hyper-parameter in this method, the variance $\sigma$ of Gaussian noise, is set manually or with heuristic approach. However, it results in the smoothed gradients still containing a certain amount of noise. In this paper, we aim to interpret SmoothGrad as a corollary of convolution, thereby re-understanding the gradient noise and the role of $\sigma$ from the perspective of confidence level. Furthermore, we propose an adaptive gradient smoothing method, AdaptGrad, based on these insights. Through comprehensive experiments, both qualitative and quantitative results demonstrate that AdaptGrad could effectively reduce almost all the noise in vanilla gradients compared with baselines methods. AdaptGrad is simple and universal, making it applicable for enhancing gradient-based interpretability methods for better visualization.<|reference_end|> | arxiv | @article{zhou2024rethinking,
title={Rethinking the Principle of Gradient Smooth Methods in Model Explanation},
author={Linjiang Zhou, Chao Ma, Zepeng Wang, Xiaochuan Shi},
journal={arXiv preprint arXiv:2410.07711},
year={2024},
archivePrefix={arXiv},
eprint={2410.07711},
primaryClass={cs.LG}
} | zhou2024rethinking |
arxiv-667981 | 2410.07713 | A Hate Speech Moderated Chat Application: Use Case for GDPR and DSA Compliance | <|reference_start|>A Hate Speech Moderated Chat Application: Use Case for GDPR and DSA Compliance: The detection of hate speech or toxic content online is a complex and sensitive issue. While the identification itself is highly dependent on the context of the situation, sensitive personal attributes such as age, language, and nationality are rarely available due to privacy concerns. Additionally, platforms struggle with a wide range of local jurisdictions regarding online hate speech and the evaluation of content based on their internal ethical norms. This research presents a novel approach that demonstrates a GDPR-compliant application capable of implementing legal and ethical reasoning into the content moderation process. The application increases the explainability of moderation decisions by utilizing user information. Two use cases fundamental to online communication are presented and implemented using technologies such as GPT-3.5, Solid Pods, and the rule language Prova. The first use case demonstrates the scenario of a platform aiming to protect adolescents from potentially harmful content by limiting the ability to post certain content when minors are present. The second use case aims to identify and counter problematic statements online by providing counter hate speech. The counter hate speech is generated using personal attributes to appeal to the user. This research lays the groundwork for future DSA compliance of online platforms. The work proposes a novel approach to reason within different legal and ethical definitions of hate speech and plan the fitting counter hate speech. Overall, the platform provides a fitted protection to users and a more explainable and individualized response. The hate speech detection service, the chat platform, and the reasoning in Prova are discussed, and the potential benefits for content moderation and algorithmic hate speech detection are outlined. A selection of important aspects for DSA compliance is outlined.<|reference_end|> | arxiv | @article{fillies2024a,
title={A Hate Speech Moderated Chat Application: Use Case for GDPR and DSA
Compliance},
author={Jan Fillies, Theodoros Mitsikas, Ralph Schäfermeier, Adrian Paschke},
journal={arXiv preprint arXiv:2410.07713},
year={2024},
archivePrefix={arXiv},
eprint={2410.07713},
primaryClass={cs.MA cs.CY cs.SI}
} | fillies2024a |
arxiv-667982 | 2410.07717 | On the Generalization Properties of Deep Learning for Aircraft Fuel Flow Estimation Models | <|reference_start|>On the Generalization Properties of Deep Learning for Aircraft Fuel Flow Estimation Models: Accurately estimating aircraft fuel flow is essential for evaluating new procedures, designing next-generation aircraft, and monitoring the environmental impact of current aviation practices. This paper investigates the generalization capabilities of deep learning models in predicting fuel consumption, focusing particularly on their performance for aircraft types absent from the training data. We propose a novel methodology that integrates neural network architectures with domain generalization techniques to enhance robustness and reliability across a wide range of aircraft. We use a comprehensive dataset containing 101 different aircraft types, separated into training and generalization sets, with each aircraft type set containing 1,000 flights. We employed the base of aircraft data (BADA) model for fuel flow estimates, introduced a pseudo-distance metric to assess aircraft type similarity, and explored various sampling strategies to optimize model performance in data-sparse regions. Our results reveal that for previously unseen aircraft types, the introduction of noise into aircraft and engine parameters improved model generalization. The model is able to generalize with acceptable mean absolute percentage error between 2% and 10% for aircraft close to existing aircraft, while performance is below 1% error for known aircraft in the training set. This study highlights the potential of combining domain-specific insights with advanced machine learning techniques to develop scalable, accurate, and generalizable fuel flow estimation models.<|reference_end|> | arxiv | @article{jarry2024on,
title={On the Generalization Properties of Deep Learning for Aircraft Fuel Flow
Estimation Models},
author={Gabriel Jarry, Ramon Dalmau, Philippe Very and Junzi Sun},
journal={arXiv preprint arXiv:2410.07717},
year={2024},
archivePrefix={arXiv},
eprint={2410.07717},
primaryClass={cs.LG cs.AI}
} | jarry2024on |
arxiv-667983 | 2410.07718 | Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation | <|reference_start|>Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation: Recent advances in latent diffusion-based generative models for portrait image animation, such as Hallo, have achieved impressive results in short-duration video synthesis. In this paper, we present updates to Hallo, introducing several design enhancements to extend its capabilities. First, we extend the method to produce long-duration videos. To address substantial challenges such as appearance drift and temporal artifacts, we investigate augmentation strategies within the image space of conditional motion frames. Specifically, we introduce a patch-drop technique augmented with Gaussian noise to enhance visual consistency and temporal coherence over long duration. Second, we achieve 4K resolution portrait video generation. To accomplish this, we implement vector quantization of latent codes and apply temporal alignment techniques to maintain coherence across the temporal dimension. By integrating a high-quality decoder, we realize visual synthesis at 4K resolution. Third, we incorporate adjustable semantic textual labels for portrait expressions as conditional inputs. This extends beyond traditional audio cues to improve controllability and increase the diversity of the generated content. To the best of our knowledge, Hallo2, proposed in this paper, is the first method to achieve 4K resolution and generate hour-long, audio-driven portrait image animations enhanced with textual prompts. We have conducted extensive experiments to evaluate our method on publicly available datasets, including HDTF, CelebV, and our introduced "Wild" dataset. The experimental results demonstrate that our approach achieves state-of-the-art performance in long-duration portrait video animation, successfully generating rich and controllable content at 4K resolution for duration extending up to tens of minutes. Project page https://fudan-generative-vision.github.io/hallo2<|reference_end|> | arxiv | @article{cui2024hallo2:,
title={Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image
Animation},
author={Jiahao Cui, Hui Li, Yao Yao, Hao Zhu, Hanlin Shang, Kaihui Cheng, Hang
Zhou, Siyu Zhu, Jingdong Wang},
journal={arXiv preprint arXiv:2410.07718},
year={2024},
archivePrefix={arXiv},
eprint={2410.07718},
primaryClass={cs.CV}
} | cui2024hallo2: |
arxiv-667984 | 2410.07719 | Understanding Adversarially Robust Generalization via Weight-Curvature Index | <|reference_start|>Understanding Adversarially Robust Generalization via Weight-Curvature Index: Despite extensive research on adversarial examples, the underlying mechanisms of adversarially robust generalization, a critical yet challenging task for deep learning, remain largely unknown. In this work, we propose a novel perspective to decipher adversarially robust generalization through the lens of the Weight-Curvature Index (WCI). The proposed WCI quantifies the vulnerability of models to adversarial perturbations using the Frobenius norm of weight matrices and the trace of Hessian matrices. We prove generalization bounds based on PAC-Bayesian theory and second-order loss function approximations to elucidate the interplay between robust generalization gap, model parameters, and loss landscape curvature. Our theory and experiments show that WCI effectively captures the robust generalization performance of adversarially trained models. By offering a nuanced understanding of adversarial robustness based on the scale of model parameters and the curvature of the loss landscape, our work provides crucial insights for designing more resilient deep learning models, enhancing their reliability and security.<|reference_end|> | arxiv | @article{xu2024understanding,
title={Understanding Adversarially Robust Generalization via Weight-Curvature
Index},
author={Yuelin Xu, Xiao Zhang},
journal={arXiv preprint arXiv:2410.07719},
year={2024},
archivePrefix={arXiv},
eprint={2410.07719},
primaryClass={cs.LG}
} | xu2024understanding |
arxiv-667985 | 2410.07722 | DyVo: Dynamic Vocabularies for Learned Sparse Retrieval with Entities | <|reference_start|>DyVo: Dynamic Vocabularies for Learned Sparse Retrieval with Entities: Learned Sparse Retrieval (LSR) models use vocabularies from pre-trained transformers, which often split entities into nonsensical fragments. Splitting entities can reduce retrieval accuracy and limits the model's ability to incorporate up-to-date world knowledge not included in the training data. In this work, we enhance the LSR vocabulary with Wikipedia concepts and entities, enabling the model to resolve ambiguities more effectively and stay current with evolving knowledge. Central to our approach is a Dynamic Vocabulary (DyVo) head, which leverages existing entity embeddings and an entity retrieval component that identifies entities relevant to a query or document. We use the DyVo head to generate entity weights, which are then merged with word piece weights to create joint representations for efficient indexing and retrieval using an inverted index. In experiments across three entity-rich document ranking datasets, the resulting DyVo model substantially outperforms state-of-the-art baselines.<|reference_end|> | arxiv | @article{nguyen2024dyvo:,
title={DyVo: Dynamic Vocabularies for Learned Sparse Retrieval with Entities},
author={Thong Nguyen and Shubham Chatterjee and Sean MacAvaney and Iain Mackie
and Jeff Dalton and Andrew Yates},
journal={EMNLP 2024},
year={2024},
archivePrefix={arXiv},
eprint={2410.07722},
primaryClass={cs.IR}
} | nguyen2024dyvo: |
arxiv-667986 | 2410.07723 | High-order discretized ACMS method for the simulation of finite-size two-dimensional photonic crystals | <|reference_start|>High-order discretized ACMS method for the simulation of finite-size two-dimensional photonic crystals: The computational complexity and efficiency of the approximate mode component synthesis (ACMS) method is investigated for the two-dimensional heterogeneous Helmholtz equations, aiming at the simulation of large but finite-size photonic crystals. The ACMS method is a Galerkin method that relies on a non-overlapping domain decomposition and special basis functions defined based on the domain decomposition. While, in previous works, the ACMS method was realized using first-order finite elements, we use an underlying hp-finite element method. We study the accuracy of the ACMS method for different wavenumbers, domain decompositions, and discretization parameters. Moreover, the computational complexity of the method is investigated theoretically and compared with computing times for an implementation based on the open source software package NGSolve. The numerical results indicate that, for relevant wavenumber regimes, the size of the resulting linear systems for the ACMS method remains moderate, such that sparse direct solvers are a reasonable choice. Moreover, the ACMS method exhibits only a weak dependence on the selected domain decomposition, allowing for greater flexibility in its choice. Additionally, the numerical results show that the error of the ACMS method achieves the predicted convergence rate for increasing wavenumbers. Finally, to display the versatility of the implementation, the results of simulations of large but finite-size photonic crystals with defects are presented.<|reference_end|> | arxiv | @article{giammatteo2024high-order,
title={High-order discretized ACMS method for the simulation of finite-size
two-dimensional photonic crystals},
author={Elena Giammatteo, Alexander Heinlein, Philip Lukas Lederer, Matthias
Schlottbom},
journal={arXiv preprint arXiv:2410.07723},
year={2024},
archivePrefix={arXiv},
eprint={2410.07723},
primaryClass={math.NA cs.NA}
} | giammatteo2024high-order |
arxiv-667987 | 2410.07725 | Towards Trustworthy Web Attack Detection: An Uncertainty-Aware Ensemble Deep Kernel Learning Model | <|reference_start|>Towards Trustworthy Web Attack Detection: An Uncertainty-Aware Ensemble Deep Kernel Learning Model: Web attacks are one of the major and most persistent forms of cyber threats, which bring huge costs and losses to web application-based businesses. Various detection methods, such as signature-based, machine learning-based, and deep learning-based, have been proposed to identify web attacks. However, these methods either (1) heavily rely on accurate and complete rule design and feature engineering, which may not adapt to fast-evolving attacks, or (2) fail to estimate model uncertainty, which is essential to the trustworthiness of the prediction made by the model. In this study, we proposed an Uncertainty-aware Ensemble Deep Kernel Learning (UEDKL) model to detect web attacks from HTTP request payload data with the model uncertainty captured from the perspective of both data distribution and model parameters. The proposed UEDKL utilizes a deep kernel learning model to distinguish normal HTTP requests from different types of web attacks with model uncertainty estimated from data distribution perspective. Multiple deep kernel learning models were trained as base learners to capture the model uncertainty from model parameters perspective. An attention-based ensemble learning approach was designed to effectively integrate base learners' predictions and model uncertainty. We also proposed a new metric named High Uncertainty Ratio-F Score Curve to evaluate model uncertainty estimation. Experiments on BDCI and SRBH datasets demonstrated that the proposed UEDKL framework yields significant improvement in both web attack detection performance and uncertainty estimation quality compared to benchmark models.<|reference_end|> | arxiv | @article{zhou2024towards,
title={Towards Trustworthy Web Attack Detection: An Uncertainty-Aware Ensemble
Deep Kernel Learning Model},
author={Yonghang Zhou, Hongyi Zhu, Yidong Chai, Yuanchun Jiang and Yezheng Liu},
journal={arXiv preprint arXiv:2410.07725},
year={2024},
archivePrefix={arXiv},
eprint={2410.07725},
primaryClass={cs.LG cs.NE}
} | zhou2024towards |
arxiv-667988 | 2410.07727 | On the Detection of Aircraft Single Engine Taxi using Deep Learning Models | <|reference_start|>On the Detection of Aircraft Single Engine Taxi using Deep Learning Models: The aviation industry is vital for global transportation but faces increasing pressure to reduce its environmental footprint, particularly CO2 emissions from ground operations such as taxiing. Single Engine Taxiing (SET) has emerged as a promising technique to enhance fuel efficiency and sustainability. However, evaluating SET's benefits is hindered by the limited availability of SET-specific data, typically accessible only to aircraft operators. In this paper, we present a novel deep learning approach to detect SET operations using ground trajectory data. Our method involves using proprietary Quick Access Recorder (QAR) data of A320 flights to label ground movements as SET or conventional taxiing during taxi-in operations, while using only trajectory features equivalent to those available in open-source surveillance systems such as Automatic Dependent Surveillance-Broadcast (ADS-B) or ground radar. This demonstrates that SET can be inferred from ground movement patterns, paving the way for future work with non-proprietary data sources. Our results highlight the potential of deep learning to improve SET detection and support more comprehensive environmental impact assessments.<|reference_end|> | arxiv | @article{jarry2024on,
title={On the Detection of Aircraft Single Engine Taxi using Deep Learning
Models},
author={Gabriel Jarry, Philippe Very, Ramon Dalmau, Daniel Delahaye and Arthur
Houdant},
journal={arXiv preprint arXiv:2410.07727},
year={2024},
archivePrefix={arXiv},
eprint={2410.07727},
primaryClass={cs.LG}
} | jarry2024on |
arxiv-667989 | 2410.07728 | Give Me a Choice: The Consequences of Restricting Choices Through AI-Support for Perceived Autonomy, Motivational Variables, and Decision Performance | <|reference_start|>Give Me a Choice: The Consequences of Restricting Choices Through AI-Support for Perceived Autonomy, Motivational Variables, and Decision Performance: Design optimizations in human-AI collaboration often focus on cognitive aspects like attention and task load. Drawing on work design literature, we propose that effective human-AI collaboration requires broader consideration of human needs (e.g., autonomy) that affect motivational variables (e.g., meaningfulness). In a simulated drone oversight experiment, participants (N=274, between-subject) faced 10 critical decision-making scenarios with varying levels of choice restrictions with an AI recommending only 1, 2, 4 or all 6 possible actions. Restricting participants to one selectable action improved task performance (with a perfect AI) but significantly reduced perceived autonomy and work meaningfulness, and these effects intensified over time. In conditions with multiple action choices, participants with higher perceived autonomy performed better. The findings underscore the importance of considering motivational factors to design successful long-term human-AI collaboration at work.<|reference_end|> | arxiv | @article{faas2024give,
title={Give Me a Choice: The Consequences of Restricting Choices Through
AI-Support for Perceived Autonomy, Motivational Variables, and Decision
Performance},
author={Cedric Faas, Richard Bergs, Sarah Sterz, Markus Langer, Anna Maria
Feit},
journal={arXiv preprint arXiv:2410.07728},
year={2024},
archivePrefix={arXiv},
eprint={2410.07728},
primaryClass={cs.HC}
} | faas2024give |
arxiv-667990 | 2410.07732 | Partitioning Trillion Edge Graphs on Edge Devices | <|reference_start|>Partitioning Trillion Edge Graphs on Edge Devices: Processing large-scale graphs, containing billions of entities, is critical across fields like bioinformatics, high-performance computing, navigation and route planning, among others. Efficient graph partitioning, which divides a graph into sub-graphs while minimizing inter-block edges, is essential to graph processing, as it optimizes parallel computing and enhances data locality. Traditional in-memory partitioners, such as METIS and KaHIP, offer high-quality partitions but are often infeasible for enormous graphs due to their substantial memory overhead. Streaming partitioners reduce memory usage to O(n), where 'n' is the number of nodes of the graph, by loading nodes sequentially and assigning them to blocks on-the-fly. This paper introduces StreamCPI, a novel framework that further reduces the memory overhead of streaming partitioners through run-length compression of block assignments. Notably, StreamCPI enables the partitioning of trillion-edge graphs on edge devices. Additionally, within this framework, we propose a modification to the LA-vector bit vector for append support, which can be used for online run-length compression in other streaming applications. Empirical results show that StreamCPI reduces memory usage while maintaining or improving partition quality. For instance, using StreamCPI, the Fennel partitioner effectively partitions a graph with 17 billion nodes and 1.03 trillion edges on a Raspberry Pi, achieving significantly better solution quality than Hashing, the only other feasible algorithm on edge devices. StreamCPI thus advances graph processing by enabling high-quality partitioning on low-cost machines.<|reference_end|> | arxiv | @article{chhabra2024partitioning,
title={Partitioning Trillion Edge Graphs on Edge Devices},
author={Adil Chhabra, Florian Kurpicz, Christian Schulz, Dominik Schweisgut,
Daniel Seemaier},
journal={arXiv preprint arXiv:2410.07732},
year={2024},
archivePrefix={arXiv},
eprint={2410.07732},
primaryClass={cs.DS}
} | chhabra2024partitioning |
arxiv-667991 | 2410.07733 | MGMapNet: Multi-Granularity Representation Learning for End-to-End Vectorized HD Map Construction | <|reference_start|>MGMapNet: Multi-Granularity Representation Learning for End-to-End Vectorized HD Map Construction: The construction of Vectorized High-Definition (HD) map typically requires capturing both category and geometry information of map elements. Current state-of-the-art methods often adopt solely either point-level or instance-level representation, overlooking the strong intrinsic relationships between points and instances. In this work, we propose a simple yet efficient framework named MGMapNet (Multi-Granularity Map Network) to model map element with a multi-granularity representation, integrating both coarse-grained instance-level and fine-grained point-level queries. Specifically, these two granularities of queries are generated from the multi-scale bird's eye view (BEV) features using a proposed Multi-Granularity Aggregator. In this module, instance-level query aggregates features over the entire scope covered by an instance, and the point-level query aggregates features locally. Furthermore, a Point Instance Interaction module is designed to encourage information exchange between instance-level and point-level queries. Experimental results demonstrate that the proposed MGMapNet achieves state-of-the-art performance, surpassing MapTRv2 by 5.3 mAP on nuScenes and 4.4 mAP on Argoverse2 respectively.<|reference_end|> | arxiv | @article{yang2024mgmapnet:,
title={MGMapNet: Multi-Granularity Representation Learning for End-to-End
Vectorized HD Map Construction},
author={Jing Yang and Minyue Jiang and Sen Yang and Xiao Tan and Yingying Li
and Errui Ding and Hanli Wang and Jingdong Wang},
journal={arXiv preprint arXiv:2410.07733},
year={2024},
archivePrefix={arXiv},
eprint={2410.07733},
primaryClass={cs.CV}
} | yang2024mgmapnet: |
arxiv-667992 | 2410.07737 | Plug-and-Play Performance Estimation for LLM Services without Relying on Labeled Data | <|reference_start|>Plug-and-Play Performance Estimation for LLM Services without Relying on Labeled Data: Large Language Model (LLM) services exhibit impressive capability on unlearned tasks leveraging only a few examples by in-context learning (ICL). However, the success of ICL varies depending on the task and context, leading to heterogeneous service quality. Directly estimating the performance of LLM services at each invocation can be laborious, especially requiring abundant labeled data or internal information within the LLM. This paper introduces a novel method to estimate the performance of LLM services across different tasks and contexts, which can be "plug-and-play" utilizing only a few unlabeled samples like ICL. Our findings suggest that the negative log-likelihood and perplexity derived from LLM service invocation can function as effective and significant features. Based on these features, we utilize four distinct meta-models to estimate the performance of LLM services. Our proposed method is compared against unlabeled estimation baselines across multiple LLM services and tasks. And it is experimentally applied to two scenarios, demonstrating its effectiveness in the selection and further optimization of LLM services.<|reference_end|> | arxiv | @article{wang2024plug-and-play,
title={Plug-and-Play Performance Estimation for LLM Services without Relying on
Labeled Data},
author={Can Wang, Dianbo Sui, Hongliang Sun, Hao Ding, Bolin Zhang, Zhiying Tu},
journal={arXiv preprint arXiv:2410.07737},
year={2024},
archivePrefix={arXiv},
eprint={2410.07737},
primaryClass={cs.PF cs.LG}
} | wang2024plug-and-play |
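A minimal sketch of the pipeline described in the abstract above: derive negative log-likelihood and perplexity from an invocation's token log-probabilities and feed them to a meta-model that predicts task performance. The particular regressor, the toy data, and the two-feature design are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def invocation_features(token_logprobs):
    """token_logprobs: log-probabilities of the tokens generated in one invocation."""
    lp = np.asarray(token_logprobs, dtype=float)
    nll = -lp.mean()              # average negative log-likelihood
    ppl = float(np.exp(nll))      # perplexity
    return np.array([nll, ppl])

# A handful of past invocations with known task scores (hypothetical values,
# used only to fit the meta-model).
past_logprobs = [[-0.2, -0.1, -0.4], [-1.3, -2.0, -0.9], [-0.5, -0.3, -0.6]]
past_scores = [0.9, 0.3, 0.7]

X = np.stack([invocation_features(lp) for lp in past_logprobs])
meta_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, past_scores)

# Estimate the performance of a new, unlabeled invocation.
new_features = invocation_features([-0.4, -0.7, -0.2]).reshape(1, -1)
print("estimated score:", meta_model.predict(new_features)[0])
```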
arxiv-667993 | 2410.07738 | Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based Federated Fine-Tuning | <|reference_start|>Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based Federated Fine-Tuning: Federated Domain Adaptation (FDA) is a Federated Learning (FL) scenario where models are trained across multiple clients with unique data domains but a shared category space, without transmitting private data. The primary challenge in FDA is data heterogeneity, which causes significant divergences in gradient updates when using conventional averaging-based aggregation methods, reducing the efficacy of the global model. This further undermines both in-domain and out-of-domain performance (within the same federated system but outside the local client). To address this, we propose a novel framework called \textbf{M}ulti-domain \textbf{P}rototype-based \textbf{F}ederated Fine-\textbf{T}uning (MPFT). MPFT fine-tunes a pre-trained model using multi-domain prototypes, i.e., pretrained representations enriched with domain-specific information from category-specific local data. This enables supervised learning on the server to derive a globally optimized adapter that is subsequently distributed to local clients, without compromising data privacy. Empirical results show that MPFT significantly improves both in-domain and out-of-domain accuracy over conventional methods, enhancing knowledge preservation and adaptation in FDA. Notably, MPFT achieves convergence within a single communication round, greatly reducing computation and communication costs. To ensure privacy, MPFT applies differential privacy to protect the prototypes. Additionally, we develop a prototype-based feature space hijacking attack to evaluate robustness, confirming that raw data samples remain unrecoverable even after extensive training epochs. The complete implementation of MPFT is available at \url{https://anonymous.4open.science/r/DomainFL/}.<|reference_end|> | arxiv | @article{zhang2024enhancing,
title={Enhancing Federated Domain Adaptation with Multi-Domain Prototype-Based
Federated Fine-Tuning},
author={Jingyuan Zhang, Yiyang Duan, Shuaicheng Niu, Yang Cao, Wei Yang Bryan
Lim},
journal={arXiv preprint arXiv:2410.07738},
year={2024},
archivePrefix={arXiv},
eprint={2410.07738},
primaryClass={cs.LG cs.AI}
} | zhang2024enhancing |
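A minimal sketch of the prototype workflow outlined in the abstract above: each client uploads class-wise mean features ("prototypes") rather than raw data, optionally noised as a stand-in for differential privacy, and the server trains a lightweight adapter on the pooled prototypes. The frozen feature extractor, the mean-pooling prototype, the Gaussian noise, and the logistic-regression adapter are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def client_prototypes(features, labels, noise_std=0.0, rng=None):
    """One prototype (mean pretrained feature) per class seen on a client."""
    rng = rng if rng is not None else np.random.default_rng(0)
    protos, proto_labels = [], []
    for c in np.unique(labels):
        p = features[labels == c].mean(axis=0)
        protos.append(p + noise_std * rng.standard_normal(p.shape))  # privacy noise
        proto_labels.append(c)
    return np.stack(protos), np.array(proto_labels)

# Two clients with different data domains (hypothetical pretrained features).
rng = np.random.default_rng(0)
feats_a, labs_a = rng.normal(0, 1, (100, 32)), rng.integers(0, 5, 100)
feats_b, labs_b = rng.normal(3, 1, (100, 32)), rng.integers(0, 5, 100)

Pa, ya = client_prototypes(feats_a, labs_a, noise_std=0.1)
Pb, yb = client_prototypes(feats_b, labs_b, noise_std=0.1)

# Server side: supervised training of an adapter on the uploaded prototypes;
# the adapter (not raw data) is then distributed back to the clients.
adapter = LogisticRegression(max_iter=1000).fit(np.vstack([Pa, Pb]),
                                                np.concatenate([ya, yb]))
print("adapter classes:", adapter.classes_)
```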
arxiv-667994 | 2410.07739 | SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture | <|reference_start|>SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture: Despite many efforts, it remains a challenge to balance the training budget, downstream performance, and general capabilities of LLMs in many applications. Training the whole model for downstream tasks is expensive and can easily result in catastrophic forgetting. Parameter-efficient fine-tuning (PEFT) reduces the training cost, but it still suffers from forgetting and limits learning on the downstream tasks. To efficiently fine-tune LLMs with fewer limitations on their downstream performance while mitigating the forgetting of general capabilities, we propose a novel mixture-of-experts (MoE) framework based on Soft LoRA and Identity Mixture (SLIM), which allows dynamic routing between LoRA adapters and a skip connection, enabling the suppression of forgetting. We adopt weight-yielding with sliding clustering for better out-of-domain discrimination to enhance the routing. We also propose to convert the mixture of low-rank adapters to a model-merging formulation and introduce fast dynamic merging of LoRA adapters to preserve the general capabilities of the base model. Extensive experiments demonstrate that the proposed SLIM is comparable to state-of-the-art PEFT approaches on downstream tasks while achieving leading performance in mitigating catastrophic forgetting.<|reference_end|> | arxiv | @article{han2024slim:,
title={SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity
Mixture},
author={Jiayi Han, Liang Du, Hongwei Du, Xiangguo Zhou, Yiwen Wu, Weibo Zheng,
Donghong Han},
journal={arXiv preprint arXiv:2410.07739},
year={2024},
archivePrefix={arXiv},
eprint={2410.07739},
primaryClass={cs.LG cs.CL}
} | han2024slim: |
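A minimal sketch in the spirit of the SLIM abstract above: a router softly mixes several LoRA adapters with an identity (skip) expert on top of a frozen base linear layer, so routing probability assigned to the identity path suppresses adapter updates. The dimensions, router design, and initialization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SoftLoRAIdentityMixture(nn.Module):
    def __init__(self, d_in, d_out, rank=8, n_adapters=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.A = nn.ParameterList([nn.Parameter(torch.randn(rank, d_in) * 0.01)
                                   for _ in range(n_adapters)])
        self.B = nn.ParameterList([nn.Parameter(torch.zeros(d_out, rank))
                                   for _ in range(n_adapters)])
        # Router over n_adapters LoRA experts plus one identity (skip) expert.
        self.router = nn.Linear(d_in, n_adapters + 1)

    def forward(self, x):
        gates = torch.softmax(self.router(x), dim=-1)  # (..., n_adapters + 1)
        out = self.base(x)                             # gate 0 = identity path: no extra update
        for i, (A, B) in enumerate(zip(self.A, self.B)):
            out = out + gates[..., i + 1:i + 2] * (x @ A.t() @ B.t())
        return out

layer = SoftLoRAIdentityMixture(d_in=64, d_out=64)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```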
arxiv-667995 | 2410.07740 | The Impact of Grid Storage on Balancing Costs and Carbon Emissions in Great Britain | <|reference_start|>The Impact of Grid Storage on Balancing Costs and Carbon Emissions in Great Britain: Grid energy storage can help to balance supply and demand, but its financial viability and operational carbon emissions impact are poorly understood because of the complexity of grid constraints and market outcomes. We analyse the impact of several technologies (Li-ion and flow batteries, pumped hydro, hydrogen) on Great Britain's balancing mechanism, the main market for supply-demand balancing and congestion management. We find that, for many locations and technologies, financially optimal operation of storage for balancing can result in higher carbon emissions. For example, the extra emissions associated with a 1 MW 2-hour duration Li-ion battery in winter vary between +230 and -71 kgCO2/h. Although storage enables higher usage of renewables, it can also unlock additional demand, leading to greater use of gas. In addition, balancing services alone are presently insufficient for the financial viability of storage projects. This work highlights the need for market reform aligning financial incentives with environmental impacts.<|reference_end|> | arxiv | @article{nosratabadi2024the,
title={The Impact of Grid Storage on Balancing Costs and Carbon Emissions in
Great Britain},
author={Seyyed Mostafa Nosratabadi, Iacopo Savelli, Volkan Kumtepeli, Phil
Grunewald, Marko Aunedi, David A. Howey, Thomas Morstyn},
journal={arXiv preprint arXiv:2410.07740},
year={2024},
archivePrefix={arXiv},
eprint={2410.07740},
primaryClass={eess.SY cs.SY}
} | nosratabadi2024the |
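A toy sketch of the tension described in the abstract above: a small battery dispatched to maximise market revenue can still raise system emissions if it charges when a carbon-intensive unit is at the margin. The prices, marginal carbon intensities, and battery parameters are made-up illustrative values, and the single-price linear program is far simpler than the paper's balancing-mechanism model.

```python
import numpy as np
from scipy.optimize import linprog

T = 6                                                        # hours in the horizon
price = np.array([30., 25., 90., 100., 40., 35.])            # GBP/MWh (illustrative)
marginal_ci = np.array([450., 50., 400., 350., 50., 450.])   # kgCO2/MWh of marginal unit
p_max, e_max, eta = 1.0, 2.0, 0.9                            # 1 MW, 2 MWh, charge efficiency

# Decision variables: charge c_t and discharge d_t (both >= 0), stacked as [c; d].
c_obj = np.concatenate([price, -price])          # minimise buys minus sells
bounds = [(0, p_max)] * (2 * T)

# State of charge SOC_t = eta*sum(c) - sum(d) must stay within [0, e_max].
lower_tri = np.tril(np.ones((T, T)))
A_soc = np.hstack([eta * lower_tri, -lower_tri])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
charge, discharge = res.x[:T], res.x[T:]
revenue = price @ (discharge - charge)
extra_emissions = marginal_ci @ (charge - discharge)  # kgCO2 added at the margin
print(f"revenue: {revenue:.1f} GBP, extra emissions: {extra_emissions:.1f} kgCO2")
```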
arxiv-667996 | 2410.07742 | Design Method of a Kangaroo Robot with High Power Legs and an Articulated Soft Tail | <|reference_start|>Design Method of a Kangaroo Robot with High Power Legs and an Articulated Soft Tail: In this paper, we focus on the kangaroo, which has powerful legs capable of jumping and a soft yet strong tail. To incorporate these unique structures into a robot, we propose a design method that takes into account both the feasibility as a robot and the kangaroo-mimetic structure. Based on the kangaroo's musculoskeletal structure, we determine a robot structure that enables jumping by analyzing the muscle arrangement and through prior verification in simulation. To realize a tail capable of supporting the body, we use an articulated, elastic structure as the tail. To achieve both softness and high power output, the robot is driven by a direct-drive, high-power wire-winding mechanism, and the weight of the legs and tail is reduced by placing the motors in the torso. The developed kangaroo robot can jump with its hind legs, move its tail, and support its body using its hind legs and tail.<|reference_end|> | arxiv | @article{yoshimura2024design,
title={Design Method of a Kangaroo Robot with High Power Legs and an
Articulated Soft Tail},
author={Shunnosuke Yoshimura, Temma Suzuki, Masahiro Bando, Sota Yuzaki, Kento
Kawaharazuka, Kei Okada, and Masayuki Inaba},
journal={arXiv preprint arXiv:2410.07742},
year={2024},
doi={10.1109/IROS55552.2023.10341756},
archivePrefix={arXiv},
eprint={2410.07742},
primaryClass={cs.RO}
} | yoshimura2024design |
arxiv-667997 | 2410.07744 | Modularity maximization and community detection in complex networks through recursive and hierarchical annealing in the D-Wave Advantage quantum processing units | <|reference_start|>Modularity maximization and community detection in complex networks through recursive and hierarchical annealing in the D-Wave Advantage quantum processing units: Quantum adiabatic optimization has long been expected to outperform classical methods in solving NP-type problems. While this has been proven in certain experiments, its main applications still reside in academic problems where the size of the system to be solved would not represent an obstacle to any modern desktop computer. Here we develop a systematic procedure to find the global optima of the modularity function to discover community structure in complex networks, relying solely on pure annealers rather than hybrid solutions. We bypass the one-hot encoding constraints by hierarchically and recursively encoding binary instances of the problem that can be solved without the need to guess the exact penalties for the Lagrange multipliers. We study the variability and robustness of the annealer as a function of network size, directedness of connections, topology, and the resolution of the communities. We show that our approach produces meaningful solutions that are at least as optimal as those of state-of-the-art community detection algorithms while maintaining tractable computing times. Lastly, due to its recursive nature, the annealing process returns intermediate subdivisions, thus offering interpretable rather than black-box solutions. These \textit{dendrograms} can be used to unveil normal and pathological hidden hierarchies in brain networks, hence opening the door to clinical workflows. Overall, this represents a first step towards an applicable, practice-oriented usage of pure quantum annealing, potentially bridging two segregated communities in modern science and engineering: network science and quantum computing.<|reference_end|> | arxiv | @article{falcó-roget2024modularity,
title={Modularity maximization and community detection in complex networks
through recursive and hierarchical annealing in the D-Wave Advantage quantum
processing units},
author={Joan Falc\'o-Roget, Kacper Jurek, Barbara Wojtarowicz, Karol
Capa{\l}a, Katarzyna Rycerz},
journal={arXiv preprint arXiv:2410.07744},
year={2024},
archivePrefix={arXiv},
eprint={2410.07744},
primaryClass={physics.soc-ph cs.SI math.CO}
} | falcó-roget2024modularity |
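A toy sketch of the penalty-free binary encoding referred to in the abstract above: a two-way community split is found by maximising s^T B s over spin variables s in {-1,+1}^n, where B is the modularity matrix. A classical brute-force search over a tiny graph stands in for the D-Wave annealer, and only a single (non-recursive) bisection is shown; the recursive/hierarchical procedure and the QPU embedding are not reproduced here.

```python
import itertools
import networkx as nx
import numpy as np

def modularity_matrix(G):
    A = nx.to_numpy_array(G)
    k = A.sum(axis=1)
    two_m = k.sum()
    return A - np.outer(k, k) / two_m, two_m

def best_bisection(B, two_m):
    """Maximise Q = s^T B s / (4m) over s in {-1,+1}^n by brute force (s[0] fixed)."""
    n = B.shape[0]
    best_q, best_s = -np.inf, None
    for bits in itertools.product([-1, 1], repeat=n - 1):
        s = np.array((1,) + bits)
        q = s @ B @ s / (2 * two_m)
        if q > best_q:
            best_q, best_s = q, s
    return best_q, best_s

G = nx.barbell_graph(5, 0)   # two 5-cliques joined by a single edge
B, two_m = modularity_matrix(G)
q, s = best_bisection(B, two_m)
communities = [[v for v in G if s[v] > 0], [v for v in G if s[v] < 0]]
print(f"modularity of best split: {q:.3f}, communities: {communities}")
```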
arxiv-667998 | 2410.07745 | StepTool: A Step-grained Reinforcement Learning Framework for Tool Learning in LLMs | <|reference_start|>StepTool: A Step-grained Reinforcement Learning Framework for Tool Learning in LLMs: Despite having powerful reasoning and inference capabilities, Large Language Models (LLMs) still need external tools for real-time information retrieval or domain-specific expertise to solve complex tasks, which is referred to as tool learning. Existing tool learning methods primarily rely on tuning with expert trajectories, focusing on token-sequence learning from a linguistic perspective. However, this approach faces several challenges: 1) imitating static trajectories limits the ability to generalize to new tasks; 2) even expert trajectories can be suboptimal, and better solution paths may exist. In this work, we introduce StepTool, a novel step-grained reinforcement learning framework to improve tool learning in LLMs. It consists of two components: Step-grained Reward Shaping, which assigns rewards at each tool interaction based on tool invocation success and its contribution to the task, and Step-grained Optimization, which uses policy gradient methods to optimize the model in a multi-step manner. Experimental results demonstrate that StepTool significantly outperforms existing methods in multi-step, tool-based tasks, providing a robust solution for complex task environments. Code is available at https://github.com/yuyq18/StepTool.<|reference_end|> | arxiv | @article{yu2024steptool:,
title={StepTool: A Step-grained Reinforcement Learning Framework for Tool
Learning in LLMs},
author={Yuanqing Yu, Zhefan Wang, Weizhi Ma, Zhicheng Guo, Jingtao Zhan, Shuai
Wang, Chuhan Wu, Zhiqiang Guo, Min Zhang},
journal={arXiv preprint arXiv:2410.07745},
year={2024},
archivePrefix={arXiv},
eprint={2410.07745},
primaryClass={cs.CL}
} | yu2024steptool: |
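A minimal sketch of step-grained reward shaping and a policy-gradient update in the spirit of the StepTool abstract above: each tool interaction receives a reward combining invocation success and task contribution, and discounted returns weight a REINFORCE-style loss. The reward weights, discount factor, and the tiny stand-in policy are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def step_rewards(tool_success, contribution, final_task_reward, w=0.5):
    """One reward per tool-interaction step instead of a single terminal reward."""
    rewards = [w * s + (1 - w) * c for s, c in zip(tool_success, contribution)]
    rewards[-1] += final_task_reward       # task outcome credited at the last step
    return rewards

def returns(rewards, gamma=0.99):
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))

# Tiny stand-in policy over 4 possible tool calls, conditioned on an 8-dim state.
policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(3, 8)                 # 3 tool-interaction steps
actions = torch.tensor([1, 0, 2])
r = step_rewards(tool_success=[1, 0, 1], contribution=[0.2, 0.0, 0.6], final_task_reward=1.0)
G = torch.tensor(returns(r))

logp = F.log_softmax(policy(states), dim=-1)[torch.arange(3), actions]
loss = -(logp * G).mean()                  # step-grained policy-gradient objective
opt.zero_grad()
loss.backward()
opt.step()
print("loss:", float(loss))
```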
arxiv-667999 | 2410.07746 | Benign Overfitting in Single-Head Attention | <|reference_start|>Benign Overfitting in Single-Head Attention: The phenomenon of benign overfitting, where a trained neural network perfectly fits noisy training data but still achieves near-optimal test performance, has been extensively studied in recent years for linear models and fully-connected/convolutional networks. In this work, we study benign overfitting in a single-head softmax attention model, which is the fundamental building block of Transformers. We prove that under appropriate conditions, the model exhibits benign overfitting in a classification setting already after two steps of gradient descent. Moreover, we show conditions where a minimum-norm/maximum-margin interpolator exhibits benign overfitting. We study how the overfitting behavior depends on the signal-to-noise ratio (SNR) of the data distribution, namely, the ratio between norms of signal and noise tokens, and prove that a sufficiently large SNR is both necessary and sufficient for benign overfitting.<|reference_end|> | arxiv | @article{magen2024benign,
title={Benign Overfitting in Single-Head Attention},
author={Roey Magen, Shuning Shang, Zhiwei Xu, Spencer Frei, Wei Hu, Gal Vardi},
journal={arXiv preprint arXiv:2410.07746},
year={2024},
archivePrefix={arXiv},
eprint={2410.07746},
primaryClass={cs.LG stat.ML}
} | magen2024benign |
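For orientation, one common way such analyses formalize the objects named in the abstract above, stated here as an assumption since the abstract does not give the exact parameterization: a single-head softmax attention classifier on a token matrix X = (x_1, ..., x_T), and an SNR defined from the norms of signal and noise tokens.

```latex
% Illustrative formalization (not necessarily the paper's exact setup):
f(X; p, v) \;=\; v^{\top} X^{\top}\,\mathrm{softmax}(X p),
\qquad
\mathrm{softmax}(z)_t \;=\; \frac{e^{z_t}}{\sum_{s=1}^{T} e^{z_s}},
\qquad
\mathrm{SNR} \;=\; \frac{\lVert \mu \rVert}{\lVert \xi \rVert},
```

where $\mu$ is the signal direction carried by signal tokens and $\xi$ a typical noise token. Benign overfitting then means the trained $(p, v)$ interpolates noisy training labels while keeping near-optimal test error, which per the abstract requires a sufficiently large SNR.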
arxiv-668000 | 2410.07750 | PHODCOS: Pythagorean Hodograph-based Differentiable Coordinate System | <|reference_start|>PHODCOS: Pythagorean Hodograph-based Differentiable Coordinate System: This paper presents PHODCOS, an algorithm that assigns a moving coordinate system to a given curve. The parametric functions underlying the coordinate system, i.e., the path function, the moving frame and its angular velocity, are exact -- approximation-free -- differentiable, and sufficiently continuous. This allows for computing a coordinate system for highly nonlinear curves, while remaining compliant with autonomous navigation algorithms that require first- and second-order gradient information. In addition, the coordinate system obtained by PHODCOS is fully defined by a finite number of coefficients, which may then be used to compute additional geometric properties of the curve, such as arc-length, curvature and torsion. Therefore, PHODCOS presents an appealing paradigm for enhancing the geometrical awareness of existing guidance and navigation schemes for on-orbit spacecraft maneuvers. The PHODCOS algorithm is presented alongside an analysis of its error and approximation order, guaranteeing that the obtained coordinate system matches the given curve within a desired tolerance. To demonstrate the applicability of the resulting coordinate system, we present numerical examples in the Near Rectilinear Halo Orbit (NRHO) for the Lunar Gateway.<|reference_end|> | arxiv | @article{arrizabalaga2024phodcos:,
title={PHODCOS: Pythagorean Hodograph-based Differentiable Coordinate System},
author={Jon Arrizabalaga, Fausto Vega, Zbyn\v{e}k \v{S}\'IR, Zachary
Manchester, Markus Ryll},
journal={arXiv preprint arXiv:2410.07750},
year={2024},
archivePrefix={arXiv},
eprint={2410.07750},
primaryClass={cs.RO cs.SY eess.SY}
} | arrizabalaga2024phodcos: |
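As background for the "Pythagorean hodograph" in the title above (a standard textbook definition, not the paper's specific construction): a planar polynomial curve r(t) = (x(t), y(t)) is a PH curve when its hodograph satisfies the Pythagorean condition below, which is what makes quantities such as arc length available exactly from finitely many coefficients.

```latex
% Planar Pythagorean-hodograph (PH) condition and the exact arc length it yields:
x'(t)^2 + y'(t)^2 = \sigma(t)^2 \quad \text{for some polynomial } \sigma(t) \ge 0,
\qquad
s(t) = \int_{0}^{t} \lVert r'(\tau) \rVert \, d\tau = \int_{0}^{t} \sigma(\tau)\, d\tau .
```

Because the parametric speed $\sigma(t)$ is itself a polynomial, the arc length and the frame quantities built from it inherit the same exact, finitely parameterized form.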